Load testing

squibs Member

Just wondering how best to load test to simulate 10k simultaneous visitors, given that this is quite a different metric than 10k simultaneous requests. Anyone got any tools and/or guidelines? I've tried loadimpact before, but it's pricey and I'm not sure how well it compares to a load from real users.
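A back-of-envelope way to see that difference (my own sketch, with made-up think/response times): by Little's law, the number of requests in flight is the arrival rate times the service time, so 10k active visitors who spend most of their session reading generate far fewer simultaneous requests.

```python
# Rough sketch (hypothetical numbers, not from any tool): visitors vs requests.
def concurrent_requests(visitors, think_time_s, response_time_s):
    """Little's law: in-flight requests = arrival rate * service time.
    Each visitor issues one request every (think + response) seconds."""
    rate = visitors / (think_time_s + response_time_s)  # requests/sec
    return rate * response_time_s  # requests in flight at any instant

# 10k visitors, 30s average think time, 500ms average response time
print(round(concurrent_requests(10_000, 30.0, 0.5)))  # about 164
```

So a test that hammers 10k raw connections can be a very different (and much harsher) workload than 10k browsing users.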

¦̵̱ ̵̱ ̵̱ ̵̱ ̵̱(̢ ̡͇̅└͇̅┘͇̅ (▤8כ−◦

Comments

  • graphic Member

    Acunetix maybe?


  • CConner Member, Provider
    edited May 16

    https://loader.io allows you to test with up to 10k clients for free as many times as you like.

  • Ympker Member

    Paessler's webstress software (free) can simulate up to 4k users per Windows device/VPS. So if you run it on 3 Windows Server VPSes you could get up to 12k.

  • Zerpy Member

    https://www.redline13.com/blog/ uses Amazon spot instances, so it's super cheap :-)

    In their slider they even list some examples, like a JMeter test for 10k users on 100 m3.medium servers, for a total of about $6-7.

    When using the tool, it tries to select the optimal instance size and count for your settings.

  • squibs Member

    Thanks folks. Some great solutions I had never heard of.


  • eva2000 Member

    Remember that many load testers are HTTP/1.1-only and don't support HTTP/2 over HTTPS. So if you're load testing HTTPS with one of those, keep in mind you'll be testing HTTP/1.1 HTTPS, not HTTP/2 HTTPS. Lots of web servers are now moving to HTTP/2 HTTPS, so testing should probably account for that. The only HTTP/2-capable load tester I know of is nghttp2's h2load: https://nghttp2.org/documentation/h2load-howto.html

    I use h2load for HTTP/2 HTTPS load testing, e.g. Caddy vs Nginx HTTP/2 HTTPS load testing: https://community.centminmod.com/threads/caddy-http-2-server-benchmarks-part-2.12873/

    For HTTP/1.1 load testing I use siege and my forked version of wrk at https://github.com/centminmod/wrk/tree/centminmod

    Also check out locust.io (you need to set it up yourself, though). I did some tests at http://wordpress7.centminmod.com/132/wordpress-super-cache-benchmark-locust-io-load-testing/ - with Vultr and Packet.net bare-metal dedicated servers on hourly billing you could whip up a good cluster for locust.io.

    There used to be Blitz.io, but it has shut down now, e.g. http://wordpress7.centminmod.com/186/php-7-0-1-redis-caching-for-wordpress/
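    The user-simulation pattern that tools like locust.io implement can be sketched in a few lines of stdlib Python (a hypothetical stub `fetch()` stands in for a real HTTP call; names and numbers here are mine, not from any tool):

    ```python
    # Minimal sketch of the load-generator pattern: N "users", each a
    # thread that alternates a request with think time.
    import threading
    import time

    def fetch(url):
        time.sleep(0.01)  # stand-in for a real HTTP request
        return 200

    def user(url, requests_per_user, think_time_s, results):
        for _ in range(requests_per_user):
            results.append(fetch(url))  # list.append is thread-safe in CPython
            time.sleep(think_time_s)

    def run_load(url, users=5, requests_per_user=3, think_time_s=0.01):
        results = []
        threads = [
            threading.Thread(
                target=user, args=(url, requests_per_user, think_time_s, results)
            )
            for _ in range(users)
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results

    print(len(run_load("http://example.test/")))  # 5 users x 3 requests = 15
    ```

    A real tool adds distributed workers, ramp-up, and latency percentiles on top of this, which is why a dedicated cluster helps at 10k users.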

    * Centmin Mod Project (HTTP/2 support + ngx_pagespeed + Nginx Lua + Vhost Stats)
    * Centmin Mod LEMP Stack Quick Install Guide
  • qtwrk Member

    @eva2000 said:

    I use h2load for HTTP/2 HTTPS load testing, e.g. Caddy vs Nginx HTTP/2 HTTPS load testing: https://community.centminmod.com/threads/caddy-http-2-server-benchmarks-part-2.12873/

    Just installed h2load, but I noticed something:

    When I curl it or ab it, the transfer size is around 500 KB, but h2load only reports about 12 KB of traffic.

    my command was

    h2load -n1 -c1 -m1 https://XXXX/

    Could you please give me a hint about it?


  • eva2000 Member
    edited May 17

    Got specific example commands and output to compare?

    HTTP/2 HTTPS has HPACK header compression, unlike HTTP/1.1 HTTPS, and if you use tools without HTTP/2 support (like ab, or some curl builds) you'd be connecting over HTTP/1.1 HTTPS, not HTTP/2 HTTPS.

    Though the ab and h2load transfers look about right to me for non-gzip requests:

    h2load --version
    h2load nghttp2/1.33.0-DEV
    

    ab = total 43180 bytes, of which 42460 bytes were HTML

    ab -c1 -n1 https://domain.com/
    This is ApacheBench, Version 2.3 <$Revision: 1796539 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking domain.com (be patient).....done
    
    
    Server Software:        cloudflare
    Server Hostname:        domain.com
    Server Port:            443
    SSL/TLS Protocol:       TLSv1.2,ECDHE-ECDSA-AES128-GCM-SHA256,256,128
    TLS Server Name:        domain.com
    
    Document Path:          /
    Document Length:        42460 bytes
    
    Concurrency Level:      1
    Time taken for tests:   0.116 seconds
    Complete requests:      1
    Failed requests:        0
    Total transferred:      43180 bytes
    HTML transferred:       42460 bytes
    Requests per second:    8.63 [#/sec] (mean)
    Time per request:       115.893 [ms] (mean)
    Time per request:       115.893 [ms] (mean, across all concurrent requests)
    Transfer rate:          363.85 [Kbytes/sec] received
    

    h2load = traffic total 43044 bytes, of which 462 bytes are headers and 42461 bytes are data

    h2load -n1 -c1 https://domain.com/
    starting benchmark...
    spawning thread #0: 1 total client(s). 1 total requests
    TLS Protocol: TLSv1.2
    Cipher: ECDHE-ECDSA-CHACHA20-POLY1305
    Server Temp Key: X25519 253 bits
    Application protocol: h2
    progress: 100% done
    
    finished in 119.23ms, 8.39 req/s, 352.56KB/s
    requests: 1 total, 1 started, 1 done, 1 succeeded, 0 failed, 0 errored, 0 timeout
    status codes: 1 2xx, 0 3xx, 0 4xx, 0 5xx
    traffic: 42.04KB (43044) total, 462B (462) headers (space savings 25.48%), 41.47KB (42461) data
                         min         max         mean         sd        +/- sd
    time for request:   103.38ms    103.38ms    103.38ms         0us   100.00%
    time for connect:    14.88ms     14.88ms     14.88ms         0us   100.00%
    time to 1st byte:   113.35ms    113.35ms    113.35ms         0us   100.00%
    req/s           :       8.43        8.43        8.43        0.00   100.00%
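    
    For what it's worth, the two header figures are consistent. This quick sanity check (my arithmetic, not tool output) recovers the pre-HPACK header size from h2load's "space savings 25.48%" line and compares it with ab's HTTP/1.1 header overhead:

    ```python
    # Sanity-check the h2load numbers from the run above.
    compressed_headers = 462   # header bytes h2load reported
    savings = 0.2548           # "space savings" fraction h2load printed
    uncompressed = compressed_headers / (1 - savings)
    print(round(uncompressed))  # ~620 bytes of headers before HPACK

    # ab's HTTP/1.1 run: total transferred minus HTML = header overhead.
    print(43180 - 42460)        # 720 bytes, same ballpark
    ```

    So the big gap you'd see between tools usually comes from gzip on the response body, not from HPACK, which only shaves a few hundred bytes of headers here.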
    