ApacheBench results - which setup should I choose?

pkr Member

I ran an "ab" test (ab -n 2500 -c 3 -k https://example.com/) against my domain with two different setups.
1. Apache with MPM_prefork + PHP-fcgid + Nginx + MySQL 5.7

Concurrency Level:      3
Time taken for tests:   169.366 seconds
Complete requests:      2500
Failed requests:        2476
   (Connect: 0, Receive: 0, Length: 2476, Exceptions: 0)
Keep-Alive requests:    0
Total transferred:      51210612 bytes
HTML transferred:       49680612 bytes
Requests per second:    14.76 [#/sec] (mean)
Time per request:       203.239 [ms] (mean)
Time per request:       67.746 [ms] (mean, across all concurrent requests)
Transfer rate:          295.28 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        8   21  16.1     16     239
Processing:    49  182 106.8    162    1943
Waiting:       49  181 106.4    161    1943
Total:         61  203 114.9    181    2001

Percentage of the requests served within a certain time (ms)
  50%    181
  66%    212
  75%    235
  80%    252
  90%    311
  95%    374
  98%    488
  99%    581
 100%   2001 (longest request)

2. Apache with MPM_event + PHP-FPM + Nginx + MariaDB 10.1

Concurrency Level:      3
Time taken for tests:   268.931 seconds
Complete requests:      2500
Failed requests:        2488
   (Connect: 0, Receive: 0, Length: 2488, Exceptions: 0)
Keep-Alive requests:    0
Total transferred:      51208138 bytes
HTML transferred:       49678138 bytes
Requests per second:    9.30 [#/sec] (mean)
Time per request:       322.717 [ms] (mean)
Time per request:       107.572 [ms] (mean, across all concurrent requests)
Transfer rate:          185.95 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       15   21   5.0     20      74
Processing:   203  301  53.2    295     722
Waiting:      202  301  53.1    294     720
Total:        220  322  54.3    315     752

Percentage of the requests served within a certain time (ms)
  50%    315
  66%    343
  75%    359
  80%    368
  90%    392
  95%    416
  98%    448
  99%    472
 100%    752 (longest request)

Why are the results using PHP-fcgid better than the results using PHP-FPM?
Could MariaDB be affecting the results?

People who work sincerely are the happiest - discoverbits

Comments

  • seriesn Member, Provider
    edited May 19

    It is too late for me to go technical, but from a crapton of recent testing and debugging, the fcgi+apache2 combo is pretty solid and probably one of the best combos. Add some Redis caching with it if you can :)

  • nem Member, Provider

    (Connect: 0, Receive: 0, Length: 2476, Exceptions: 0)

    Your results are inconsistent: the response length deviates across requests, which is what those Length failures mean. Look into why the results are inconsistent. A test has no merit if it has no reproducibility.

    Keep-Alive requests: 0

    Neither implementation supports keepalive, so -k usage is meaningless. Look into why keepalive is off for both.

    NGINX w/ keepalives + PHP-FPM (static pool) would give you faster performance than Apache w/ keepalives + Event MPM + PHP-FPM (static pool) but the difference when PHP is your chokepoint is minimal if neither is properly configured. Focus less on squeezing out performance and more on configuration, then when you have one you like, figure out how to maximize throughput.
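    For reference, a "static pool" is set in the FPM pool file. A minimal sketch, assuming a stock PHP-FPM install (the path and the numbers are illustrative, not recommendations; size pm.max_children to your RAM divided by per-worker memory use):

    ```ini
    ; pool file location varies by distro, e.g. /etc/php-fpm.d/www.conf
    pm = static            ; fixed number of workers, no spawn/kill churn under load
    pm.max_children = 20   ; illustrative; tune to available RAM per worker
    pm.max_requests = 500  ; recycle workers periodically to contain leaks
    ```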

  • Chocoweb Member

    @nem said:
    Neither implementation supports keepalive, so -k usage is meaningless. Look into why keepalive is off for both.

    If the test is running with HTTPS, then -k may reduce the huge handshake overhead?

  • nem Member, Provider
    edited May 20

    @Chocoweb said:

    @nem said:
    Neither implementation supports keepalive, so -k usage is meaningless. Look into why keepalive is off for both.

    If the test is running with HTTPS, then -k may reduce the huge handshake overhead?

    Let's test it. Basic $12 high-frequency Vultr instance based off ApisCP, all fixins enabled. Stapling included.

    Simple test, putting the platform in Benchmarking mode. Establish a baseline on a simple HTML page without any additional intermediary processing...

    with keepalive:

    echo foo > /var/www/html/bar
    ab -n 1000 -c 10 -k http://testing.apisnetworks.com/bar
    
    Concurrency Level:      10
    Time taken for tests:   0.332 seconds
    Complete requests:      1000
    Failed requests:        0
    Keep-Alive requests:    996
    Total transferred:      243040 bytes
    HTML transferred:       4000 bytes
    Requests per second:    3011.02 [#/sec] (mean)
    

    without:

    Concurrency Level:      10
    Time taken for tests:   0.519 seconds
    Complete requests:      1000
    Failed requests:        0
    Total transferred:      233000 bytes
    HTML transferred:       4000 bytes
    Requests per second:    1927.62 [#/sec] (mean)
    

    Now let's test over SSL, both with and without keepalive

    with:

    Concurrency Level:      10
    Time taken for tests:   0.359 seconds
    Complete requests:      1000
    Failed requests:        0
    Keep-Alive requests:    999
    Total transferred:      243174 bytes
    HTML transferred:       4000 bytes
    Requests per second:    2788.04 [#/sec] (mean)
    
    

    Without:

    Concurrency Level:      10
    Time taken for tests:   2.317 seconds
    Complete requests:      1000
    Failed requests:        0
    Total transferred:      233000 bytes
    HTML transferred:       4000 bytes
    Requests per second:    431.65 [#/sec] (mean)
    

    In summary: without SSL, Apache is 36% slower without keepalive than with it; over SSL, it is 84% slower without keepalive.

    Do the needful and configure your HTTP server correctly.

    Edit: to note this is on a CentOS 8 instance with TLS v1.3.
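    Those percentages can be sanity-checked straight from the requests-per-second figures in the outputs above (a one-liner; the numbers are copied from the ab runs):

    ```shell
    # Slowdown without keepalive, from the requests/sec figures above
    awk 'BEGIN {
      printf "%.1f%% slower (3011.02 -> 1927.62 req/s)\n", (3011.02 - 1927.62) / 3011.02 * 100
      printf "%.1f%% slower (2788.04 -> 431.65 req/s)\n",  (2788.04 - 431.65) / 2788.04 * 100
    }'
    ```

    which prints roughly 36.0% and 84.5%, matching the summary.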

  • pkr Member

    Keepalive is enabled on my VPS.
    Changing pm=ondemand to pm=static in the FPM configuration file improved performance, but FPM+MPM_event is still not up to the mark.
    I talked with a sysadmin, and he said that if your domain gets little traffic, MPM_prefork+FCGI will do better than MPM_event+FPM. That's why, in my test,
    MPM_event+FPM finished 100% of requests in 752 ms whereas MPM_prefork+FCGI took 2001 ms. I am not sure if he is right.

    Timeout 45
    KeepAlive On
    MaxKeepAliveRequests 300
    KeepAliveTimeout 10


  • nem Member, Provider

    Keepalive is enabled on my VPS.

    Double-check your implementation. ab should report > 0 keepalives if true unless there's something screwy with your ab release.

    I talked with one sysadmin and he said that if your domain is getting small traffic, MPM_prefork+FCGI will do better than MPM_event+FPM.

    I'm not sure I agree with that from an architectural standpoint. Cloning a process image to handle a connection is slow regardless of what happens next; that's the basic operation of prefork. Threads (Event/Worker MPM) will always be faster because you don't have to flag parts of the parent as copy-on-write nor deal with IPC. PHP-FPM is a specialized implementation of the FastCGI protocol for PHP. It flows straight to PHP from Apache with a ProxyPass, nothing more is needed.
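    The ProxyPass wiring mentioned above really is just a few lines of Apache config. A sketch, assuming mod_proxy and mod_proxy_fcgi are loaded and FPM listens on a Unix socket (the socket path varies by distro and pool config):

    ```apache
    # Hand .php requests straight to PHP-FPM over a Unix socket (mod_proxy_fcgi)
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/php-fpm.sock|fcgi://localhost"
    </FilesMatch>
    ```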

  • pkr Member
    edited May 21

    @nem said:
    I'm not sure I agree with that from an architectural standpoint. Cloning a process image to handle a connection is slow regardless of what happens next; that's the basic operation of prefork. Threads (Event/Worker MPM) will always be faster because you don't have to flag parts of the parent as copy-on-write nor deal with IPC. PHP-FPM is a specialized implementation of the FastCGI protocol for PHP. It flows straight to PHP from Apache with a ProxyPass, nothing more is needed.

    I was also not convinced by his answer, so I dug deeper to find the root cause. I am not sure I found it, but my MPM_event+FPM setup is now doing better than the MPM_prefork+FCGI setup with a fraction of the CPU (~55% less CPU usage).

    I was using Ubuntu 18, and the SSL handshake was the problem: it was taking a huge amount of time. I installed Debian 9 and repeated the test; now it gives the expected results.
    For "ab -n 10000 -c 15":
    MPM_event+FPM with pm=static:

    Concurrency Level:      15
    Time taken for tests:   91.692 seconds
    Complete requests:      10000
    Failed requests:        7723
       (Connect: 0, Receive: 0, Length: 7723, Exceptions: 0)
    Keep-Alive requests:    0
    Total transferred:      997455040 bytes
    HTML transferred:       995695040 bytes
    Requests per second:    109.06 [#/sec] (mean)
    Time per request:       137.537 [ms] (mean)
    Time per request:       9.169 [ms] (mean, across all concurrent requests)
    Transfer rate:          10623.40 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:       15   68  37.3     59     624
    Processing:     4   69  68.6     58    1760
    Waiting:        3   49  56.3     39    1729
    Total:         22  137  83.6    126    1946

    Percentage of the requests served within a certain time (ms)
      50%    126
      66%    151
      75%    167
      80%    179
      90%    213
      95%    244
      98%    288
      99%    349
     100%   1946 (longest request)

    MPM_prefork+FCGI:

    Concurrency Level:      15
    Time taken for tests:   111.997 seconds
    Complete requests:      10000
    Failed requests:        9718
       (Connect: 0, Receive: 0, Length: 9718, Exceptions: 0)
    Keep-Alive requests:    0
    Total transferred:      967098088 bytes
    HTML transferred:       965338088 bytes
    Requests per second:    89.29 [#/sec] (mean)
    Time per request:       167.995 [ms] (mean)
    Time per request:       11.200 [ms] (mean, across all concurrent requests)
    Transfer rate:          8432.69 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        8   72  67.2     52     756
    Processing:     4   95  95.3     71    1394
    Waiting:        0   83  87.0     61    1375
    Total:         15  168 135.8    133    1627

    Percentage of the requests served within a certain time (ms)
      50%    133
      66%    174
      75%    206
      80%    228
      90%    306
      95%    399
      98%    528
      99%    689
     100%   1627 (longest request)

    I tried different values of n and c; for every combination, MPM_event+FPM was the winner.
    I am not an OS expert, so I cannot say what the problem with the SSL handshake on Ubuntu 18 was. But I am going back to Debian 9; I will miss Apache 2.4.38, as Debian 9 has 2.4.25.

