HTTP/2 HTTPS Benchmarking Litespeed vs Nginx - make use of your idle servers

Thought this would be of interest to all the folks here with idle servers doing nothing :) I came across a WHT thread at https://www.webhostingtalk.com/showthread.php?t=1775139 asking whether the HTTP/2 HTTPS benchmarks posted at https://http2benchmark.org/ are real, and I chimed in on that thread with my thoughts.

TLDR

  • They created a GitHub repo with all the scripts needed to install both the client- and server-side applications used for benchmarking, including litespeed and nginx. I only tested on CentOS 7, but they support Ubuntu too. Repo at https://github.com/http2benchmark/http2benchmark
  • I wasn't happy with the test parameters and configurations used for the Litespeed vs Nginx comparison. They're testing a slightly more optimised Litespeed 5.4 configuration than the one a Litespeed 5.4 install uses out of the box, versus nginx.org repo stable Nginx 1.16 with plain defaults, which aren't optimal and miss some key performance-related options. There are also no TCP/kernel optimisations on either the server or client box out of the box (see the sysctl sketch after this list).
  • So I forked their repo and extended their script/suite at https://github.com/centminmod/http2benchmark/tree/extended-tests - the additions are explained in the extended-tests branch readme.
  • Server-to-client tests may be constrained by the network connectivity of your VPS, i.e. 250Mbit/s to 1Gbit/s, compared to testing with a 40+ Gbps pipe.
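
For context on that last point, here's a minimal sketch of the kind of TCP/kernel tuning the stock images lack. The values are illustrative starting points only, not what the http2benchmark scripts apply, so adjust them for your own kernel, RAM and NIC:

    # Illustrative TCP/kernel tuning for the benchmark client/server boxes
    # (example values only - NOT applied by http2benchmark out of the box;
    #  persist them in /etc/sysctl.d/ if they help)
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # more ephemeral ports for h2load connections
    sysctl -w net.core.rmem_max=16777216                  # larger receive socket buffers
    sysctl -w net.core.wmem_max=16777216                  # larger send socket buffers
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
    sysctl -w net.core.somaxconn=65535                    # bigger accept queue
    sysctl -w net.core.netdev_max_backlog=65535           # bigger NIC backlog queue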

So benchmark away, folks! ^_^


Comments

  • Waiting for some results. The suite is too heavy for my tiny idle servers.

  • What's the conclusion?

  • I don't understand what test you want here. Hosing our providers' internet pipes won't tell us anything about the software being benchmarked. Hetzner Cloud no longer has 10 Gbps uplinks, apparently. Is there anything we can test that will reveal anything not already known? If Litespeed is beating nginx somehow, can we fix it? Should we really even care about nginx these days? Litespeed is more in the Apache vein, I thought. So maybe there is a way to graft an async approach onto Apache.

    Thanked by 1kkrajk
  • LeviLevi Member

    @eva2000 Is there any chance that you have some sort of fetish for benchmarking servers and there is, let's say, 'video' material about what's happening with you during a benchmark and after it?

    Thanked by 2kkrajk desperand
  • eva2000 Veteran
    edited August 2019

    @cybertech said:
    What's the conclusion?

    Litespeed 5.4 is faster than Litespeed 5.3 and faster than the distro-installed Nginx 1.16 they tested, which runs a default, non-optimal configuration. But Nginx can be configured and tuned to match Litespeed in some usage scenarios if you know how to configure Nginx. If you don't, then Litespeed 5.4 is probably the better fit.

    willie said: Is there anything we can test that will reveal anything not already known?

    True to some extent. I'm limited to 1Gbps, so I thought some folks here, including web hosts with access to >1-40Gbps pipes, might want to test, especially with my forked version, which adds a fairer comparison using a more optimally configured Nginx wordpress setup (the coachbloggzip test target) and shows a much closer race between Litespeed 5.4 and Nginx when properly configured.

    @LTniger said:
    @eva2000 Is there any chance that you have some sort of fetish for benchmarking servers and there is, let's say, 'video' material about what's happening with you during a benchmark and after it?

    ^_^ wouldn't you like to know :grin:

    Example from https://www.webhostingtalk.com/showthread.php?t=1775139&p=10167968#post10167968:

    The coachbloggzip test target outline:
    coachbloggzip - a pre-compressed (gzip) static HTML version of a wordpress OceanWP Coach theme test blog, simulating wordpress cache plugins which do full-page static HTML caching, i.e. the Cache Enabler wordpress plugin + Autoptimize wordpress plugin + Autoptimize Gzip companion plugin. That combo allows a wordpress site to do full-page static HTML caching with pre-compressed gzipped static assets for HTML, CSS and JS, which can leverage Nginx's gzip_static directive.
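
    To illustrate the gzip_static side of that, a rough sketch (the paths and directives below are illustrative, not the actual test target config, and gzip_static needs the ngx_http_gzip_static_module compiled in): pre-compress the static assets once, then let Nginx serve the .gz copies directly.

    # Pre-compress static HTML/CSS/JS next to the originals (originals are kept)
    find /var/www/coachblog -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) \
      -exec sh -c 'gzip -9 -c "$1" > "$1.gz"' _ {} \;

    # Then, in the Nginx server/location block for the site (illustrative directives):
    #   gzip_static on;    # serve foo.html.gz etc. instead of gzipping on the fly
    #   gzip_vary   on;    # emit Vary: Accept-Encoding for intermediate caches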

    h2load - coachbloggzip
    lsws 5.4        finished in       5.80 seconds,   17245.50 req/s,     108.85 MB/s,          0 failures
    nginx 1.17.2    finished in       5.88 seconds,   17020.40 req/s,     109.60 MB/s,          0 failures
    
    h2load-low - coachbloggzip
    lsws 5.4        finished in       0.40 seconds,   12396.10 req/s,      78.29 MB/s,          0 failures
    nginx 1.17.2    finished in       0.41 seconds,   12317.60 req/s,      79.33 MB/s,          0 failures
    
    h2load-m80 - coachbloggzip
    lsws 5.4        finished in       5.81 seconds,   17208.90 req/s,     108.62 MB/s,          0 failures
    nginx 1.17.2    finished in       6.32 seconds,   15815.80 req/s,     101.84 MB/s,          0 failures
    
    Thanked by 1mrTom
  • @cybertech said:
    What's the conclusion?

    I've tried to install the test suite on my Debian 9/Ubuntu 16.04 VPS but it failed.
    So I've spun up two CentOS 7 VMs on LunaNode, since I have quite a few dollars of credit there.
    The script of "coachblog" is broken, so there's no result of it.
    Here's my result.

    https://docs.google.com/spreadsheets/d/1pYBbvBdErKOO4vkrZmueZY_iWwT4Js8RnuJ1LknJmrk/edit?usp=sharing

  • Chocoweb said: The script of "coachblog" is broken, so there's no result of it.

    Here's my result.

    will double check thanks

  • @eva2000 said:

    Chocoweb said: The script of "coachblog" is broken, so there's no result of it.

    Here's my result.

    will double check thanks

    Should be some issue of file path. During server install, the script says cp: path not found (sth like that)
    I forgot to save the server install log, so sorry

  • eva2000 Veteran
    edited August 2019

    Chocoweb said: Should be some issue of file path. During server install, the script says cp: path not found (sth like that)
    I forgot to save the server install log, so sorry

    No worries - it was due to a typo in the download directory name :)

    So I updated my http2benchmark extended-tests branch fork at https://github.com/centminmod/http2benchmark/tree/extended-tests to fix the setup/server/server.sh setup for the coachblog and coachbloggzip wordpress test targets. Update server.sh and re-run it, and it should reinstall everything. Ignore the error about httpd/apache - that comes from the original http2benchmark script and appears because the CentOS 7 tests don't install Apache right now.

    There's also a fix for setup/client/client.sh, so re-run that on the client side (a quick sketch of the update steps is below).

    Example results: https://gist.github.com/centminmod/6980694c38dc39c5fc9325b581cfd036
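
    If you already cloned the fork, picking up the fixes might look roughly like this (the clone path is just a placeholder - use wherever you cloned the repo):

    # On both boxes: pull the latest extended-tests branch of the fork
    cd /opt/http2benchmark          # placeholder clone path - adjust to your own
    git fetch origin
    git checkout extended-tests
    git pull origin extended-tests

    # then re-run the relevant setup script
    bash setup/server/server.sh     # on the test server
    bash setup/client/client.sh     # on the benchmark client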

  • jsg Member, Resident Benchmarker

    First thanks a lot for your efforts.

    But I don't care about it and don't even take it seriously. Reasons (among others):

    • I don't know what a DO droplet means. E.g., does it have dedicated cores? I guess not. If so, that taints your benchmarks.
    • I don't care about versions of an http server that are not the available stable ones provided by the major distros.
    • I don't care about config optimization. The large majority uses standard configs with adaptations for their site.
    • I don't see progress in adding multiple small static files.

    Sure, you or I will massage the config as well as the ./configure options to get the optimum for any given site and constellation, but most users don't. Keep in mind for whom e.g. Ubuntu or CentOS are made.

    And then there is the question of http/2. I experimented a lot based on the big promises from the http/2 crowd, but nowadays my sites are back on 1.1 because (among other reasons) I strongly dislike the "httpS only and everywhere!" hype and because, in fact, http/2 is not significantly faster than http/1.1. In fact (due to the httpS hype) http/2 sites are often slower.

    As for your benchmark, I absolutely do not intend to be harsh, but I couldn't help but get the impression that the title of your benchmark should be "Desperate attempts to somehow make nginx vs OLS look better".
    Well noted, I run both (and current versions too) on diverse servers (dedi and VPS) and I see OLS consistently beat nginx, often brutally (e.g. with dynamic PHP sites).

    About the only thing where I perceive nginx to be better is config syntax. Apache (XML) syntax is a PITA and wastes space. On the other hand, OLS comes with some kind of config and stats GUI.

    So in summary, when setting up new servers I tend to go with OLS - and with good reason. Serving e.g. WP much faster is one of the reasons.

  • Hxxx Member

    DO = Digital Ocean.
    Droplet = VPS.

  • jsg Member, Resident Benchmarker

    @Hxxx said:
    DO = Digital Ocean.
    Droplet = VPS.

    I know that. But VPS come in many flavours and usually don't have a dedicated core. And even if they do they still are a poor base for that type of benchmarking. When not comparing VPSs but rather software and in a performance vs performance test, one should have a stable, solid base (read: dedi) and one should properly describe the base (e.g. what kind of disks, controller, relevant processor flags, etc).

  • Glad to see LiteSpeed performing so well

  • eva2000 Veteran
    edited August 2019

    @jsg said:

    As for your benchmark, I absolutely do not intend to be harsh, but I couldn't help but get the impression that the title of your benchmark should be "Desperate attempts to somehow make nginx vs OLS look better".
    Well noted, I run both (and current versions too) on diverse servers (dedi and VPS) and I see OLS consistently beat nginx, often brutally (e.g. with dynamic PHP sites).

    About the only thing where I perceive nginx to be better is config syntax. Apache (XML) syntax is a PITA and wastes space. On the other hand, OLS comes with some kind of config and stats GUI.

    So in summary, when setting up new servers I tend to go with OLS - and with good reason. Serving e.g. WP much faster is one of the reasons.

    Yes, litespeed's LSAPI-based PHP is definitely faster than PHP-FPM for non-cached dynamic PHP. The purpose of my forked benchmarks, as mentioned in the git repo readme and the WHT thread, is to highlight that yes, litespeed can be faster, but depending on how nginx/php-fpm is installed and configured it can be a much closer race. Case in point: my openlitespeed vs Centmin Mod WordPress benchmarks with a similar WordPress cache setup, part 2, with part 1 linked at https://community.centminmod.com/threads/wordpress-webpagetest-pagespeed-comparison-for-cyberpanel-1-7-rc-openlitespeed-vs-centmin-mod-lemp.15211/. Of course, those benchmarks are fairly old now, so I will eventually revisit such tests as I plan to add litespeed/openlitespeed support to the Centmin Mod stack as well.

    And specifically regarding pre-gzip and nginx gzip_static tests https://community.centminmod.com/threads/wordpress-webpagetest-pagespeed-comparison-for-cyberpanel-1-7-rc-openlitespeed-vs-centmin-mod-lemp.15211/#post-65227

  • jsg Member, Resident Benchmarker
    edited August 2019

    @eva2000

    I appreciate you taking my post so constructively as I did indeed in no way intend to attack you.

    Notes:

    I don't care about LSAPI and I don't believe in miracles or nonsensical optimizations. The only real and effective way to optimize PHP is to get rid of it and to use e.g. lua - and not to squeeze out some more % with trickery (like LSAPI).
    So my WP runs with php-fpm and not with LSAPI.

    I didn't look closely at the code of OLS or of nginx; I merely fixed some small issues and things I disliked. So I can't make a solid statement as to the "why?", but I can clearly state that, looking at the current official versions (stable and available in distros), OLS is clearly faster with dynamic content (read: PHP sh_t like WP). One likely candidate would be OLS running a better (tighter, better controlled) event loop. At least that's what my tests with libev and libuv suggest. One striking example is dynamically allocating structures (typically per connection) vs. pre-allocating them and managing them yourself. I saw major performance differences, and I have optimized some software projects considerably (in terms of performance) merely by enhancing event loop management and handling.

    Be that as it may, using what comes pre-packaged by the distros and configuring the servers reasonably smartly - which is probably what could be called the positive case for most actual installations out there - OLS clearly outperforms nginx. And well noted, I did not like coming to that conclusion, because I actually like nginx and would have loved for it to come out at least close to OLS.

    And that - the 85% of users/installations - is IMO the relevant measure, not some hand-tuned setup.

    That said, all those benchmarks are probably in vain anyway because, for most users, even a standard and not exactly smart config is plenty good enough, simply because the vast majority don't have websites needing to serve more than a couple of dozen requests per second, if even that. So 97+% of all nginx lovers can stay with nginx anyway and need not care about OLS being quite a bit faster.

    Thanked by 1Daniel15
  • eva2000 Veteran
    edited August 2019

    Litespeed LSAPI PHP isn't trickery - I used it on production Drupal and WordPress sites with 500,000+ unique IP visitors/day, years back before I started developing the Centmin Mod stack, and there was a definite performance gain for dynamic non-cached PHP requests.

    Centmin Mod Nginx is built using jemalloc instead of regular system glibc malloc, and from my own tests it suffers less from the performance overhead of kernel-level Meltdown and Spectre mitigation fixes (roughly 5% overhead with jemalloc vs 15% with glibc). Supporting GCC 7/8/9 and Clang 6/7/8 gives another 5-30% boost depending on the CPU pairing. But yes, most folks won't have the traffic to properly differentiate between apache, litespeed/openlitespeed or nginx in their generic default configurations or distribution install methods. My view is skewed as I work with and optimize some of the largest forums (Centmin Mod powers 10% of the largest xenforo forums online) and WordPress-based web sites on the internet, so every optimisation cumulatively adds up and more or less gets poured into how Centmin Mod is developed and configured for Nginx/PHP-FPM and MariaDB MySQL (and eventually for my litespeed and openlitespeed integration) :smile:
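
    For the curious, linking a source-built Nginx against jemalloc commonly looks something like this - a sketch only, not the actual Centmin Mod build (which adds many more modules and flags), and it assumes the EPEL repo for the jemalloc packages:

    # Build Nginx linked against jemalloc instead of the default glibc malloc
    yum -y install jemalloc jemalloc-devel gcc make pcre-devel zlib-devel openssl-devel
    curl -O https://nginx.org/download/nginx-1.16.1.tar.gz
    tar xf nginx-1.16.1.tar.gz && cd nginx-1.16.1
    ./configure --with-http_ssl_module --with-http_v2_module \
                --with-ld-opt='-ljemalloc'    # link the jemalloc allocator
    make -j"$(nproc)" && make install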

  • jsg Member, Resident Benchmarker

    @eva2000 said:
    Litespeed LSAPI PHP isn't trickery

    Well, in my view it is trickery. First because it tries to squeeze out a bit more performance of a technology (PHP) that itself is a major problem. Plus unlike fastCGI it's not well supported on all unix platforms and not even on all linux distros.

    Centmin Mod Nginx is built using jemalloc instead of regular system glibc ...

    So? There are even faster allocators but that's not the point. Of course one can somehow massage a server to be faster -but- that's not what most users do nor can they do it.
    If you really want, we can go further along that line, but you won't win, because I have written servers that just blow anything you have out of the water. Unfortunately, though, those programs are bespoke, made for a single customer, and not available to others - just like highly optimized http + PHP solutions like the one you created.

    My view is skewed as I work with and optimize some of the largest forums (Centmin Mod powers 10% of the largest xenforo forums online) and WordPress-based web sites on the internet, so every optimisation cumulatively adds up and more or less gets poured into how Centmin Mod is developed and configured for Nginx/PHP-FPM and MariaDB MySQL (and eventually for my litespeed and openlitespeed integration) :smile:

    Your view is also skewed because you are bound to a certain stack, and all you do is squeeze out performance of that stack - but that stack is poor no matter how nicely you massage it. It is poor because it's centered around PHP.

    Again, I respect your work and your benchmark but I wanted to bring up the question of usefulness for the many who have to or want to live with whatever apt-get or yum delivers to them.

  • eva2000 Veteran
    edited August 2019

    jsg said: I know that. But VPS come in many flavours and usually don't have a dedicated core. And even if they do they still are a poor base for that type of benchmarking. When not comparing VPSs but rather software and in a performance vs performance test, one should have a stable, solid base (read: dedi) and one should properly describe the base (e.g. what kind of disks, controller, relevant processor flags, etc).

    Yes, dedicated servers would be better for more accurate benchmarks compared to VPSes, though I only work with what I have access to :)

    jsg said: Well, in my view it is trickery. First because it tries to squeeze out a bit more performance of a technology (PHP) that itself is a major problem. Plus unlike fastCGI it's not well supported on all unix platforms and not even on all linux distros.

    Isn't any optimisation of any kind classed as 'squeezing out' more performance within the confines of the framework you have to work with? Guess you can call it whatever you want. But if such performance improvements weren't important, then we would not see the performance progression from PHP 4 to PHP 5 to PHP 7, or from the MySQL 3.23.x days to MySQL in all its current glory. Of course you can say NoSQL databases for some workloads or non-PHP languages are better - but if you have to work within the confines of your framework requirements, then you have to use the best tools within their class for the specific task, i.e. PHP for PHP web apps.

    jsg said: If you really want, we can go further along that line, but you won't win, because I have written servers that just blow anything you have out of the water. Unfortunately, though, those programs are bespoke, made for a single customer, and not available to others - just like highly optimized http + PHP solutions like the one you created.

    jsg said: Your view is also skewed because you are bound to a certain stack, and all you do is squeeze out performance of that stack - but that stack is poor no matter how nicely you massage it. It is poor because it's centered around PHP.

    Don't disagree with you - ages back I tested a nodejs-based cache web server setup which served static files at least 2x faster than Nginx. And that's why Nginx with gzip_static and other tweaks to offload PHP requests to static requests works to close the gap with Litespeed in the above http2benchmark coachbloggzip wordpress test scenario; highlighting that is the whole purpose of my forked http2benchmark extended tests, which were run against Nginx's own yum/apt repo distributed versions.

    But as I stated, if you are running a PHP web app i.e. wordpress, then you use what you're confined to i.e. PHP. That is as much a reality as the statement that most folks will use whatever Linux distro provided versions of web servers as opposed to rolling their own customised stack. But I guess you and I aren't most folks. Which is why benchmarks like above are required as most folks won't realise such differences unless they're highlighted and discussed (like we're doing right now) :) So benchmarks are not entirely useless :D If benchmarks weren't done and shared like above, then regular folks would be confined to distro default or non-optimal setups and not learn anything about optimisation/tweaks or 'squeezing' out more performance where possible and such tweaks would be contained to folks like us only :) With that said, I would love to see your custom web server performance/benchmark comparisons if you have any - I love reading about such (even if you remove some of the more privy details) as you probably have gathered :)

  • jsg Member, Resident Benchmarker

    @eva2000 said:
    Yes, dedicated servers would be better for more accurate benchmarks compared to VPSes, though I only work with what I have access to :)

    I understand well that one has to work with what one has available, but still:

    • you certainly have some system available at home
    • when trying to measure anything but quite large differences one absolutely needs a stable base -> a dedi, which can also be any hardware at home capable of carrying the load.
      Not respecting that leads to meaningless results.

    Isn't any optimisation of any kind classed as 'squeezing out' more performance within the confines of the framework you have to work with? Guess you can call it whatever you want. But if such performance improvements weren't important, then we would not see the performance progression from PHP 4 to PHP 5 to PHP 7, or from the MySQL 3.23.x days to MySQL in all its current glory. Of course you can say NoSQL databases for some workloads or non-PHP languages are better - but if you have to work within the confines of your framework requirements, then you have to use the best tools within their class for the specific task, i.e. PHP for PHP web apps.

    No. Using e.g. Hack (a "compiled PHP") is not trickery. Also, using PHP 7 instead of PHP 5.x is not trickery. But using a given constellation and some not widely compatible hacks, like squeezing a bit more performance out over fastCGI, is trickery in my book.

    Don't disagree with you - ages back I tested a nodejs-based cache web server setup which served static files at least 2x faster than Nginx. And that's why Nginx with gzip_static and other tweaks to offload PHP requests to static requests works to close the gap with Litespeed

    I disagree. Both nginx and OLS can use gzipping and both can cache dynamically created content (but OLS's caching is a bit better)

    But as I stated, if you are running a PHP web app i.e. wordpress, then you use what you're confined to i.e. PHP. That is as much a reality as the statement that most folks will use whatever Linux distro provided versions of web servers as opposed to rolling their own customised stack. But I guess you and I aren't most folks. Which is why benchmarks like above are required as most folks won't realise such differences unless they're highlighted and discussed (like we're doing right now) :) So benchmarks are not entirely useless :D If benchmarks weren't done and shared like above, then regular folks would be confined to distro default or non-optimal setups and not learn anything about optimisation/tweaks or 'squeezing' out more performance where possible and such tweaks would be contained to folks like us only :)

    I disagree in part because a benchmark without practical relevance is of little use.

    With that said, I would love to see your custom web server performance/benchmark comparisons if you have any - I love reading about such (even if you remove some of the more privy details) as you probably have gathered :)

    Won't happen because I almost always have to sign NDAs (which is perfectly normal in my field). But it's not at all complicated to understand when one considers what happens once PHP (or Python or even Lua) are out of the game and everything is done by compiled code and using a good basic structure (like worker threads and AIO, depending on the job).
    And even if I would show you code it would be pretty worthless because most of it is not in widely known languages and also has a rather high formal part (e.g. for static analysis, passing to Z3, etc).

  • eva2000 Veteran
    edited August 2019

    jsg said: I disagree. Both nginx and OLS can use gzipping and both can cache dynamically created content (but OLS's caching is a bit better)

    Yeah, that comes down to OpenLiteSpeed/Litespeed also having a user-configurable small static file memory-mapped cache, usually for <4KB files, with 20-60MB allocated to that cache by the out-of-the-box defaults. Nginx only has file-mapped caching. So it really depends on the size of the static file you're testing too.
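
    For reference, the closest knob on the Nginx side is open_file_cache, which caches open file descriptors and metadata rather than file contents (contents still come from the OS page cache). A sketch with illustrative values dropped into an http-level include:

    # Illustrative Nginx open_file_cache tuning (descriptor/metadata cache only)
    printf '%s\n' \
      'open_file_cache          max=10000 inactive=60s;  # cache up to 10k open file descriptors' \
      'open_file_cache_valid    120s;                    # revalidate cached entries every 120s' \
      'open_file_cache_min_uses 2;                       # only cache files hit at least twice' \
      'open_file_cache_errors   on;                      # also cache lookup errors' \
      > /etc/nginx/conf.d/open_file_cache.conf
    nginx -t && systemctl reload nginx   # validate the config, then reload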

    jsg said: Won't happen because I almost always have to sign NDAs (which is perfectly normal in my field). But it's not at all complicated to understand when one considers what happens once PHP (or Python or even Lua) are out of the game and everything is done by compiled code and using a good basic structure (like worker threads and AIO, depending on the job).
    And even if I would show you code it would be pretty worthless because most of it is not in widely known languages and also has a rather high formal part (e.g. for static analysis, passing to Z3, etc).

    Yeah, totally understand that - nginx has thread pools, and I recently added optional patch support to Centmin Mod 123.09beta01's Nginx to use the Linux 5.1+ kernel's io_uring interface for improved buffered AIO and fewer system calls when using the Nginx aio directive https://lwn.net/Articles/776703/. The Nginx io_uring patch is disabled by default, so users can enable it themselves via an option variable and test it out themselves if they use Linux 5.1+ with their Centmin Mod setups. I am still doing tests myself :)
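
    For context, the stock (unpatched) aio path in Nginx hinges on thread pool support being compiled in; a quick way to check, plus the directives involved (illustrative values, not Centmin Mod's actual config):

    # Check whether the installed Nginx build was compiled with thread pool support
    nginx -V 2>&1 | grep -o -- '--with-threads' || echo 'no thread pool support in this build'

    # Illustrative directives if thread pools are available:
    #   main context:             thread_pool default threads=32 max_queue=65536;
    #   server/location context:  sendfile on;  aio threads=default;  directio 4m;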

    The io_uring author's own benchmarks against libaio and user-space implementations like SPDK: https://lore.kernel.org/linux-block/[email protected]/

    Yeah, I have to sign NDAs for my clients too, so I definitely understand - hence why I said 'remove some of the more privy details', even if it's just the results themselves.

  • jsg Member, Resident Benchmarker

    @eva2000 said:
    Yeah, that comes down to OpenLiteSpeed/Litespeed also having a user-configurable small static file memory-mapped cache, usually for <4KB files, with 20-60MB allocated to that cache by the out-of-the-box defaults. Nginx only has file-mapped caching. So it really depends on the size of the static file you're testing too.

    Not really. OLS has multiple cache models one of which is memory mapped caching mainly for small files. IIRC it's something like small files in a mem mapped cache and larger ones in a file cache (which can be a ram disk).

    Yeah, totally understand that - nginx has thread pools, and I recently added optional patch support to Centmin Mod 123.09beta01's Nginx to use the Linux 5.1+ kernel's io_uring interface for improved buffered AIO and fewer system calls when using the Nginx aio directive https://lwn.net/Articles/776703/. The Nginx io_uring patch is disabled by default, so users can enable it themselves via an option variable and test it out themselves if they use Linux 5.1+ with their Centmin Mod setups. I am still doing tests myself :)

    Frankly, I'm not so sure about nginx's future since it has been bought by F5. In fact that was one of the reasons that confirmed my desire to look at alternatives. OLS is also owned by a company and copylefted (GPL 3) but in the case of OLS there is a long history so we can reasonably assume that their open source efforts are - and stay - honest. AFAIC I wouldn't put any significant work into nginx for some time until we have a clearer picture about F5's attitude and behaviour.

    Regarding my work I can't tell a lot, but I can share one thing that kind of surprised me. I had always worked on the assumption that my clients are concerned about security. Well, my experience suggests that a few really are, but most of them seem to care mostly about liability ("If our software is provably and verifiably secure we can't be liable"). Somewhat sad, but oh well ...

  • eva2000 Veteran
    edited August 2019

    jsg said: Not really. OLS has multiple cache models one of which is memory mapped caching mainly for small files. IIRC it's something like small files in a mem mapped cache and larger ones in a file cache (which can be a ram disk).

    Yeah, true - I just left out the file-mapped cache for OLS/LS to highlight where Nginx and OLS/LS differ rather than where they are similar with regards to caching.

    jsg said: Frankly, I'm not so sure about nginx's future since it has been bought by F5. In fact that was one of the reasons that confirmed my desire to look at alternatives. OLS is also owned by a company and copylefted (GPL 3) but in the case of OLS there is a long history so we can reasonably assume that their open source efforts are - and stay - honest. AFAIC I wouldn't put any significant work into nginx for some time until we have a clearer picture about F5's attitude and behaviour.

    It's a wait-and-see approach - it's true that Nginx's commercial product focus with Nginx Plus has some folks concerned, but it would be suicide for Nginx to mess up the free open source version's development. Nginx mainline is due for HTTP/3 QUIC support, so we will see: https://trac.nginx.org/nginx/roadmap. Meanwhile, Litespeed/OpenLiteSpeed already have QUIC support.

    But I'm not too concerned for Centmin Mod, as I have been working on integration for other web servers too, like Litespeed/OpenLitespeed, Caddy, H2O etc. I evaluate and test them all :)

  • FYI, for anyone running these http2benchmark tests: can you verify on the server side whether litespeed is actually running the h2load tests in HTTP/2 mode or falling back to HTTP/1.1? Details outlined at https://github.com/http2benchmark/http2benchmark/issues/7

    On the server side on CentOS, run the following commands, where ipaddr is your server's IP address:

    yum -y install nghttp2                            # provides the h2load client
    /opt/switch.sh lsws                               # switch the test server over to litespeed (lsws)
    h2load -t1 -c1 -n10 https://ipaddr/1kstatic.html  # tiny run, just to check the negotiated protocol
    

    and verify whether the application protocol tested is http/1.1 or h2 (HTTP/2):

    Application protocol: http/1.1
    

    or

    Application protocol: h2
    
  • In the meantime I also added ECDSA SSL certificate and SSL cipher config testing for Litespeed and Nginx on CentOS 7 (haven't tested on Ubuntu). Example results and test steps at https://github.com/centminmod/http2benchmark/blob/extended-tests/examples/ecdsa-http2benchmark-h2load-low.md
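
    For anyone wanting to reproduce the ECDSA side by hand, generating a self-signed prime256v1 (P-256) test certificate looks roughly like this - the file paths and CN are placeholders, and the benchmark scripts may do this differently:

    # Generate a self-signed ECDSA P-256 key and certificate for benchmarking only
    openssl ecparam -genkey -name prime256v1 -out /etc/pki/tls/private/ecdsa-bench.key
    openssl req -new -x509 -days 365 \
      -key /etc/pki/tls/private/ecdsa-bench.key \
      -out /etc/pki/tls/certs/ecdsa-bench.crt \
      -subj "/CN=benchmark.example.test"    # placeholder common name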

  • eva2000 Veteran
    edited August 2019

    Updated my forked http2benchmark with more ECDSA SSL certificate/cipher tests for h2load HTTP/2 HTTPS comparisons between Litespeed 5.4.1 and Nginx 1.16.1 at https://github.com/centminmod/http2benchmark/blob/extended-tests/examples/ecdsa-http2benchmark-h2load-low-lsws-5.4.1-nginx-1.16.1-run1.md

    • Tested on CentOS 7.6 64-bit KVM VPSes using $20/month Upcloud VPS servers and the latest forked version of http2benchmark, which collects additional h2load metrics/info in the resulting RESULTS.txt and RESULTS.csv logs: TLS protocol tested (TLSv1.2 etc.), SSL cipher tested (ECDHE-ECDSA-AES128-GCM-SHA256 etc.), server temp key used (i.e. ECDH P-256 256 bits) and TTFB min, avg, max and std dev numbers.
    • Also included the h2load header compression metric that the original http2benchmark added, to see how much header compression space saving was achieved (see the h2load note below). Note that Litespeed web server implements full HPACK encoding compression per the RFC 7541 spec, so you will see a higher percentage of header compression savings compared to distro-installed Nginx versions. The reason is that distro package builds of Nginx use Nginx's default partial HPACK encoding - Nginx never implemented the full HPACK encoding spec in its HTTP/2 implementation. However, certain Nginx builds can be patched with full HPACK encoding compression, as outlined on Cloudflare's blog: https://blog.cloudflare.com/hpack-the-silent-killer-feature-of-http-2/. FYI, Centmin Mod Nginx has optional support for patched full HPACK encoding compression, as seen here.
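
    The header compression figure comes straight from h2load's traffic summary, so you can also eyeball it without digging into RESULTS.csv; a quick check might look like this (the exact output wording can vary between nghttp2 versions):

    # Run a small h2load test and pull out the header compression (space savings) figure
    h2load -t1 -c10 -n1000 https://ipaddr/1kstatic.html | grep -i 'space savings'
    # look for a line like: traffic: ... headers (space savings NN.NN%), ... data
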
    Thanked by 1ITLabs
  • nfn Veteran

    @eva2000 can we assume the same results for OLS 1.5?

  • @nfn said:
    @eva2000 can we assume the same results for OLS 1.5?

    Haven't tested yet - the http2benchmark script does sort of support OLS testing too, but I haven't tried it. Probably next on the list, along with the original http2benchmark's updated Apache install part, so eventually it should be possible to test litespeed vs openlitespeed vs nginx vs apache.

    Thanked by 1nfn
  • @eva2000 - not sure what's up with upcloud; e.g. for the amdepyc2.jpg.webp test upcloud seems to do better for nginx, however, in my test it's quite a lot different. One thing to point out is that the two VMs below have the CPU flags exposed; I'm not sure whether that's the case with upcloud.

    One thing I believe is worth noting is the CPU usage difference between nginx and litespeed in the benchmark: nginx uses well over double the CPU, so from a scalability point of view you'll still be able to pull a lot more traffic from the same resources with litespeed.

    [OK] to archive /opt/Benchmark/081719-150245.tgz
    /opt/Benchmark/081719-150245/RESULTS.txt
    #############  Test Environment  #################
    Network traffic: 45.5 Gbits/sec
    Network latency: 0.049 ms
    Client Server - Memory Size: 2048MB
    Client Server - CPU number: 2
    Client Server - CPU Thread: 1
    Test   Server - Memory Size: 2048MB
    Test   Server - CPU number: 2
    Test   Server - CPU Thread: 1
    #############  Benchmark Result  #################
    
    h2load-low - 1kstatic.html
    lsws 5.4.1      finished in      70.96 seconds,   70470.30 req/s,       8.63 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in     119.92 seconds,   41748.70 req/s,       9.32 MB/s,          0 failures,    35.5% header compression
    
    h2load-low-ecc128 - 1kstatic.html
    lsws 5.4.1      finished in       0.07 seconds,   73436.70 req/s,       8.99 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.12 seconds,   40506.30 req/s,       9.04 MB/s,          0 failures,    35.5% header compression
    
    h2load-low-ecc256 - 1kstatic.html
    lsws 5.4.1      finished in       0.07 seconds,   72933.00 req/s,       8.93 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.12 seconds,   42098.70 req/s,       9.40 MB/s,          0 failures,    35.5% header compression
    
    h2load-low - 1kgzip-static.html
    lsws 5.4.1      finished in      68.36 seconds,   73200.50 req/s,       8.96 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in      88.40 seconds,   56607.70 req/s,      13.66 MB/s,          0 failures,    38.5% header compression
    
    h2load-low-ecc128 - 1kgzip-static.html
    lsws 5.4.1      finished in       0.07 seconds,   71064.30 req/s,       8.70 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.09 seconds,   55476.00 req/s,      13.38 MB/s,          0 failures,    38.5% header compression
    
    h2load-low-ecc256 - 1kgzip-static.html
    lsws 5.4.1      finished in       0.07 seconds,   70946.30 req/s,       8.69 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.09 seconds,   55758.70 req/s,      13.45 MB/s,          0 failures,    38.5% header compression
    
    h2load-low - amdepyc2.jpg.webp
    lsws 5.4.1      finished in      96.52 seconds,   51811.00 req/s,     522.87 MB/s,          0 failures,    95.5% header compression
    nginx 1.16.1    finished in     111.06 seconds,   45024.70 req/s,     458.89 MB/s,          0 failures,    38.6% header compression
    
    h2load-low-ecc128 - amdepyc2.jpg.webp
    lsws 5.4.1      finished in       0.10 seconds,   51432.70 req/s,     519.05 MB/s,          0 failures,    95.5% header compression
    nginx 1.16.1    finished in       0.11 seconds,   44622.00 req/s,     454.79 MB/s,          0 failures,    38.6% header compression
    
    h2load-low-ecc256 - amdepyc2.jpg.webp
    lsws 5.4.1      finished in       0.10 seconds,   49036.00 req/s,     494.86 MB/s,          0 failures,    95.5% header compression
    nginx 1.16.1    finished in       0.12 seconds,   43150.30 req/s,     439.79 MB/s,          0 failures,    38.6% header compression
    
    h2load-low - coachblog
    lsws 5.4.1      finished in      84.78 seconds,   58997.70 req/s,     372.61 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in     704.97 seconds,    7112.43 req/s,      54.13 MB/s,          0 failures,    35.3% header compression
    
    h2load-low-ecc128 - coachblog
    lsws 5.4.1      finished in       0.09 seconds,   57768.30 req/s,     364.84 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.72 seconds,    6958.43 req/s,      52.96 MB/s,          0 failures,    35.3% header compression
    
    h2load-low-ecc256 - coachblog
    lsws 5.4.1      finished in       0.09 seconds,   56746.30 req/s,     358.39 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.72 seconds,    6989.43 req/s,      53.19 MB/s,          0 failures,    35.3% header compression
    
    h2load-low - coachbloggzip
    lsws 5.4.1      finished in      87.47 seconds,   57161.70 req/s,     361.01 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in     105.38 seconds,   47505.00 req/s,     305.94 MB/s,          0 failures,    38.5% header compression
    
    h2load-low-ecc128 - coachbloggzip
    lsws 5.4.1      finished in       0.09 seconds,   57369.30 req/s,     362.33 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.10 seconds,   48554.30 req/s,     312.70 MB/s,          0 failures,    38.5% header compression
    
    h2load-low-ecc256 - coachbloggzip
    lsws 5.4.1      finished in       0.09 seconds,   56119.30 req/s,     354.43 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.11 seconds,   47402.30 req/s,     305.28 MB/s,          0 failures,    38.5% header compression
    
    Thanked by 1eva2000
  • @Zerpy Thanks for sharing your numbers :)

    Zerpy said: not sure what's up with upcloud; e.g. for the amdepyc2.jpg.webp test upcloud seems to do better for nginx, however, in my test it's quite a lot different. One thing to point out is that the two VMs below have the CPU flags exposed; I'm not sure whether that's the case with upcloud.

    Yeah, once I perfect the forked http2benchmark script on upcloud (using free credits), I'll try other web host providers to see the differences, i.e. a fatter network pipe to test with.

    Zerpy said: One thing I believe is worth noting is the CPU usage difference between nginx and litespeed in the benchmark: nginx uses well over double the CPU, so from a scalability point of view you'll still be able to pull a lot more traffic from the same resources with litespeed.

    Yeah, if the nginx tests hit PHP-FPM, more CPU usage is incurred compared to the wordpress with LSCache tests. But Nginx with PHP-FPM fastcgi_cache isn't the fastest method for wordpress full-page caching. If you know how to configure nginx/php-fpm to bypass php-fpm for some use cases, i.e. a wordpress static HTML full-page cache plus pre-compression, that paints a different picture too (a rough sketch of the bypass pattern is below).
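
    A rough sketch of that PHP-FPM bypass pattern for a Cache Enabler style static HTML cache - the cache path and details are illustrative and depend on the plugin/version, so don't treat this as the exact Centmin Mod config. The generated file is meant to be pulled in with an include inside the site's server {} block:

    # Write an illustrative include (then add: include /etc/nginx/wp-static-cache.conf;)
    printf '%s\n' \
      'set $cache_file /wp-content/cache/cache-enabler/$host${uri}index.html;' \
      'location / {' \
      '    gzip_static on;                                    # prefer pre-compressed .gz copies' \
      '    try_files $cache_file $uri $uri/ /index.php?$args; # static cache first, PHP-FPM last' \
      '}' \
      > /etc/nginx/wp-static-cache.conf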

    The CPU load average peaks for the coachbloggzip test runs illustrate this for lsws 5.4.1 vs nginx 1.16.1 in the h2load-ecc256 test below, where:

    • lsws peak = 0.37 1 min load average and ends at 0.37 1 min load average
    • nginx peak = 0.43 1 min load average and ends at 0.36 1 min load average

    Stats are too long to post, so they're in a gist at https://gist.github.com/centminmod/71ec6d12e67fcfb437b1f1b57ee685ce

  • @eva2000 said:

    The CPU load average peaks for the coachbloggzip test runs illustrate this for lsws 5.4.1 vs nginx 1.16.1 in the h2load-ecc256 test below, where:

    • lsws peak = 0.37 1 min load average and ends at 0.37 1 min load average
    • nginx peak = 0.43 1 min load average and ends at 0.36 1 min load average

    Stats are too long to post, so they're in a gist at https://gist.github.com/centminmod/71ec6d12e67fcfb437b1f1b57ee685ce

    On the other hand, I don't really feel that "load average" is useful in this case: load average is averaged over the last minute, but every run is literally sub-second, so you'd have to run the test for minutes on end to get the "real" load. Checking the actual CPU usage spikes is better in that case.

    e.g. this:

    h2load-low-ecc256 - coachbloggzip
    lsws 5.4.1      finished in       0.09 seconds,   56119.30 req/s,     354.43 MB/s,          0 failures,    95.2% header compression
    nginx 1.16.1    finished in       0.11 seconds,   47402.30 req/s,     305.28 MB/s,          0 failures,    38.5% header compression
    

    No point in measuring a 1-minute average load when we ran the test in 0.11 seconds.

    Looking at top during the runs, you'll see nginx spiking a lot more in CPU per test than LSWS, and that's really the metric to base it on, when we're only testing 5000 requests in total per run.
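
    For sub-second runs like these, per-process sampling during the run tells you more than load average. A sketch, assuming the sysstat package for pidstat (process names may differ per install):

    # Sample per-process CPU usage once a second during a benchmark run
    yum -y install sysstat                        # provides pidstat
    pidstat -u 1 > /tmp/cpu-during-run.log &      # start sampling in the background
    # ... kick off the h2load run from the client, then stop sampling:
    kill %1
    grep -E 'nginx|litespeed|lsphp|php-fpm' /tmp/cpu-during-run.log   # compare the %CPU column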

    Thanked by 1eva2000