How can I truly test a performance comparison between two web servers?

I have two dedicated web servers. My first server has been struggling under load, so it was due time for a second one, and I purchased a more powerful machine.

Here are the specs:

Server 1:

Processors:

Dual L5520 Xeon (quad core, 8 cores total, 16 threads)

2.26 GHz, 2.48 GHz max turbo

8 MB SmartCache

5.86 GT/s QPI Bus Speed

RAM:

24GB RAM DDR3

Storage:

1TB 5400 RPM Hard Drive

Server 2:

Processors:

Dual E5-2660 v2 (10-core, 20 cores total, 40 threads)

2.20 GHz, 3.00 GHz max turbo

25 MB SmartCache
8 GT/s QPI

RAM:

32GB RAM DDR3

Storage:

240GB SSD

Moreover, as can be seen from the PassMark CPU Mark results, the dual E5-2660 v2 (20 cores) scores nearly 3 times higher than the dual L5520. That roughly makes sense: 20 cores is 2.5 times 8 cores, and the individual cores are newer and faster.

PassMark - CPU Mark

Multiple CPU Systems - Updated 9th of August 2017

SOURCE: http://cpubenchmark.net/multi_cpu.html

I tested both servers with the same website using a couple of comparison sites, including Dareboost and another one I found on Google. I also used Pingdom, although it was unreliable: results changed significantly with every repeat test.

I tested the exact same website, cloned from one server onto the other, so I could view the identical site on both servers (on server 2 I hosted it as a subdomain).

However, page speed showed only about a 3% to 5% improvement on the new server, even though the new server is much more powerful, as the specs show. So page loading speed alone cannot be an accurate measure of server power.

For example, it does not take into account how a server would handle heavy load.

Here is the output of the page load speed comparison for the front page of a basic WordPress site with 10 posts on the home page. On the left is the new server. As you can see, the difference is negligible.


For reference, I run a number of WordPress websites, one of which gets around 500,000 views per month. Cumulatively it would probably be millions; I have not counted. I have maybe 20 of my websites running on this server.

I run another website which is not WordPress but gets more views, with users regularly uploading and downloading files throughout the day, every day. It is also on this server.

My first server handles fine most of the time, but it struggles during peak load. Lately, SQL usage has been very high (hundreds of users simultaneously uploading/downloading files), and I have had to restart the SQL database and Apache just to get the server to respond. Obviously, I have hit the limit of server #1, so I took the plunge on a second dedicated server.


So what I would really like to know is how to really measure how well my new server will handle traffic, and how to test to see how it performs compared to my first server.

What are some ways to test how much better the new server is compared to the old server?


Comments

    • You need to figure out what your testing criteria are and what you're aiming to improve first: latency (page load speed) vs. throughput/scalability.
    • Then break down the factors that contribute to and make up each criterion. For example, page load speed is only partly server backend; the frontend web framework, layout and design matter too.
    • Switching to a more powerful server isn't a magic bullet and won't help if you do not measure, monitor and address both latency and scalability factors.
    • For the page load speed side, webpagetest.org and gtmetrix.com are what I use. Webpagetest.org is highly recommended for evaluating your progress in page speed optimisations i.e. https://community.centminmod.com/threads/community-centminmod-com-webpagetest-org-log.11982/ :)

    lnx1001 said: So what I would really like to know is how to really measure how well my new server will be able to handle traffic and how to test to see how it performs compared to my first server.

    Sounds like throughput/scalability is what you're after, not page load speed, so you need to load test your servers with tools like ApacheBench, siege, wrk and locust.io for non-HTTP/2, and nghttp2's h2load for HTTP/2-based HTTPS testing. There are also third-party online load testing services with limited free trials, but serious load testing generally costs money, e.g. loader.io and loadimpact.com.
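    The principle behind those tools can be sketched in a few lines. Below is a minimal, self-contained Python sketch (not one of the tools named above): it fires requests at a fixed concurrency and reports throughput and latency percentiles. It spins up a throwaway local HTTP server so it is safe to run anywhere; to test a real box you would point `url` at a staging copy of your site, never at production.

```python
import http.server
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):  # silence per-request logging
        pass

# Throwaway local target; replace `url` with your own staging server.
srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{srv.server_address[1]}/"

def timed_get(_):
    t0 = time.perf_counter()
    urlopen(url).read()
    return time.perf_counter() - t0

N, CONCURRENCY = 200, 20
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_get, range(N)))
total = time.perf_counter() - start

p50 = statistics.median(latencies)
p95 = statistics.quantiles(latencies, n=20)[-1]
print(f"{N / total:.0f} req/s, p50 {p50 * 1000:.1f} ms, p95 {p95 * 1000:.1f} ms")
srv.shutdown()
```

    The dedicated tools above do the same job far better (ramping, keep-alive, HTTP/2, distributed workers); this just shows what the numbers they report mean.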

    There was blitz.io but they're closing down, so I am looking at self-hosted distributed load testing with http://locust.io/ and will need a few dedicated servers for that. Docs: http://docs.locust.io/

    Working on a locust.io Docker image: https://github.com/centminmod/docker-ubuntu-locust :)

    So get reading :)

  • lnx1001 Member
    edited August 2017

    On my two most important websites, I have done extensive work on page speed optimization. However, I can't test them yet because I can't easily clone them to the new server.

    However, page load speed is important. I am finding that WordPress logins are very slow on my main site. It could also be a site issue. But I would like a server that can handle a heavier WordPress install; that's partly why I put an SSD in the new server.

    I tried some free load testing sites in the past, but the data returned was useless to me; I don't know how to interpret it. I would like to know where my bottlenecks are (CPU? RAM? Bandwidth? I/O? SQL? PHP? etc.), but I have asked the question on the internet many times over the years and no one has given me an answer. LowEndTalk seems to be a very good forum, though; I regret I did not come here long ago.

    Your answer is a very good starting point. I hope some other people can give me some more advice.

    By the way, locust.io just finished installing, and I am running it. I am also installing it on the other server. However, I am not sure how to interpret this data.

    Edit: I could not install it on server 1 (CentOS 6, Python 2.6.6). I can't upgrade Python without risking breaking yum, and that's not worth it on a production server. So I can't use locust.io on server 1, but I can on server 2 (CentOS 7).

  • IMHO, solving a web-performance problem should start with careful analysis of the current situation. Without knowing where the bottleneck is, it does not make much sense to buy a more powerful server. A 3x faster CPU does not make your web server handle 3x more connections in 1/3 the time...

    I do not know WP much, but a server like your "1", with some tweaking and optimisation, should handle millions of views per day easily.

  • WSCallum Member
    edited August 2017

    As Jarry said, a server which is 3x more powerful isn't going to make the website 3x faster. It does, however, mean you can handle more concurrent connections, IF you have the correct software configuration in place. Ultimately, performance doesn't just come down to your hardware; your software configuration plays a huge role as well.

    Let's say on your old server you notice a slowdown (your website now takes around 2 seconds to load) at, say, 500 concurrent connections, purely as an example. Your new server could support more concurrent connections, and you might not see the same slowdown until around 1500. Again, this is all an example and not accurate; it comes back to your software configuration as well. There are other factors too, such as your drive configuration (SSDs are obviously faster than HDDs), port speeds, location of the server relative to the visitor/testing server, etc.
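    This example maps onto Little's law (concurrency = throughput x time per request), which is handy for turning "slows down at N connections" into a capacity number. A sketch using the hypothetical figures from the example above:

```python
# Little's law: L (requests in flight) = lambda (req/s) * W (seconds per request).
# All numbers here are the hypothetical ones from the example above.
service_time_s = 2.0    # page takes ~2 s under load
old_concurrency = 500   # old server slows down here
new_concurrency = 1500  # new server slows down here (assumed)

old_throughput = old_concurrency / service_time_s
new_throughput = new_concurrency / service_time_s
print(f"old: ~{old_throughput:.0f} req/s, new: ~{new_throughput:.0f} req/s")
# → old: ~250 req/s, new: ~750 req/s
```

    In other words, comparing the two servers means finding, for each, the concurrency level at which response time starts to climb; that, not a single page load, is the capacity difference.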

  • vovler Member
    edited August 2017

    pffff.. that won't even be able to handle 10 req/day
    Get this one boy and maybe you'll be able to get 100 req/day

    On a serious note, getting better hardware usually doesn't translate into big improvements in load times, especially if you are the only one visiting your website. It will, however, increase the number of req/s that your server can handle.

    Hardware:
    Page load time depends on disk speed (if you are not serving a cached page from RAM), CPU speed, network speed, distance from the server, the network path from the user to your server, and whether that path is congested.

    Software:
    Since you are using Apache, I suggest you try nginx on a test bench and see how it performs. One quick way to do this is to install Centmin Mod beta with PHP 7.0, OPcache and memcached.

  • eva2000 said: There was blitz.io but they're closing down so I am looking at self-hosted distributed load testing with http://locust.io/ so need a few dedicated servers for such. Docs http://docs.locust.io/

    Oooooooh. Now that is nice. Cheers for that!

  • Break it down one step at a time. For instance, the WP login page calls some PHP function on your server. Test that module in isolation: bypass nginx/apache by writing some code that directly calls the PHP function with the right parameters (to profile successful auth attempts).

    Use a tracing framework to find out where it takes its time. I don't know much about PHP, but a Google search leads to https://xdebug.org/docs/profiler and http://www.semanticdesigns.com/Products/Profilers/PHPProfiler.html

    If PHP tracing is cumbersome, try perf instead: https://blog.heapanalytics.com/basic-performance-analysis-saved-us-millions/
    Using perf effectively needs some in-depth knowledge though.

    Trace through the code (using ltrace/strace if needed) to find out what SQL calls are sent to the database. Profile those queries: https://www.digitalocean.com/community/tutorials/how-to-use-mysql-query-profiling and https://www.sitepoint.com/using-explain-to-write-better-mysql-queries/
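    The linked guides are MySQL-specific, but the workflow (capture the query, look at its plan, add the index the plan is missing) is the same everywhere. A sketch of that loop using Python's stdlib sqlite3 as a stand-in; the table and index names are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO users (name) VALUES (?)",
                [(f"user{i}",) for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN is sqlite's rough analogue of MySQL's EXPLAIN.
    return " ".join(str(row) for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM users WHERE name = 'user9999'"
print(plan(query))   # before: a full table scan

con.execute("CREATE INDEX idx_users_name ON users(name)")
print(plan(query))   # after: the plan uses idx_users_name
```

    The same before/after check with MySQL's EXPLAIN on the queries your trace captured usually finds the worst offenders quickly.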

    Profiling is best done under stress-testing: either with ApacheBench at high concurrency, or by calling your login PHP code directly in a tight loop.
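    In Python terms (as a rough stand-in for the PHP profilers linked above), "call the code directly in a tight loop under a profiler" looks like this; `fake_login` is a hypothetical placeholder for the real auth path:

```python
import cProfile
import hashlib
import io
import pstats

def fake_login(user, password):
    # Hypothetical stand-in for the real login path: a deliberately slow hash.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 20_000)
    return digest is not None

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):                 # the "tight loop"
    fake_login("alice", "hunter2")
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())               # top entries show where the time went
```

    The xdebug profiler gives you the equivalent cumulative-time breakdown for PHP, which is what tells you whether the login is burning time in hashing, SQL, or plugin code.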

    I also use tracing tools like uftrace and heaptrack (heap/memory profiling), but they may not make much sense in the context of PHP.

  • lnx1001 Member
    edited August 2017

    I do not know WP much, but server like your "1" with some tweaking and optimisation should handle millions of views per day easily.

    Yeah, sure, for WP millions, but NOT for millions of people downloading 1 GB files! 8O That website is the main reason for needing the second server. In fact, it may be smart to put that site alone on the new server, keep all the WP and other sites on server 1, and maximize server 1's optimization: fixing WP bugs, removing bad plugins, installing nginx, etc.

    However, for my site with heavy simultaneous downloads, no amount of optimization will help, because the problem is not optimization but raw server power. With hundreds or even thousands of people downloading simultaneously, downloads have reached server 1's limit and slow everything down.

    Moreover, it is also a bandwidth issue, as they are all trying to download through a 100 Mbps pipe. And $100/mo for gigabit is not worth the cost when I can get a whole new server for the same price. Which is what I did.
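    The back-of-envelope arithmetic supports treating this as a bandwidth problem. A sketch (the downloader count here is a hypothetical figure, not one from this thread):

```python
# What a shared 100 Mbps port means per downloader.
port_mbps = 100
concurrent_downloads = 200          # hypothetical
file_bytes = 1 * 10**9              # a 1 GB file

port_bytes_per_s = port_mbps / 8 * 10**6        # 12.5 MB/s for the whole port
per_user = port_bytes_per_s / concurrent_downloads
hours_per_file = file_bytes / per_user / 3600
print(f"{per_user / 1000:.1f} kB/s each, ~{hours_per_file:.1f} h per 1 GB file")
# → 62.5 kB/s each, ~4.4 h per 1 GB file
```

    At those per-user rates, no amount of CPU or SSD helps; only more port speed (or fewer concurrent transfers per box) does.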

    Yes, $100/mo is very high for gigabit. There are low-end providers who include 33 TB per month on gigabit by overselling. However, these two servers are in a very good datacenter with a direct connection into one of the biggest hubs in the U.S. This is important for availability, DNS reliability, latency, and a number of other reasons.

    And the support and quality are significantly better than budget servers. I know because I have rented about 100 budget servers over the years for various projects. Customer service sucks, the general quality of the servers is lower, there are fewer features (no IPMI, for example), they will terminate your server immediately if you miss a day on payment (not really a problem, but a huge annoyance that could turn bad), I can't use my own hardware, and so on. In fact, after all the costs involved, the budget server ends up costing a lot more for the same hardware, and still with crappy support.

    So even though I get less bandwidth for the same price at this datacenter, it is still worth it for everything else. Eventually, when the cost of more servers outweighs the cost of a gigabit upgrade, I will pay the extra $100/mo for a full gigabit connection (unlimited).

    However, you are likely right that server 1 would be fine handling many WP installations with millions of views per month. Just not when combined with a site doing heavy simultaneous downloads, and not for many millions of visitors per month, which I need to prepare for in the long term.

  • lnx1001 Member
    edited August 2017

    @WSCallum said:
    As Jarry said, a server which is 3x more powerful isn't going to make the website 3x faster. It does, however, mean you can handle more concurrent connections, IF you have the correct software configuration in place. Ultimately, performance doesn't just come down to your hardware; your software configuration plays a huge role as well.

    Let's say on your old server you notice a slowdown (your website now takes around 2 seconds to load) at, say, 500 concurrent connections, purely as an example. Your new server could support more concurrent connections, and you might not see the same slowdown until around 1500. Again, this is all an example and not accurate; it comes back to your software configuration as well. There are other factors too, such as your drive configuration (SSDs are obviously faster than HDDs), port speeds, location of the server relative to the visitor/testing server, etc.

    This is a very helpful response, thank you. It helps me understand some things better. I am looking forward to more replies like this. Concurrent connections definitely are an issue that the more powerful server can help with.

  • lnx1001 said: However, for my site which has heavy simultaneous downloads, there is no amount of optimization possible because the problem is not optimization but raw server power. Downloads when you have hundreds or even thousands of people downloading simultaneously have reached the limit of server 1 slowing down everything.

    rate limiting ;)

  • There is no way around properly analyzing your situation, needs, hardware and software.
    Plus, there is the holy rule of server performance: it must be balanced, i.e. optimizing one point doesn't bring you much. The whole system must be balanced. This, btw, often means that the solution is to make whatever part is the bottleneck less lousy.

    As for your "many clients downloading large files": that quite probably simply boils down to bandwidth. The more you have, the better and more performant it will run, even on a small system; a simple 2 or 4 core server can easily saturate 4 Gb pipes. With good software and some memory it will even easily saturate a 10 Gb pipe.

    Which brings us to the first recommendation: switch the other way round, i.e. put all the websites on the new server and use the older one as the download box. And add as much bandwidth as you can and want to afford.

    For the web servers it's much more complicated, as there are many factors to look at. But if you want a simplified, cooked-down version: add memory. Often more memory is more important and helpful than more cores.

  • Has anyone tried this before? I have thought about spinning up a small test just to see if it works.

    https://www.paessler.com/tools/webstress

    Probably best used on dedis. :^)

  • Just a thought about the "many users downloading 1GB files" -- are these generally the same 1GB files? Would a CDN help you out for that bit?

  • @seanho said:
    Just a thought about the "many users downloading 1GB files" -- are these generally the same 1GB files? Would a CDN help you out for that bit?

    No, they are always different and new files, anywhere from 10 MB to 10 GB.

  • lnx1001 Member
    edited August 2017

    @Mark_O_Polo said:
    Has anyone tried this before. I have thought about spinning up a small test just to see if it works.

    https://www.paessler.com/tools/webstress

    Probably best used on dedis. :^)

    Looks like a nice free tool; the only problem is it doesn't look like it has any way to test my server.

    What I need to know is where the bottlenecks are. Obviously, if you simulate a million users it will slow down the server; what I need to know is what is causing the slowdown. Is it CPU? RAM? Network? Etc.

    As a result, without some other software I have no use for this tool at present, since all it does is add load to the server; it does not diagnose it. The last thing I want is to only add load to my server.

  • lnx1001 said: what I need to know is what is causing the slow down, is it CPU? RAM? network? etc

    ... Then run live monitoring software & monitor it whilst stress-testing?

    You're not gonna find any software that'll magically tell you "Oops, you need more RAM" :/
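    Concretely, standard Linux tools like top, htop, vmstat 1, iostat -x 1 and iftop are the usual way to watch CPU, memory, disk and network while a stress test runs. The principle, sampled straight from /proc, is just the following sketch (Linux-only; MemAvailable needs kernel 3.14+):

```python
import time

def cpu_times():
    # First line of /proc/stat holds aggregate jiffies per CPU state.
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]    # idle + iowait
    return idle, sum(fields)

idle0, total0 = cpu_times()
time.sleep(1)                       # sample over one second
idle1, total1 = cpu_times()
busy_pct = 100 * (1 - (idle1 - idle0) / max(1, total1 - total0))

with open("/proc/meminfo") as f:
    mem_kb = {line.split(":")[0]: int(line.split()[1]) for line in f}

print(f"CPU busy {busy_pct:.0f}%, "
      f"MemAvailable {mem_kb['MemAvailable'] // 1024} MiB "
      f"of {mem_kb['MemTotal'] // 1024} MiB")
```

    Run something like this (or just vmstat) in one terminal while a load test hammers the box from another; whichever of CPU, memory, disk wait or network saturates first is the bottleneck.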

  • Waterfall both servers and be done with it.

  • @AuroraZ said:

    Waterfall both servers and be done with it.

    If I put the servers into a waterfall they are likely to short out and then how am I supposed to run my websites? Should I use 100% alcohol for the waterfall instead of water in order to prevent shorts?

  • lnx1001 Member
    edited August 2017

    @Aidan said:

    lnx1001 said: what I need to know is what is causing the slow down, is it CPU? RAM? network? etc

    run live monitoring software & monitor it whilst stress-testing

    Did you have any software in mind for live server monitoring that would tell me which resources are bottlenecked?

  • @lnx1001 said:
    > @AuroraZ said:

    Waterfall both servers and be done with it.

    If I put the servers into a waterfall they are likely to short out and then how am I supposed to run my websites? Should I use 100% alcohol for the waterfall instead of water in order to prevent shorts?

    If you have no clue what a waterfall for a server is, then you may have some problems. Any test you run on it other than this is not going to give you the results you want.

  • Right, I am not interested in a complicated chart, which is just a waste of brain energy; all I need to know is what the bottleneck is, so I can upgrade that easily.

  • @lnx1001 said:
    Right, I am not interested in a complicated chart, which is just a waste of brain energy; all I need to know is what the bottleneck is, so I can upgrade that easily.

    It would help a lot if you started by describing your situation: OS, web server software and its config, WP modules installed, DB, PHP, caching, accelerator, etc.

    btw, for millions of users downloading 1 GB files you do not need a web server, but some file-sharing service...

  • I don't know of the cure-all you seek, one which would state "add 12 GB RAM, use MariaDB, and a minimum bandwidth of 4 Gb/s".

    Unfortunately, I think the solution is going to require some amount of brain energy. You might consider hiring someone with web performance/server optimization skills; I'm pretty sure they would use something akin to what we have suggested. Ramp and monitor. Something's gonna give first, and then you tackle whether it's hardware- or code-related.

    If you are hesitant to load your production server, rent another one for performance testing.

    Last, if you find an "all in one" solution, please share details. Thanks.

  • @Mark_O_Polo said:
    I don't know of the cure-all you seek, one which would state "add 12 GB RAM, use MariaDB, and a minimum bandwidth of 4 Gb/s".

    Unfortunately, I think the solution is going to require some amount of brain energy. You might consider hiring someone with web performance/server optimization skills; I'm pretty sure they would use something akin to what we have suggested. Ramp and monitor. Something's gonna give first, and then you tackle whether it's hardware- or code-related.

    If you are hesitant to load your production server, rent another one for performance testing.

    Last, if you find an "all in one" solution, please share details. Thanks.

    He will not find the all-in-one solution he seeks. Anything worth doing requires some knowledge and some effort.

  • Iam Member

    lnx1001 said: My first server handles fine most of the time, but it struggles during peak load. Lately, SQL usage has been very high (hundreds of users simultaneously uploading/downloading files), and I have had to restart the SQL database and Apache just to get the server to respond. Obviously, I have hit the limit of server #1, so I took the plunge on a second dedicated server.

    I have a site with around 30K users online at peak time and millions of pageviews/day (per Google Analytics). For the OS I use CentOS 7.

    First I changed sysctl.conf to resolve the "possible SYN flooding..." kernel warnings:

    # enable syncookies
    net.ipv4.tcp_syncookies=1
    
    # default=5
    net.ipv4.tcp_syn_retries = 3
    
    # default=5
    net.ipv4.tcp_synack_retries = 3
    
    # default=1024
    net.ipv4.tcp_max_syn_backlog = 65536
    
    # default=124928
    net.core.wmem_max = 8388608
    
    # default=131071
    net.core.rmem_max = 8388608
    
    # default = 128
    net.core.somaxconn = 512
    
    # default = 20480
    net.core.optmem_max = 81920

    Increase the MariaDB file limits:

    mkdir -p /etc/systemd/system/mariadb.service.d/
    
    vi /etc/systemd/system/mariadb.service.d/limits.conf
    
    # paste and save
    [Service]
    LimitNOFILE=65535
    
    # reload the daemon
    systemctl daemon-reload
    
    # restart it
    service mariadb restart

    Create a custom.conf to override the Apache default config, lowering the timeout value and disabling keepalive:

    vi /etc/httpd/conf.d/custom.conf
    
    # paste and save
    Timeout 5
    KeepAlive Off
    
    # reload
    service httpd reload

    My my.cnf values:

    [mysqld]
    secure-file-priv = /var/tmp
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0
    # Settings user and group are ignored when systemd is used.
    # If you need to run mysqld under a different user or group,
    # customize your systemd unit file for mariadb according to the
    # instructions in http://fedoraproject.org/wiki/Systemd
    
    skip-name-resolve
    interactive_timeout            = 120
    wait_timeout                   = 120
    
    # MyISAM #
    key-buffer-size                = 128M
    myisam-recover                 = FORCE,BACKUP
    
    # SAFETY #
    max-allowed-packet             = 128M
    max-connect-errors             = 1000000
    
    # CACHES AND LIMITS #
    tmp-table-size                 = 256M
    max-heap-table-size            = 256M
    query-cache-type               = 0
    query-cache-size               = 0
    max-connections                = 10240
    thread-cache-size              = 50
    open-files-limit               = 65535
    table-definition-cache         = 2048
    table-open-cache               = 4096
    
    # INNODB #
    innodb-flush-method            = O_DIRECT
    innodb-log-files-in-group      = 2
    innodb-log-file-size           = 512M
    innodb-flush-log-at-trx-commit = 1
    innodb-file-per-table          = 1
    innodb-buffer-pool-size        = 2048M
    
    [mysqld_safe]
    log-error=/var/log/mariadb/mariadb.log
    pid-file=/var/run/mariadb/mariadb.pid
    
    #
    # include all files from the config directory
    #
    !includedir /etc/my.cnf.d
    
  • vfuse Member, Host Rep

    Tideways is a nice app for monitoring your PHP performance (much cheaper than New Relic). I'm pretty sure you'll find the bottleneck, since they have very detailed reporting.

  • WSS Member

    Well someone got unbanned. OK. How about a little transparency?

  • I tried some of these, but I am no closer to knowing even one helpful thing about any of my servers.
