Stressing a Wordpress installation on Scaleway C1

I'm running PHP-FPM + Nginx + Oracle MySQL on a Scaleway C1, along with murmurd (Mumble) with 10 slots, a Polipo proxy, Redmine and Redis on the same machine. The average memory consumption is 625MB for two WordPress installations, one ownCloud instance, one custom PHP system, and nginx acting as a reverse proxy to Redmine (unicorn) and the previously mentioned software.
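
Just to give an idea of the layout (not my exact config; the hostnames and socket paths below are made up), the nginx front end is wired roughly like this:

    # Rough layout sketch only: hostnames and socket paths are placeholders.

    # Redmine running under unicorn, reached over a local socket
    upstream redmine {
        server unix:/var/run/unicorn/redmine.sock fail_timeout=0;
    }

    server {
        listen 80;
        server_name redmine.example.com;   # placeholder name

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://redmine;
        }
    }

    # One of the WordPress sites, handed off to PHP-FPM
    server {
        listen 80;
        server_name blog.example.com;      # placeholder name
        root /var/www/blog;
        index index.php;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;   # socket path is an assumption
        }
    }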

Today I decided to stress one of my WordPress installations using the loader.io platform. The first test type takes the total number of visitors and divides it by the test's length. I tried from 1k to 10k in 1k steps, and reached 10k users in 1 minute with a 1.4% error rate on the 10k-users-per-minute test.

In the users-per-second test, the platform increases the number of visitors by the amount I specify every second. Using a 15s time span, I can ramp up by 150 new hits each second (150, 300, and so on) with an error rate of 0.3%.

Finally, the maintain-client-load test. This one is a bit different, as it tries to be more realistic by simulating users who keep making requests after receiving the first response (as if they kept navigating your site). The first two tests just simulate hits, which is better suited to testing APIs. In this test I can sustain up to 250 simultaneous users, each making a new hit every second, with an error rate of 1.4%, which is pretty impressive.

These metrics are very interdependent. I can't, for example, ramp up to and then maintain a peak of 250 users navigating the site without at least a 50% error rate, since the ramp-up limit is 150 new users per second.

What do you think, guys?

P.S.: It's 03:51 here in Brazil. Please correct my typos, and forgive me if my text is confusing.

Poll: Are these results good? (19 votes)
  1. Yes: 78.95%
  2. No: 21.05%

Comments

  • sin Member
    edited October 2015

    On a single 3.6GHz core Vultr instance, using the maintain-client-load loader.io test, I got 418,954 successful responses in 1 minute with an average response time of 15ms. That's using Nginx with Super Cache set to the mod_rewrite option (using wordpress.org's default nginx rules). Load sits between 0.20 and 0.30. Use caching and preload static pages so you can let Nginx do all the work instead of PHP (rough sketch of those rules at the end of this post).

    and here's a $5.97 2core/2gb Leaseweb VPS on another Wordpress site:
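
    In case anyone wants to replicate it, here's roughly what those Super Cache rules look like in nginx. This is a sketch based on the commonly published example; the cache path and cookie names may need adjusting for your install:

        set $cache_uri $request_uri;

        # Never serve cached pages for POST requests or requests with a query string
        if ($request_method = POST) {
            set $cache_uri 'null cache';
        }
        if ($query_string != "") {
            set $cache_uri 'null cache';
        }

        # Skip the cache for logged-in users and recent commenters
        if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
            set $cache_uri 'null cache';
        }

        location / {
            # Serve the pre-generated static file if it exists, otherwise fall back to WordPress
            try_files /wp-content/cache/supercache/$http_host/$cache_uri/index.html
                      $uri $uri/ /index.php?$args;
        }

    A cache hit never touches PHP-FPM at all, which is why the load stays that low.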

  • Good results, not really much to adjust here apart from caching and better CPU. 250 visitors a second is a lot of traffic!

  • afterSt0rm Member
    edited October 2015

    @sin said:
    On a single 3.6GHz core Vultr instance, using the maintain-client-load loader.io test, I got 418,954 successful responses in 1 minute with an average response time of 15ms. That's using Nginx with Super Cache set to the mod_rewrite option (using wordpress.org's default nginx rules). Load sits between 0.20 and 0.30. Use caching and preload static pages so you can let Nginx do all the work instead of PHP.

    and here's a $5.97 2core/2gb Leaseweb VPS on another Wordpress site:

    I do have caching here: database cache in APC, page cache on disk, static content on a CDN, object cache in Redis, and FastCGI caching, all with cache preloading. To tell the truth, neither Nginx nor PHP-FPM is particularly well tuned.

    Vultr and Leaseweb cost almost double the price of Scaleway and have shared resources. If I do load balancing with two Scaleway servers (sketch at the end of this post), I can easily double the capacity to 500 visitors a second, considering that I'm also running other services on these machines. That means about 21.6M visitors a day and 648M a month for a single C1 machine, and at least 43.2M daily and 1.296B monthly for two.

    Considering that loader.io's servers are located in the US, I don't think 384ms is a bad time for a server on the other side of the Atlantic. The vast majority of hits come in over the IPv6 address, which is an HE tunnel, so that adds even more latency.

    I think I'll still go with Scaleway, even if it has lower raw power. It's actually cheap to build redundancy, and I prefer to scale horizontally with more small servers rather than one single big server.
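
    The load-balancing part is simple on the nginx side. A minimal sketch, assuming two C1s reachable over private IPs (the addresses and names are placeholders):

        # Front end spreading requests across two C1 backends (round-robin by default)
        upstream wordpress_pool {
            server 10.1.0.11:80;   # C1 #1, placeholder private IP
            server 10.1.0.12:80;   # C1 #2, placeholder private IP
        }

        server {
            listen 80;
            server_name blog.example.com;   # placeholder name

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_pass http://wordpress_pool;
            }
        }

    Both machines would run the same PHP-FPM + cache stack, so either one can answer any request.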

  • rm_ IPv6 Advocate, Veteran

    Keep in mind that on Scaleway you should be able to launch an instance without a public IP, and the team confirmed earlier that it'll be cheaper, at only 2 EUR/mo (perfect for a backend database server).

  • @rm_ said:
    Keep in mind that on Scaleway you should be able to launch an instance without a public IP, and the team confirmed earlier that it'll be cheaper, at only 2 EUR/mo (perfect for a backend database server).

    Yeah, I'd forgotten about that detail.

  • sin Member

    EkaatyLinux said: I think I'll still go with Scaleway, even if it has lower raw power. It's actually cheap to build redundancy, and I prefer to scale horizontally with more small servers rather than one single big server.

    For sure, I think your results with Scaleway are great and I can't wait to try them out myself... dedicated resources for 2.99 euros is awesome. I really did like your benchmarks, and it was cool to see others posting loader.io tests :).
