How to setup a VPS as web server

M66B Veteran
edited March 2012 in Tutorials

For my own records, and maybe for your convenience, I wrote down the steps to set up a Debian/Ubuntu VPS as a web server. The setup is optimized for low memory usage (256-512 MB): nginx instead of Apache, no DNS or e-mail server (only outgoing mail), and some MySQL, PHP and nginx tuning.

This setup guide has been tested on OpenVZ, Xen and KVM VPSes, with Debian 6.0 (Squeeze), Ubuntu 10.04 (Lucid Lynx) and Ubuntu 11.10 (Oneiric Ocelot). My favorite combination so far is KVM and Debian 6.0 with the Dotdeb repository (fast virtualization, a stable Linux and the latest server software).

See here for the guide: http://blog.bokhorst.biz/6507/computers-and-internet/how-to-setup-a-vps-as-web-server/
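
A minimal sketch of the base install the guide builds on, assuming Debian 6.0 with the Dotdeb repository already added (the guide itself covers the details and the tuning):

    apt-get update
    apt-get install nginx php5-fpm php5-mysql php5-apc mysql-server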

Thanked by 2: djvdorp, inverse

Comments

  • Nice share :) It would be cool to get this packaged up as an OpenVZ template, for quick and easy deployment.

  • sleddog Member
    edited March 2012

    For PHP 5.3, try the new 'ondemand' process manager, e.g.

    pm = ondemand
    pm.max_children = 5
    pm.process_idle_timeout = 3s

    "apt-get install php-apc" will install an older version of APC. Instead, use "apt-get install php5-apc".

    If you raise PHP's upload_max_filesize don't forget to also configure nginx, e.g.:

    client_max_body_size 4M;
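
    The matching PHP side might look like this (the values are only an example; keep post_max_size at least as large as upload_max_filesize):

    ; php.ini
    upload_max_filesize = 4M
    post_max_size = 4M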

    Thanked by 2: M66B, yomero
  • Derek Member

    @sleddog said: "apt-get install php-apc" will install an older version of APC. Instead, use "apt-get install php5-apc".

    Or you can pecl install apc, then place "extension=apc.so" into your php.ini along with your settings.

    See http://derekharget.com/2012/02/80/centos-install-apc-3-1-9-the-easy-way
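
    On Debian/Ubuntu the pecl route might look roughly like this (the package names and the conf.d path are assumptions, check your distribution):

    apt-get install php-pear php5-dev build-essential
    pecl install apc
    echo "extension=apc.so" > /etc/php5/conf.d/apc.ini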

  • nice share man ;)

  • Maounique Host Rep, Veteran

    How many concurrent users could that handle? A normal phpBB install, with everyone browsing and mostly reading?
    I am interested because I set up similar boxes with apache2, and I would like to know how big the difference is when nginx is used. Just your guess, no need to actually test it.
    Please :)
    M

  • sleddog Member
    edited March 2012

    "Concurrent users" is kind of a vague concept. Nginx+PHP-fpm will handle pm.max_children simultaneous PHP connections. e.g., pm.max_children=10 might support 50 "concurrent users" if users spend on average 5 times as much time viewing content as they do loading pages :)

  • Maounique Host Rep, Veteran
    edited March 2012

    That is not what I meant. Of course I can set up whatever I like; the thing is, it will crash sooner or later, or become unresponsive and take extremely long to load.
    With 64 MB plus 64 MB burst on OpenVZ (no swap), running apache2, PHP, MySQL and phpBB3, I saw about 40 users before it started throwing fits (it was a low-traffic site for a clan, but they had a drama and the site crashed).
    I can set very low spares and a high max; that will run happily for a while, but it will fail badly when the load comes.
    Of course we can simulate things, but nothing beats the real thing, and first-hand experience is invaluable.
    I would like to know if the difference is worth switching to an app with dubious licensing and an uncertain future (in my view, of course). I am an Apache and lighttpd fan, but I don't think I can ignore nginx for much longer.
    M

  • Well, I'd look at how much memory a phpBB process uses, and then calculate a safe maximum based on the overall memory available. When pm.max_children is reached, additional requests are queued until a PHP process is available to handle them. So, during peak times users might experience a delay or, at worst, an nginx timeout (connecting to PHP). No reason to crash the box.
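
    A rough way to check that (assuming the pool processes are named php5-fpm; this averages over the master process too):

    # average resident memory per php5-fpm process, in MB
    ps -C php5-fpm -o rss= | awk '{ sum += $1; n++ } END { if (n) print sum / n / 1024, "MB" }'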

  • Maounique Host Rep, Veteran
    edited March 2012

    Well, by "crash" I mean the user gets a very long timeout or an error page, not that the box is actually throwing a segfault or a memory allocation error. When the site is unusable for long enough, people give up.
    M

  • @sleddog said: For PHP 5.3, try the new 'ondemand' process manager, e.g.

    Won't that make it slower, since a new process has to be started if no one has been using the site for a while?

    Also, do you really need that many PHP-FPM workers? I did a benchmark a while ago that I did not post about, but I did not see any difference on a single-core VPS with anywhere from 1, 2, 4, 5 or 6 workers.

  • sleddog Member
    edited March 2012

    Ah, that's a different definition of "crash" :) With only 64MB of memory there isn't a lot of room when you're running MySQL and a heavy PHP app. I'd wager that nginx+php-fpm would support more users than apache 2.2 + mod_php, though given enough drama you're gonna have the same issue. :)

  • Maounique Host Rep, Veteran

    Of course, but I am interested in how much that "more" would be. If it's 45 instead of 40, I won't look into it; if it's 50, though, it might be worth it for some very specific cases.
    M

  • @dancom96 said: Won't that make it slower, since a new process has to be started if no one has been using the site for a while?

    Spinning up a new process takes CPU. If you're running on a 386SX then yes, it would make things slower :) But CPU is rarely the bottleneck on any decent VPS, so the effect would be negligible. If you have several php-fpm processes sitting around doing nothing for an extended period, there's a chance that their memory gets swapped out, which would create issues...

    @dancom96 said: Also, do you really need that many PHP-FPM workers? I did a benchmark a while ago that I did not post about, but I did not see any difference on a single-core VPS with anywhere from 1, 2, 4, 5 or 6 workers.

    Are you referring to the "pm.max_children = 5" example I posted above? It's the maximum number of simultaneous php-fpm processes to allow, not the number to always run.

  • @sleddog said: Are you referring to the "pm.max_children = 5" example I posted above? It's the maximum number of simultaneous php-fpm processes to allow, not the number to always run.

    Yeah I know it's the maximum, but what I meant is, when you have enough users to make PHP-FPM spawn that many processes, are they even needed if you can serve the same number of pages per second with 1 process, instead of 5?

  • @dancom96 said: Yeah I know it's the maximum, but what I meant is, when you have enough users to make PHP-FPM spawn that many processes, are they even needed if you can serve the same number of pages per second with 1 process, instead of 5?

    It's probably not "needed", but it's most likely more efficient/faster. Do some testing with ab.
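
    For example (URL, request count and concurrency are placeholders):

    ab -n 500 -c 10 http://example.com/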

  • dancom96 Member
    edited March 2012

    @Kairus said: It's probably not "needed", but it's most likely more efficient/faster. Do some testing with ab.

    That is exactly what I did a couple of weeks ago: I got the same number of pages/sec. I tested a WP site (caching disabled, obviously) using ab with a concurrency of 5, and again with a concurrency of 10.

  • sleddog Member
    edited March 2012

    @dancom96 said: Yeah I know it's the maximum, but what I meant is, when you have enough users to make PHP-FPM spawn that many processes, are they even needed if you can serve the same number of pages per second with 1 process, instead of 5?

    You have to answer that question yourself :)

    In /etc/php5/fpm/pool.d/www.conf configure the php-fpm status page (a sketch of the required settings follows at the end of this comment). Then you'll be able to see stats like this in your browser:

    pool:                 www
    process manager:      ondemand
    start time:           10/Mar/2012:09:28:59 -0330
    start since:          892453
    accepted conn:        43071
    listen queue:         0
    max listen queue:     1
    listen queue len:     128
    idle processes:       0
    active processes:     1
    total processes:      1
    max active processes: 9
    max children reached: 0
    

    See the last two lines. This server at one point was running 9 php-fpm processes. The last line shows that it's never hit the configured maximum (not shown, but it's set to 15.)

    If you set your max to "2" (for example) and then see "max children reached" steadily incrementing over time, then it's a hint you should raise your "max" -- within your memory limitations of course.
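
    Enabling the status page might look roughly like this; the status path, the allowed address and the fastcgi_pass target are assumptions, so match them to your own pool and server block:

    ; /etc/php5/fpm/pool.d/www.conf
    pm.status_path = /fpm-status

    # nginx server block: keep the page private and hand it to the pool
    location = /fpm-status {
        access_log off;
        allow 127.0.0.1;
        deny all;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }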

  • NickM Member

    @Maounique I'm running a Simple Machines Forum on a VPS with 512MB of RAM, and it handles 200 concurrent users just fine. I use 4 nginx workers and php-fpm with ondemand process management (up to 20 workers, typically only 5 or 6 are active at any given time). Also, I'm running another fairly busy site from the same PHP pool. It could probably handle another 200 users and not run into a problem.

    At 64MB, your biggest problem is likely MySQL. In my experience, with mysql, you can reduce memory usage, sure, but you're sacrificing performance when you do.
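
    For reference, trimming MySQL on a small box might look roughly like this; the values are only an illustration and, as noted, they trade performance for memory:

    # /etc/mysql/conf.d/lowmem.cnf
    [mysqld]
    key_buffer_size = 8M
    query_cache_size = 4M
    max_connections = 30
    innodb_buffer_pool_size = 16M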

    Thanked by 2: Maounique, yomero
  • M66B Veteran

    Thanks for bringing the typo in php-apc to my attention and suggesting the ondemand process manager! The setup guide has been updated. Any more remarks?

  • thanks m66b. nice tutorial

  • joepie91 Member, Patron Provider

    If you expect traffic peaks of some kind, use the on-demand PHP setting (which is, if I understand correctly, similar to the way lighttpd does PHP), or your visitors will get HTTP 50x errors when you get slashdotted/reddit'd/somehow peaked. If your traffic is constant (a community forum without anything particularly remarkable), you should probably be fine with pre-configured workers and it will make your pages load slightly faster.
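
    For concreteness, the two approaches in pool.d/www.conf might look like this (the numbers are placeholders):

    ; spiky traffic: spawn workers only when needed
    pm = ondemand
    pm.max_children = 10
    pm.process_idle_timeout = 10s

    ; steady traffic: keep a few workers warm for slightly faster responses
    ;pm = dynamic
    ;pm.start_servers = 2
    ;pm.min_spare_servers = 1
    ;pm.max_spare_servers = 3
    ;pm.max_children = 10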

  • @joepie91 said: If you expect traffic peaks of some kind, use the on-demand PHP setting

    Make sure it's well defended against hackers and people who DDoS, too!

  • M66B Veteran

    @joepie91: I am going to test the on demand process manager. I have never used it before.

    @DanielM: do you have any suggestions that should be added to the tutorial?

  • @DanielM said: Make sure it's well defended against hackers and people who DDoS, too!

    This reduces the impact that slowloris and other HTTP DoS tools that open lots of connections can have:

    iptables -A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 50 -j DROP
    iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 -j DROP
    service iptables save   # RHEL/CentOS-style; on Debian/Ubuntu persist the rules with iptables-save (e.g. via iptables-persistent) instead

    Change the connection limits as needed.
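
    To see whether the rules are actually matching anything, check the packet counters on the DROP rules, e.g.:

    iptables -L INPUT -v -n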

    Thanked by 1: M66B
  • netomx Moderator, Veteran

    @dmmcintyre3 said:

    iptables -A INPUT -p tcp --syn --dport 443 -m connlimit --connlimit-above 50 -j DROP
    iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 50 -j DROP
    service iptables save

    What is "above 50"... seconds? Milliseconds?

  • dmmcintyre3 Member
    edited April 2012

    50 connections (per IP)

  • M66B Veteran

    I chose this approach (it is also in the tutorial):

    iptables -I INPUT 1 -i eth0 -p tcp --syn --dport 80 -m recent --rcheck --seconds 10 --hitcount 20 --name http -j LOG --log-prefix "Rate limit: "
    iptables -I INPUT 2 -i eth0 -p tcp --syn --dport 80 -m recent --update --seconds 10 --hitcount 20 --name http -j DROP
    iptables -I INPUT 3 -i eth0 -p tcp --syn --dport 80 -m recent --set --name http

    What are the advantages/disadvantages of both solutions?

  • It looks like it would take 10 seconds for your rules to notice an attack.

  • M66B Veteran

    That is true. It is a balance between blocking malicious traffic and not blocking legitimate traffic.

  • My rules would let up to 50 connections be opened without problems, then the 51st connection would not be accepted.
