Optimising nginx for static file serving

zsero Member

I've set up an nginx server purely for static file serving (like S3).
My problem is that even when load-testing with the tiniest possible file on the server (32 bytes total!), I start getting loads of TCP errors at only 600 concurrent users. The bandwidth stays in the 100-200 KB/s region, so it is clearly not the problem.

In Blitz.io, these are the results I get:
17 TCP Connection reset 544
23 TCP Connection timeout 6

I'd like to push this to 5000+ users, not 600.

Can you tell me what I should change in the default nginx configuration to make it possible to handle this many users?

Comments

  • CSharp Member
    edited June 2014

    Check the nginx error logs to see if there's more information on what caused the TCP connection resets, and get back to us.

  • ricardo Member
    edited June 2014

    There's probably something useful for you here: http://engineering.chartbeat.com/2014/01/02/part-1-lessons-learned-tuning-tcp-and-nginx-in-ec2/

    net.core.somaxconn in particular: there's a limit to the number of connections the OS will queue before they're passed on to the webserver. The default is usually 128. When you exceed that amount, the client TCP connection just gets dropped rather than being served by the webserver.
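
    Something along these lines (the values are only an example; nginx also needs to ask for the larger backlog):

    # raise the kernel's accept-queue limit
    sysctl -w net.core.somaxconn=4096

    # then request the bigger backlog on the nginx listen socket,
    # e.g. inside the server block:
    #     listen 80 backlog=4096;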

  • wojons Member

    Nginx Cache Settings http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache

    Make sure to test epoll in nginx http://nginx.org/en/docs/events.html

    Set a nice high limit for nginx worker_processes, multi_accept on and worker_connections; see the example below.
    http://nginx.org/en/docs/ngx_core_module.html
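
    For example (numbers are only illustrative; tune them to your cores and RAM):

    worker_processes auto;

    events {
        worker_connections 4096;
        multi_accept on;
        use epoll;
    }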

  • zsero Member

    Thanks for all this, I'm looking into them. There is nothing in the logs, BTW.

  • rostin Member

    Caching static files may help:

    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 30d;
    }
    
  • zsero Member
    edited June 2014

    It wouldn't help with load-testing services. Also, the files are going to be changing quite frequently; it'll be an HTTP server for a Linux repo.

  • Mun Member

    How much RAM on the server?

    How many CPU cores?

    What is worker_processes x; set to in /etc/nginx/nginx.conf?

    What is worker_connections x; set to in /etc/nginx/nginx.conf?

  • Well, you can probably get a bit more perf out of it by using ngx_http_gzip_static along with pre-gzipped content.
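
    Roughly like this, assuming your nginx build includes the module (the location is just an example):

    location /static/ {
        # if foo.css.gz exists next to foo.css, serve it as-is
        # instead of compressing on the fly
        gzip_static on;
    }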

  • gzip is a good idea in general but I don't know if it'll help in this instance @Rallias. If the CPU is the bottleneck it certainly won't make things much better.

    Do you have cache control on, OP?

  • It's *Optimizing

  • Infinity Member, Host Rep

    @Jeffrey said:
    It's *Optimizing

    In American English, but not everyone is American you dipshit.

    Thanked by: chrisp, eingress
  • AThomasHowe said: If the CPU is the bottleneck

    You don't know what ngx_http_gzip_static does, do you?

    Thanked by: AThomasHowe
  • AThomasHowe Member
    edited June 2014

    Rallias said: You don't know what ngx_http_gzip_static does, do you?

    No I don't, TIL. To be honest I don't know much about the non-default modules in nginx, it's a pain to recompile. I just saw gzip in your post.

  • Infinity said: In American English, but not everyone is American you dipshit.

    Didn't actually think of any other countries when I read it wrong, just shows how selfish us Americans are. :)

    Thanked by: Infinity, c0y
  • Profforg Member
    edited June 2014

    Rallias said: You don't know what ngx_http_gzip_static does, do you?

    Nginx will have to decompress these files sometimes, in case the user's browser doesn't support gzip.

  • Rallias Member
    edited June 2014

    AThomasHowe said: No I don't, TIL. To be honest I don't know much about the non-default modules in nginx, it's a pain to recompile. I just saw gzip in your post.

    Then don't recompile. Use nginx-core, nginx-full, nginx-naxsi, or nginx-extras

    Profforg said: Nginx will have to decompress these files sometimes, in case if user browser doesn't support gzip.

    Nope. That's only the case if you use ngx_http_gunzip
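
    That module is an explicit opt-in, e.g. (only relevant if you serve .gz-only content or always-gzipped responses):

    # decompress gzipped responses for clients that don't send
    # "Accept-Encoding: gzip"
    gunzip on;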

    Thanked by: ironhide
  • Rallias said: Then don't recompile. Use nginx-core, nginx-full, nginx-naxsi, or nginx-extras

    Those are just in the Debian repos though, right? I don't think they're in the CentOS nginx.org repos. Not that I use anything Red Hat based.

  • AThomasHowe said: They're not in the CentOS nginx.org repos I don't think.

    Well then that's their fault for using a decrepit OS

  • Rallias said: Well then that's their fault for using a decrepit OS

    Have you seen some of the decrepit old Red Hat sys admins? They can barely see the terminal anymore, never mind learn something new ;)

    Don't crucify me with your walking sticks, Red Hat fans.

  • alexh Member

    @Rallias said:
    Well then that's their fault for using a decrepit OS

    I believe nginx packages are available from EPEL. gzip_static does significantly reduce system load, but it can be tricky to configure. From what I remember, you create a cron job to pre-compress static files in desired directories; nginx serves these pre-compressed files as opposed to compressing them as they're requested.
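
    A rough sketch of that cron job (the path and compression level are just placeholders):

    # pre-compress static files, keeping the originals so clients
    # without gzip support still get the plain copy
    find /var/www/static -type f ! -name '*.gz' \
        -exec sh -c 'gzip -9 -c "$1" > "$1.gz"' _ {} \;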

  • AleksZ Member

    I am sure it's not quite up to date, but it will give you an idea of what you need to change in nginx AND in system (OS-level) settings:
    http://dak1n1.com/blog/12-nginx-performance-tuning

  • OpenVZ VPSes tend to have poor network performance, as they share the kernel (and its connection handling) with the host server.

    I once ran a 200 req/s service on an OpenVZ VPS and the host server crashed after running for some hours, while it used only 2% of CPU when I later switched to an i5 dedi.

    As you didn't mention what type of server you are running (if I didn't miss that), I suggest you switch to KVM if you are on VZ.

  • zsero Member

    It's an iwstack 1GB instance, KVM, 2 vCPU. CPU usage is something like 5%, so it can't be the bottleneck.

    Thanks for all the tips in this thread, I'll start with the open file cache and try all other tweaks mentioned.

  • zsero Member
    edited June 2014

    Also, I'm using nginx-light 1.6.0 from dotdeb, Debian 7 netinst 64-bit.

    Thanked by: ErawanArifNugroho
  • zsero said: Also, I'm using nginx-light 1.6.0 from dotdeb, Debian 7 netinst 64-bit.

    Yeah, so you should have ngx_http_gzip_static

  • zsero Member
    edited June 2014

    Thanks to the comments in this thread, as well as the following articles:
    http://dak1n1.com/blog/12-nginx-performance-tuning
    http://www.slashroot.in/nginx-web-server-performance-tuning-how-to-do-it
    https://news.ycombinator.com/item?id=6748767
    http://blog.zachorr.com/nginx-setup/

    I've come up with a config that works with 8000 concurrent connections on this iwstack VPS. Here it is:

    user www-data;
    worker_processes auto;
    pid /run/nginx.pid;
    worker_rlimit_nofile 65000;
    
    events {
        worker_connections 8000;
        multi_accept on;
        use epoll;
    }
    
    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
    
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
    
        keepalive_timeout 20;
        client_header_timeout 20;
        client_body_timeout 20;
        reset_timedout_connection on;
        send_timeout 20;
    
        keepalive_requests 100000;
    
        include /etc/nginx/mime.types;
        default_type text/html;
        charset UTF-8;
    
        open_file_cache max=65000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;
    
        gzip off;
        include /etc/nginx/sites-enabled/*;
    }
    

    I also needed to add

    root        hard    nofile  40000
    root        soft    nofile  40000
    www-data    hard    nofile  40000
    www-data    soft    nofile  40000
    

    to /etc/security/limits.conf
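
    To check the new limit actually applies to the workers after a restart (the pgrep pattern is just an illustration):

    # show the open-files limit of a running nginx worker
    grep 'open files' /proc/"$(pgrep -f 'nginx: worker' | head -n 1)"/limits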

  • AleksZ Member

    You probably need to change /etc/sysctl.conf too.

  • zsero Member

    @AleksZ said:
    you probably need to change /etc/sysctl.conf too

    I've read in the guides that that's quite low-level system tweaking and isn't needed for the usage I'm expecting. What kind of settings do you believe I should use?

  • AleksZ Member
    edited June 2014

    Something like this:


    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.core.netdev_max_backlog = 30000
    net.ipv4.tcp_congestion_control=htcp
    net.ipv4.tcp_mtu_probing=1
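
    Then apply them without a reboot (assuming they went into /etc/sysctl.conf):

    sysctl -p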
