Urgent HAProxy / Varnish-Nginx-Apache help needed

jhjh Member
edited December 2012 in Help

I have recently had a bit of luck in picking up a client who is meeting a large client of their own tomorrow morning with some benchmarks of an optimised setup on VirtIO, but unfortunately we still don't have any benchmarks, as the pages are timing out. This is urgent work, so I can offer a small bounty in the form of either cash or an LEB to anyone who comes up with the answer in good time.

The current setup is as follows:

Small VM running HAProxy as a load balancer
2x Medium VMs running Varnish on port 80 and Nginx on port 8080, in front of Apache.
1x Large VM running MySQL, though this isn't currently in use, as the pages we're testing with at the moment are static.

Currently, I can curl the public IP of the load balancer from the load balancer itself and it will get the Nginx default page, which is what we want at this stage.

From anywhere else, it takes several seconds and then times out. I have tried to enable logging from HAProxy to syslogd, but nothing is getting logged (see below).

haproxy.cfg:

global
    #uid 99
    #gid 99
    daemon
    stats socket /var/run/haproxy.stat mode 600 level admin
    maxconn 40000
    ulimit-n 81000
    log 127.0.0.1 local0 notice
    log 127.0.0.1 local1
    pidfile /var/run/haproxy.pid

defaults
    mode http
    contimeout 4000
    clitimeout 42000
    srvtimeout 43000
    balance roundrobin

listen webfarm 91.227.223.61:80
    mode http
    stats enable
    stats auth admin:password
    balance roundrobin
    cookie web insert indirect nocache
    option httpclose
    option forwardfor
    option httpchk HEAD /check.txt HTTP/1.0
    server webA 91.227.223.58:80 cookie A check
    server webB 91.227.221.121:80 cookie B check
    option forwardfor except 91.227.223.61
    reqadd X-Forwarded-Proto:\ https
    reqadd FRONT_END_HTTPS:\ on
    log global

listen stats :7777
    stats enable
    stats uri /
    option httpclose
    stats auth admin:password

rsyslog.conf:

...
# Save HA-Proxy logs
local0.*                                                /var/log/haproxy_0.log
local1.*                                                /var/log/haproxy_1.log

ls -lah /var/log

...
-rw-------  1 root      root        0 Dec 23 12:29 haproxy
-rw-r--r--  1 root      root        0 Dec 23 20:13 haproxy_0.log
-rw-r--r--  1 root      root        0 Dec 23 20:13 haproxy_1.log
...
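A likely reason the log files above stay empty: `log 127.0.0.1 local0` makes HAProxy send messages over UDP to localhost, and rsyslog on Debian/Ubuntu does not listen on UDP out of the box. A sketch of the missing receiver config (assuming rsyslog's legacy directive syntax; 514 is the standard syslog port):

```conf
# /etc/rsyslog.conf -- enable the UDP listener that HAProxy sends to
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
```

Restart rsyslog afterwards (e.g. `service rsyslog restart`); if the missing listener was the problem, entries should start appearing in /var/log/haproxy_0.log.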

Let me know if anyone would like to see anything else.

Thanks in advance.


Comments

  • Well, I'd do a tear down on this first.

Varnish can be a pain in the arse at times. Have you checked the server ulimits? Make sure you have everything cranked up, as requests here are traversing four different layers per request/connection.

The workflow looks like:
    HAProxy --> Varnish on port 80 --> Nginx on port 8080 --> Apache on port ??

    Do this first:
    HAProxy --> Apache

Test that it works.

    Then do this:
    HAProxy --> Nginx --> Apache

Finally, reintroduce Varnish where it belongs.

Unsure why you are doubling up on Varnish and Nginx in one install like this. Typically we only break Varnish out when we have a big dedicated server chunk just for it (i.e. 8-16GB of RAM dedicated to Varnish). Not being critical; sometimes there are special needs that justify it.

Also check the Nginx proxy settings to make sure you have reasonable/low timeouts for Apache.
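One way to run that tear-down, sketched as a set of curl checks. The IPs are taken from the haproxy.cfg in the original post, and `socat` is assumed to be installed for the stats-socket query:

```shell
# Hit a backend directly, bypassing HAProxy:
curl -sv -o /dev/null http://91.227.223.58/        # Varnish on port 80
curl -sv -o /dev/null http://91.227.223.58:8080/   # Nginx behind it

# Hit the load balancer from an OUTSIDE machine (it already works locally):
curl -sv -o /dev/null http://91.227.223.61/

# Ask HAProxy how it sees its backends (health-check state, per-server counters):
echo "show stat" | socat stdio /var/run/haproxy.stat
```

Whichever hop first stops responding is the layer to dig into.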

  • jhjh Member

Sorry, I made a mistake: there's no Apache, it's PHP5-FPM.

    Each of the servers above has several GB of memory for caching alone, hence Varnish.

    Just tried disabling Varnish and running Nginx on port 80, same problem.

So, do all these services function by themselves if you access them on a port directly?

If you can't get the PHP page from Nginx itself, then I'd review the PHP5-FPM install and make sure the port it is listening on is the same as in nginx.conf. I've had that problem occur before.

    Nginx + PHP is a fairly straightforward install, typically.

    So,
    Nginx port 80 ---> PHP5-FPM

Does that time out on a default or placeholder-style PHP page? Check the PHP logs; if there's nothing, it's almost certainly a port/mapping issue.
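The port-mapping check described above, as a hypothetical nginx server block. 127.0.0.1:9000 is a common PHP5-FPM default, and the paths here are illustrative:

```nginx
server {
    listen 8080;
    root /var/www;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # This address must match the "listen" line in PHP-FPM's pool
        # config (e.g. "listen = 127.0.0.1:9000" in www.conf):
        fastcgi_pass 127.0.0.1:9000;
    }
}
```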

  • BTW: Not to derail conversation, but the HAProxy along with Varnish in this instance seems to be unnecessary.

The spare memory can be put to use doing caching within Nginx, or in another running Nginx instance. Nginx is likely easier to maintain than Varnish (my experience).

    The HAProxy also seems unnecessary since we are doing A-B round robin. Nginx can do that very simply.

It gets complicated when any one of these pieces of software breaks, now or in the future.
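For reference, an A-B round robin in Nginx is only a few lines; a sketch using the backend IPs from the haproxy.cfg in the original post:

```nginx
upstream webfarm {
    # round robin is the default scheduling when no directive is given
    server 91.227.223.58:80;
    server 91.227.221.121:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://webfarm;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```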

  • jhjh Member

@pubcrawler

    Not actually using PHP at the moment, just trying to get the static Nginx default page to show up.

Eventually we will be passing to the server with the least connections, and using loadbalancer.org's software to monitor the connections and the uptime of each server, etc. Not sure if the same can be achieved with something else, but for now it would be nice just to get this working, as it's what I've sold the client on.

What exactly are you trying to achieve? If you want to load balance 2 webservers with a MySQL cluster, you are getting it wrong, buddy.

  • @jhadley,

Nginx should, out of the box, install with the default page returned when you access it via the IP or any DNS pointing to it.

I'd check the install candidate version, do a fresh install of current Nginx, and test until you get just that working. Then build up from there.

Hard to say what is wrong, but a bad install is possible, or the file for Nginx's default page may not exist. Caching is another issue (try accessing the page via a remote proxy). There's a lot of funk around the Nginx install on Debian/Ubuntu, which we use, with the distro shipping an ancient version.

I've seen your problem as described once or twice. Check the nginx logs to see if anything is actually getting there and being logged. Make sure the nginx process is also running; startup might not have run it (obvious, but debugging is what it is).

  • jhjh Member

    @pubcrawler

The Nginx page works fine when it is loaded directly from the web server, but not through HAProxy, where it simply times out.

    The Nginx logs are pretty much empty, as if the connections aren't ever getting through to the web server.

  • I'd revise this configuration when perfected to this:

Small VPS running Nginx in proxy mode (this is where public connections come in and out)
    2x Medium VMs running Nginx on port 80, in front of PHP5-FPM

Do the caching you intended within Nginx, even if in a separate instance on the 2x medium VMs.

    Eliminate HAProxy, eliminate Varnish.

The MySQL VM: make sure it has a private VLAN interface, and that the 2x Medium VMs also have private VLAN. I've seen this before, where it's all done on the public interface due to a config oversight, and performance suffers greatly.

If you have a compelling number of objects to cache (like a photo hosting site), then put Varnish in a new VM up front, before everything.

  • jhjh Member
    edited December 2012

    @pubcrawler said: Small VPS running Nginx in PROXY mode (this is where connections / public comes in and out of)

    Does it support this:

    @jhadley said: passing to the server with the least connections, and using heartbeat to monitor the connections and the uptime of each server and eventually doing the load balancing with a pair of servers in case one goes down.

    ?

  • Yes, Nginx runs great as a load balancer / Proxy. It is mainly what I use it for :)

    As far as heartbeat and load distribution, let me look.

Yes, I totally agree. Half of the world using NGINX uses it as a load balancer, the rest as a reverse proxy, and others as a webserver.

Keepalived would be a workaround to accomplish the same functionality with Nginx (heartbeat).

Uptime checking in Nginx is on the fly. It flags bad backends in the pool, and you have control over what counts as a failure and what to do with that state.

    See this about a keepalived implementation:
    http://evolution.voxeo.com/wiki/kb:swloadbalancingfailover
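To give a flavour of it, a minimal keepalived sketch for an active/passive balancer pair. The interface name, VRRP id and priority are illustrative assumptions, and the VIP shown is the load balancer IP from the original post:

```conf
vrrp_instance VI_1 {
    state MASTER             # use BACKUP plus a lower priority on the standby box
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        91.227.223.61        # the floating IP that clients connect to
    }
}
```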

There is least-connections functionality in Nginx's config; although the documentation sucks, this should get you searching in the right place:

    http://wiki.nginx.org/HttpUpstreamModule
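In newer nginx builds there is a built-in `least_conn` directive (introduced in the 1.3.x line, and backported to 1.2.2, as I understand it), so the third-party modules aren't always needed. A sketch with the backend IPs from the original post:

```nginx
upstream webfarm {
    least_conn;              # pick the backend with the fewest active connections
    server 91.227.223.58:80;
    server 91.227.221.121:80;
}
```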

I think he must clarify what exactly he wants to achieve in the first place. Combining modules like this would be confusing.

  • chihcherngchihcherng Veteran
    edited December 2012

    @jhadley said: Small VM running HAProxy as a load balancer

Virtualization seems to hurt HAProxy's performance quite badly. Please refer to:

    Hypervisors virtual network performance comparison from a Virtualized load-balancer point of view

By the way, Microsoft's Hyper-V seems to perform quite well among the different hypervisors.

    If you must run HAProxy, it would be better to run it on a low end dedicated server than within a VM:

    Re: nginx alone performs x2 than haproxy->nginx

  • Keepalived may or may not be what you need.

If you are intending to balance a large, slow, dynamic app load, then it won't be. Keepalived tends to be a failover catch.

In the case of the big slow dynamic app load, the strategy is to determine the cause of the slowness (if slowness exists: slow queries, complicated app logic, or just plain high activity). Then optimize.

You have to watch the amount of activity in such a solution on a shared VPS resource. It sounds like the site might already be large and experiencing potential bottlenecks wherever it lives now. Sites like these I throw on dedicated servers. Every time we route anything through this solution, it multiplies resource consumption on sockets, and quickly.

    That's why I said yank Varnish and simplify.

Beyond that, if you are in the VPS business, consider a dedicated virtualized server for the customer; it would be prudent, considering it seems to be a large customer.

  • Not trying to steer you away, but I've received amazingly fast answers from nginx mailing list in regards to nginx specific questions.

This is a module for Nginx; unsure of its current support, inclusion, etc. Do the footwork to determine if it's of interest and suitable. It seems to deal with your interest in balancing based on backend idleness or lack thereof:

    "The fair load balancer module for Nginx (upstream_fair)"

    upstream_fair is a load balancer module for the fantastic Nginx web server. It implements somewhat smarter logic than the built in pure round-robin load balancer and may be better suited to diverse workloads (a mix of fast and slow pages) than the stock balancer.

    The main feature of upstream_fair is that it knows how many requests each backend is processing (a backend is simply one of the servers, among which the load balancer has to make its choice). Thus it can make a more informed scheduling decision and avoid sending further requests to already busy backends.

    see: http://nginx.localdomain.pl/wiki/UpstreamFair

  • @bamn, I second that.

The Varnish mailing list used to be quite the same. Unsure of late.

  • @darknessends said: Combining modules like this would be confusing.

    No it won't...

  • jhjh Member
    edited December 2012

    Thanks everyone - getting a lot of really valuable input on nginx vs haproxy.

    The client wants to stick with HAProxy as it apparently performs better, however I have managed to get it working by deleting OnApp's loadbalancer.org application and installing HAProxy via aptitude and using the same config.

    The first round of benchmarks will simply be to see how many static pages per second we can get out of this as a theoretical maximum to present to the end client. Should be exciting!

Don't forget: if session data is involved with visitors (shopping carts being the most well-known example), you need to use the ip_hash functionality in the Nginx balancer to get a user back to the right server on each request...
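A sketch of that, again using the backend IPs from the original post. `ip_hash` keys on the client's IP address, so each visitor lands on the same backend:

```nginx
upstream webfarm {
    ip_hash;                 # same client IP -> same backend, so sessions stick
    server 91.227.223.58:80;
    server 91.227.221.121:80;
}
```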

Comparisons of HAProxy and Nginx in a shared environment (VPS) are going to be rather problematic. Both products are great and are intended for low latency and high throughput. Those two qualities are somewhat diminished in a VPS environment, which creates other random problems.

I'm amazed by clients that have technical preferences like that, but shop for vendors to implement bits and pieces of things. This stuff can get pretty complicated fast.

    Good luck with that client :)

  • eva2000eva2000 Veteran
    edited December 2012

No problems using haproxy 1.5.x dev builds for a load-balanced cluster with nginx 1.1 and 1.2 on my live centminmod.com web site, across 4-5x LEB VPSes (mainly OpenVZ, but with 1x KVM VPS in there, and primary, secondary and tertiary haproxy load balancer backups/failover).

Also used haproxy 1.5.x dev builds for a haproxy -> nginx -> apache cluster of web servers for a large vBulletin forum client, with a 15M-post forum, 600,000 registered members and 4,000 vB users online.

Which version of haproxy are you using? Stable, or the 1.5.x dev builds?

As for haproxy vs nginx: nginx will have better latency response times than haproxy, but the difference is <2-5%. haproxy has better concurrency scaling performance, though; 400,000 concurrent unique users is a piece of cake with haproxy.

  • @jhadley said: The client wants to stick with HAProxy as it apparently performs better, however I have managed to get it working by deleting OnApp's loadbalancer.org application and installing HAProxy via aptitude and using the same config.

Ah, didn't see that part... OnApp's loadbalancer.org app?

  • jhjh Member

    I'm using 1.4.18 now, I think the app uses something older though.

@eva2000 said: OnApp's loadbalancer.org app ?

    Basically a small VM template that includes HAProxy, Pound, Heartbeat and a GUI for it all. Turned out it's not the best way to go.

  • Can you post the benchmarks along with the specs of the machines when you are done?

  • jhjh Member

    @gsrdgrdghd said: Can you post the benchmarks along with the specs of the machines when you are done?

We've done a few tests on Blitz.io and can easily push what we've tried sending, with no errors at all on the LB, up to about 550 hits per second. We seem to be getting a few errors on Blitz.io though, which I think may be network-related (not sure at whose end, as very little information is given).

    The specs of the machines are in this test case:

    512MB RAM / 5GB disk for the LB
    2x 3GB RAM / 20GB disk for the web servers
    6GB RAM / 60GB for the backend server which is currently unused

I'm deploying another instance to try AB now, to rule out these Blitz.io errors.
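For reference, a typical ApacheBench invocation against the balancer's static page; the request counts are illustrative, and it's best run from a machine outside the cluster so the test client's CPU doesn't skew the numbers:

```shell
# 10,000 requests, 100 concurrent, against the load balancer IP from the post:
ab -n 10000 -c 100 http://91.227.223.61/
```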
