Plan to keep a website online when 1 server is down.

Hello everyone!

I'd like to pick your brains on how to achieve a satisfactory solution to a particular problem.
It's a fact of life that servers will inevitably go down at some point, for whatever reason.

I wonder how you handle this.
Here are the scenarios.

a) Website hosted on cPanel with no access to DNS records.
b) Website hosted on a VPS, again with no access to DNS records.

With access to the DNS records, I guess that using DNSmadeeasy would be the solution, even if it's kind of expensive (is there an alternative?)

But what about when you can't access the DNS records to update the website IP?
I did think of using Scaleway, as they allow you to detach and attach the IP at will.

Any suggestions?

Comments

  • NeoonNeoon Community Contributor, Veteran
    edited December 2017

    If it's not the same DC, where you can play around with failover or floating IPs, the only option would be DNS, especially when you use shared hosting.

    You could get your own /24 and do Anycast, but that's gonna be expensive and does require some knowledge about BGP and stuff.

    Just get a DNS provider which offers an API, so you can move DNS records around.

    You can connect these to Uptime Robot/StatusCake for failover.
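The monitoring half of that setup can be a small script that probes the primary and calls the DNS API after a few consecutive failures. A minimal sketch in Python; the IPs are placeholders, and `switch_dns_to_backup` is a hypothetical hook where your DNS provider's API call would go:

```python
import socket
import time

PRIMARY = ("203.0.113.10", 80)   # placeholder primary IP and port
BACKUP_IP = "198.51.100.20"      # placeholder backup IP

def is_up(host, port, timeout=5):
    """Cheapest possible health check: can we open a TCP connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def switch_dns_to_backup(ip):
    """Hypothetical hook: call your DNS provider's API here."""
    print(f"would repoint the A record to {ip}")

def watch(check_interval=30, fail_threshold=3):
    """Fail over only after several consecutive misses, to avoid flapping."""
    misses = 0
    while True:
        misses = 0 if is_up(*PRIMARY) else misses + 1
        if misses >= fail_threshold:
            switch_dns_to_backup(BACKUP_IP)
            misses = 0
        time.sleep(check_interval)
```

A TCP connect is the bare minimum; an HTTP GET that checks the response body would be a stricter health check.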

  • MasonRMasonR Community Contributor

    Use Cloudflare DNS. If you detect that the VPS hosting your website is down, use their API to update the DNS record to a different IP.
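The record update itself is a single authenticated PUT against Cloudflare's v4 API. A rough sketch using only the standard library; the token, zone ID, and record ID are placeholders you would take from the dashboard:

```python
import json
import urllib.request

CF_API = "https://api.cloudflare.com/client/v4"

def build_record_payload(name, ip, ttl=120):
    # Keep the TTL low so resolvers pick up the failover address quickly.
    return {"type": "A", "name": name, "content": ip, "ttl": ttl}

def update_a_record(token, zone_id, record_id, name, ip):
    """Repoint an existing A record at a new IP via the Cloudflare v4 API."""
    req = urllib.request.Request(
        f"{CF_API}/zones/{zone_id}/dns_records/{record_id}",
        data=json.dumps(build_record_payload(name, ip)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("success", False)
```

Even with a low TTL, resolvers that ignore it will keep handing out the dead IP for a while, so this is a mitigation rather than true zero-downtime failover.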

  • Route 53 has very good DNS failover, if that's what you're looking for.

    If you can't modify the address that the DNS record points to, then there's little you can do when that address goes down. You could point the record to a reverse proxy running nginx or similar, and if your main website goes down, just update it to point to a backup server.
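That reverse-proxy idea maps directly onto nginx's `backup` flag in an upstream block. A minimal sketch, assuming two placeholder backend IPs:

```nginx
upstream site {
    server 203.0.113.10;            # primary web server
    server 198.51.100.20 backup;    # only used when the primary is unreachable
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://site;
        proxy_set_header Host $host;
    }
}
```

The proxy itself then becomes the single point of failure, which is the trade-off raised later in this thread.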

  • AnthonySmithAnthonySmith Member, Patron Provider

    404error said: But what about when you can't access the DNS records to update the website IP?

    I can't imagine any scenario whereby a website being up is important and I don't have access to the DNS records.

    But... assuming you can get the DNS updated somehow, create an haproxy cluster, point the record at that, and have the website running in multiple locations. haproxy can handle the failover if one server is offline; until then you can just use it as a load balancer.

  • @AnthonySmith said:

    404error said: But what about when you can't access the DNS records to update the website IP?

    I can't imagine any scenario whereby a website being up is important and I don't have access to the DNS records.

    But... assuming you can get the DNS updated somehow, create an haproxy cluster, point the record at that, and have the website running in multiple locations. haproxy can handle the failover if one server is offline; until then you can just use it as a load balancer.

    Looking into haproxy now. It does seem to be the solution to my problem.
    If I got it right, I would need to set up haproxy on its own server (a VPS would do?).
    Then have haproxy "know" that website X is located at 2 different IPs.

    I would only need to update the DNS record once, updating the website record to point to haproxy.

    Is this the general gist?

  • @Neoon said:
    If it's not the same DC, where you can play around with failover or floating IPs, the only option would be DNS, especially when you use shared hosting.

    You could get your own /24 and do Anycast, but that's gonna be expensive and does require some knowledge about BGP and stuff.

    Just get a DNS provider which offers an API, so you can move DNS records around.

    You can connect these to Uptime Robot/StatusCake for failover.

    Not the same DC; my idea is to actually make sure I have 2 or 3 shared accounts with providers using different DCs,
    so that even if one DC gets connectivity problems, it does not affect my website.

    Getting my own /24 would overcomplicate things, and I would probably be the cause of downtime instead of the DCs or the server ;)

  • @MasonR said:
    Use Cloudflare DNS. If you detect that the VPS hosting your website is down, use their API to update the DNS record to a different IP.

    Can't do it, I can't take over the DNS management in any way. That's probably the biggest issue here.

  • MasonRMasonR Community Contributor

    @404error said:

    @MasonR said:
    Use Cloudflare DNS. If you detect that the VPS hosting your website is down, use their API to update the DNS record to a different IP.

    Can't do it, I can't take over the DNS management in any way. That's probably the biggest issue here.

    Why not? Who is the domain registered through?

  • Get a bunch of buyvm.net vpsses and host it on anycast @francisco

  • NeoonNeoon Community Contributor, Veteran

    @teamacc said:
    Get a bunch of buyvm.net vpsses and host it on anycast @francisco

    Same Network/ASN, single point of failure.

    Get at least 2 different DNS providers with an API + 2 different monitoring providers and route the traffic to your servers. Should work fine.

  • AnthonySmithAnthonySmith Member, Patron Provider

    404error said: Looking into haproxy now. It does seem to be the solution to my problem. If I got it right, I would need to set up haproxy on its own server (a VPS would do?). Then have haproxy "know" that website X is located at 2 different IPs.

    I would only need to update the DNS record once, updating the website record to point to haproxy.

    Is this the general gist?

    Yep, that is the general idea. Honestly, any VPS will do, it does not need much. To keep life simple I would suggest at least a 256MB KVM VPS though, and if you can get it on a DDoS-protected network then even better :)
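As a sketch, the haproxy side of that setup fits in a few lines; the backend IPs are placeholders, and `check` is what enables the health probing that drives the failover:

```
frontend www
    bind *:80
    default_backend sites

backend sites
    balance roundrobin
    option httpchk GET /
    server web1 203.0.113.10:80 check
    server web2 198.51.100.20:80 check backup
```

With `backup` on web2 this is pure failover; drop the keyword and haproxy load-balances across both servers while skipping whichever one fails its health check.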

    Thanked by 404error
  • @404error said:

    Looking into haproxy now. It does seem to be the solution to my problem.

    Try searching for "high availability setup with heartbeat" on Google. I'm no devops, but a load balancer with heartbeat detection may be what you're looking for.

    Thanked by 404error
  • defaultdefault Veteran
    edited December 2017

    DNS round robin + 2 cheap VPS holding the same content (VPS2 updating from VPS1 automatically). This is LET, we should promote cheap solutions.
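For reference, round robin is just two A records on the same name; a BIND-style zone fragment with placeholder IPs would look like:

```
; two A records on the same name; resolvers rotate between them
www    60    IN    A    203.0.113.10
www    60    IN    A    198.51.100.20
```

The low TTL helps, but as the round-robin caveats in this thread note, clients can still be handed the dead address.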

  • @default said:
    DNS round robin + 2 cheap VPS holding the same content (VPS2 updating from VPS1 automatically). This is LET, we should promote cheap solutions.

    According to https://en.wikipedia.org/wiki/Round-robin_DNS

    "Although easy to implement, round robin DNS has a number of drawbacks, such as those arising from record caching in the DNS hierarchy itself, as well as client-side address caching and reuse, the combination of which can be difficult to manage. Round robin DNS should not solely be relied upon for service availability. If a service at one of the addresses in the list fails, the DNS will continue to hand out that address and clients will still attempt to reach the inoperable service."

  • cfgguycfgguy Member, Host Rep

    Tell your client to point the domain to a custom CNAME, and you control that CNAME.

  • AnthonySmithAnthonySmith Member, Patron Provider
    edited December 2017

    Yeah, that's why an haproxy server is a better option: you can have it handle round robin for you with automatic failover, and it notifies you if one target is dead.

    Perhaps buy a KVM VPS from @clouvider; he has self-healing SAN-based VPS services for about £5 per month and they are DDoS protected. It's not to say it can't go down or have issues, it just has a lot more resilience.

    Thanked by Clouvider
  • And does anyone have any idea how to sync the content and the database of those websites? Assuming that it is using cPanel shared hosting.

  • @yokowasis said:
    And does anyone have any idea how to sync the content and the database of those websites? Assuming that it is using cPanel shared hosting.

    Sadly, there's currently no way to set up replication between databases.

  • NeoonNeoon Community Contributor, Veteran

    But what happens when the haproxy dies? The downtime never ends.

  • @Neoon said:
    But what happens when the haproxy dies? The downtime never ends.

    dead man's switch at the datacenter

    Thanked by Aidan
  • defaultdefault Veteran
    edited December 2017

    @404error said:

    @default said:
    DNS round robin + 2 cheap VPS holding the same content (VPS2 updating from VPS1 automatically). This is LET, we should promote cheap solutions.

    According to https://en.wikipedia.org/wiki/Round-robin_DNS

    "Although easy to implement, round robin DNS has a number of drawbacks, such as those arising from record caching in the DNS hierarchy itself, as well as client-side address caching and reuse, the combination of which can be difficult to manage. Round robin DNS should not solely be relied upon for service availability. If a service at one of the addresses in the list fails, the DNS will continue to hand out that address and clients will still attempt to reach the inoperable service."

    I thought you wanted a service available with some cheap solutions (Low End Talk?) and redundancy, in which case I recommended and still recommend round robin. However, you seem to want 100% availability on both redundant solutions, with the worst case being a global catastrophe while still keeping the service up. As Neoon recommended, "You could get your own /24 and do Anycast, but that's gonna be expensive and does require some knowledge about BGP and stuff." Enjoy your money.

  • Just thought I'd leave this here as I enjoyed this post at the time and it's an interesting setup.
    https://forum.lowendspirit.com/viewtopic.php?id=745

    Thanked by 404error
  • mfsmfs Banned, Member

    404error said: If a service at one of the addresses in the list fails, the DNS will continue to hand out that address and clients will still attempt to reach the inoperable service."

    This was addressed in a rather peculiar way by a VLC dev once.

  • @mfs said:

    404error said: If a service at one of the addresses in the list fails, the DNS will continue to hand out that address and clients will still attempt to reach the inoperable service."

    This was addressed in a rather peculiar way by a VLC dev once.

    This is certainly an interesting way of doing this, but falling back on TTL to do it is also quite arrogant. Even in the 90s, ISPs were caching zones for longer than a few seconds. In my testing, even Google's free DNS doesn't allow TTLs under 300 seconds (this may no longer be accurate; I haven't tried in a couple of years).

    It's a neat solution, but it's still nowhere near as useful as a good old load balanced system, even if it's an external setup like ClodFartz.

  • mfsmfs Banned, Member
    edited December 2017

    WSS said: ISPs were caching zones longer than a few seconds

    Well, it's not a "zero downtime" solution, and it's quite convoluted too. To be honest, most browsers learnt to skip dead A entries quite a while ago*, so it seems to me that "clever" moves could end up being even worse than just leaving multiple A/AAAA entries in round robin: web browsers and local devices should just cache all the possible entries and select them according to their availability. Attempts to "hide" or "reveal" entries on the fly are better kept for disaster-scenario corner cases imho, or for scenarios where something that isn't a web browser has to reliably resolve these domains.

    Then there are floating IPs, Anycast, infrastructures to monitor and double-check server health, and whatnot; most of the time they won't be cheap, and it's probably not even necessary to resort to them to support those $1000k/sec in ad revenue OP totally has from their site.

    *pdf link; here's an excerpt, circa 2007

  • Sadly, having this ability built into the browser would directly remove your control over handling it yourself, and then you've got a mini-lookup problem on top of that, too. There really is no good solution for trying to handle resolving by yourself.

  • 404error404error Member
    edited December 2017

    I want to thank everyone for pitching in with ideas.
    Thank you!

  • @WSS said:
    Sadly, there's currently no way to set up replication between databases.

    Found something for mariadb: https://mariadb.com/blog-tags/high-availability
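For completeness, classic MariaDB primary-to-replica replication boils down to a few statements on the replica, assuming the primary already has `server-id` and `log-bin` set in its my.cnf and a replication user created (all IPs and credentials here are placeholders):

```sql
-- run on the replica
CHANGE MASTER TO
    MASTER_HOST = '203.0.113.10',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'replica-password',
    MASTER_USE_GTID = slave_pos;
START SLAVE;
SHOW SLAVE STATUS\G
```

On cPanel shared hosting you generally don't get the privileges to configure this, which is presumably the limitation being discussed here; it needs a VPS or dedicated server on both ends.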

  • @jezznar said:

    @WSS said:
    Sadly, there's currently no way to set up replication between databases.

    Found something for mariadb: https://mariadb.com/blog-tags/high-availability

    Thanked by jezznar
  • @WSS said:

    @jezznar said:

    @WSS said:
    Sadly, there's currently no way to set up replication between databases.

    Found something for mariadb: https://mariadb.com/blog-tags/high-availability

    XD that took me 5 minutes to figure out
