Opinions about building a two-node HA virtualization cluster

Hey everyone

I'm currently looking into building a two-node HA virtualization cluster without any kind of SAN storage and found various options for that:

  • Proxmox Cluster (KVM) with DRBD
  • XenServer with StarWind Virtual SAN (looks a bit fiddly)
  • Hyper-V with StarWind Virtual SAN
  • LXC Cluster with LCMC, Pacemaker, Heartbeat and DRBD

I'm mainly looking for opinions to help me decide which one to pick. So far Proxmox and LXC sound like the easiest ways to do this without a lot of ugly tweaks. Or are there even better solutions? The link between the two nodes is reliable with almost no latency, and fencing can be easily implemented (iDRAC, iLO). The cluster should be active-active, and in case of a failure the VMs should be migrated automatically.
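For context on the Proxmox + DRBD option: an active-active setup needs DRBD in dual-primary mode, and a minimal resource definition might look roughly like this. Hostnames, IPs and disk paths below are placeholders, and the split-brain policies are just one reasonable choice:

```
# /etc/drbd.d/vmdata.res -- sketch of a dual-primary resource (DRBD 8.4 syntax)
resource vmdata {
    protocol C;                           # synchronous replication; required for dual-primary
    net {
        allow-two-primaries yes;          # both nodes may be Primary (needed for live migration)
        after-sb-0pri discard-zero-changes;  # auto-resolve trivial split-brain cases
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;            # anything worse needs manual intervention
    }
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}
```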

Best regards
NeoXiD

SnapServ Mathis - Your cheap and reliable RIPE Sponsoring LIR. Use coupon code LET2017 to get a recurring discount of 10% on our products!

Comments

  • perennate Member, Provider
    edited December 2015

    Two-node isn't very useful since there's always a possibility of split-brain. But if needed I'd say KVM (or anything) on drbd and whatever cluster management software you like.

    Edit: oh, I guess fencing solves that mostly, although that can cause other problems sometimes since the nodes can't wait to get a majority quorum before fencing
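    For reference, Corosync's votequorum exposes exactly this trade-off for two-node clusters. An illustrative excerpt, not a complete config:

    ```
    # corosync.conf quorum section -- the two-node special case (illustrative)
    quorum {
        provider: corosync_votequorum
        two_node: 1        # let the surviving node keep quorum after the peer is fenced
        wait_for_all: 1    # but only grant quorum once both nodes have been seen together
    }
    ```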

  • NeoXiD Member
    edited December 2015

    @perennate said:
    Two-node isn't very useful since there's always a possibility of split-brain. But if needed I'd say KVM (or anything) on drbd and whatever cluster management software you like.
    Edit: oh, I guess fencing solves that mostly, although that can cause other problems sometimes since the nodes can't wait to get a majority quorum before fencing

    As long as the link between the two servers is reliable, split-brain shouldn't occur. The nodes can check each other via iLO/iDRAC to get the current state. But of course, there's always some risk left. I also had the idea of deploying separate containers on both host nodes, setting up a primary/primary GlusterFS volume, and then using application-specific HA (e.g. HAProxy & nginx, HAProxy & ejabberd, MySQL with master/master replication, ...)

    It just isn't as elegant as VMs that simply migrate automatically, but it would address my fear of split-brain, since split-brain situations are not hard at all to resolve with GlusterFS. I'd still love to hear other opinions about the whole thing and/or your own experiences.
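    A replica-2 GlusterFS volume of the kind described can be set up in a few commands. Hostnames and brick paths here are made up; `cluster.favor-child-policy` is the knob that lets Gluster auto-resolve many file split-brain cases:

    ```shell
    # run on node1 after both nodes have glusterd running (names/paths are placeholders)
    gluster peer probe node2
    gluster volume create appdata replica 2 \
        node1:/bricks/appdata node2:/bricks/appdata
    gluster volume start appdata
    # prefer the copy with the newer mtime when a file ends up split-brained
    gluster volume set appdata cluster.favor-child-policy mtime
    ```

    Note that plain replica 2 without an arbiter brick is itself split-brain-prone under network partitions; an arbiter brick on a tiny third box avoids most of that.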


  • perennate Member, Provider

    I recommend going with the software approach instead of trying to get HA at the virtual-machine level. It's a lot easier to solve problems and recover when things go wrong. With DRBD it can often be hard to debug performance problems, since there are so many layers stacked on top of each other. You can also easily test whether things are working by turning off one of the VMs, or just stopping an application on one VM.

  • Are you pursuing the HA for the VM, or the HA for the virtualization platform? I see very little value for the latter for only two nodes.



  • BharatB Member, Provider

    For 2 nodes I wouldn't suggest doing all that hard work. If you were dealing with, say, 6 nodes, then going OpenStack would require some brainstorming, but it works well in the end.



  • Radi Member, Provider

    Xen with Remus?



  • NeoXiD said: Proxmox Cluster (KVM) with DRBD

    NeoXiD said: LXC Cluster with LCMC, Pacemaker, Heartbeat and DRBD

    Proxmox can do either of these, but Ceph is going to require a 3-node minimum to reach quorum. Everything is built into Proxmox, their wiki is rich in detail, their forum will go to great lengths to help if you get stuck, and they offer a paid support option as well.

    So apart from Ceph requiring 3 nodes, Proxmox with DRBD is probably the easiest road to travel. Unless you already have a 10G network set up, you might want to look deeper into the other options, or be willing to invest a good chunk into a lab build.

  • NeoXiD said: I'm currently looking into building a two-node HA virtualization cluster without any kind of SAN storage and found various options for that:

    Depends what you need to share between nodes.

    A single static site? Trivial.

    A database-based site where data needs to be eventually consistent but it's OK if any given viewer misses the latest blog comment? More than trivial but easy.

    A transactional database where every node needs to always have the same information? Much more difficult.



  • Just make a cronjob to scp the VMs between servers. With only 2 nodes there's not much possibility for a good HA system; a third system is required in most cases.
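    Taken literally, that's a one-line cron entry. The host and paths below are hypothetical, and note that copying a running VM's disk this way is only crash-consistent, so snapshot the image first if the guests do real writes:

    ```shell
    # /etc/cron.d/vm-sync -- hourly best-effort copy of disk images to a standby host
    0 * * * * root rsync -a --partial /var/lib/libvirt/images/ standby:/var/lib/libvirt/images/
    ```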



  • perennate Member, Provider

    Radi said: Xen with Remus?

    Remus isn't actively maintained and there are few users. It seems like a bad idea to run production VMs with it.

  • Just as a short update: I'm now evaluating XenServer together with HALizard. The website looks terrible, but the product has been running solidly so far, has several good reviews, and the community still seems to be active.

    The services won't be so critical that a third node would make sense, so I can definitely live with some downtime. I'm also going to use it for e.g. hypervisor upgrades without downtime (migrating services away) and similar tasks.

    Will update here once I know a bit more; maybe it will be useful for someone else as well. Thanks for all the opinions and suggestions so far!


  • +1 to the "Hyper-V with StarWind Virtual SAN" option. I've been trialing it for a month and have had a great experience so far. Now I'm just trying to refund our recklessly bought NAS and go with StarWind.

  • @NeoXiD said:
    The services won't be so critical that a third node would make sense, so I can definitely live with some downtime.

    A third node (in many HA systems) doesn't need to be a resource provider; it's just there to make up the voting quorum. If the two nodes lose connectivity for your defined period, they will both switch to master, and the quorum computer will (should) resolve which one stays the master.
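    In Proxmox terms, such a vote-only third machine is a QDevice (in current Proxmox VE releases). The setup is roughly as follows; package names are Debian's and the IP is illustrative:

    ```shell
    # on the small arbiter box (needs no VM capacity)
    apt install corosync-qnetd

    # on both cluster nodes
    apt install corosync-qdevice

    # then, from any one cluster node, register the arbiter
    pvecm qdevice setup 10.0.0.3    # IP of the arbiter
    ```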

  • William Member, Provider

    First choice: Proxmox + ZFS + iSCSI.

    Second choice: Proxmox + Ceph

    DRBD is slow.

  • miTgiB said: Ceph is going to require 3 node minimum to reach quorum.

    Ceph is also going to provide very low comparative performance unless you invest in some decent NVMe SSDs.
