OpenVZ migration without IP change

joshuatly Member
edited December 2012 in Help

Hi,

Would like to know if there are ways to live-migrate an OpenVZ VPS to another node without an IP change for the VPS?
Thanks in advance.

Comments

  • It has nothing to do with OpenVZ.
    If you mean something like migrating from one provider to another the short answer is "No".

  • IP pooling.

  • If all your servers are in the same VLAN, then yes. If not, then no.

    We have all our German servers in the same VLAN, so we can move them all without any issues.

  • vzmigrate has an --online switch for live migration, though it tends to work only 90% of the time:

    http://wiki.openvz.org/Migration_from_one_HN_to_another

    http://wiki.openvz.org/Man/vzmigrate.8
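The linked docs boil down to a short invocation; here's a minimal sketch, where `hn2.example.com` and CTID `101` are placeholders and root SSH keys are assumed to be exchanged between the nodes:

```shell
#!/bin/sh
# Placeholder values -- substitute your own container ID and destination node.
CTID=101
DEST=hn2.example.com

# Guard so the sketch degrades gracefully on machines without OpenVZ tools.
if command -v vzmigrate >/dev/null 2>&1; then
    # --online keeps the container running during the transfer; -v shows progress.
    # Run this on the source hardware node.
    vzmigrate --online -v "$DEST" "$CTID"
else
    echo "vzmigrate not found: run this on an OpenVZ hardware node"
fi
```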

  • If it's a different location (I think I know why you're asking), it will almost certainly have different routes and ISPs. So the networking is different and requires a different IP.

  • Jacob Member
    edited December 2012

    VLAN.

  • Oliver Member, Host Rep

    @Damian said: vzmigrate has an --online switch for live migration, though it tends to work only 90% of the time:

    Everything everyone else has said, but I'm chiming in to say that vzmigrate with the online option hasn't always worked for me... I've seen it crash the entire system on the destination node. :-(

    If the container isn't huge and the two nodes are connected via a fast link then it's best to just shut it down, vzdump it, then import it on the new node.

  • @Oliver said: best to just shut it down, vzdump it, then import it on the new node.

    Extra work that isn't needed. I get the same results, and it never fails, with:

    vzmigrate -v --rsync=-vz --keep-dst target ctid

  • @miTgiB said: Extra work that isn't needed. I get the same results, and it never fails, with:

    vzmigrate -v --rsync=-vz --keep-dst target ctid

    doesn't the vzmigrate utility do that on the backend anyway?

  • @Corey said: doesn't the vzmigrate utility do that on the backend anyway?

    No. With vzdump, @Oliver is waiting for an archive to be made, then transferred, then restored. vzmigrate instead uses a two-pass rsync transfer: the first pass runs while the VPS is still running, then the VPS is stopped and a second rsync pass grabs any changed files; the VPS is then started on the remote node and the local source is removed. So most of the time the actual downtime is much less.

  • nstorm Member
    edited December 2012

    @miTgiB he said to shut it down first, then it will do it in one pass. But it's still the same as running vzmigrate on a stopped CT, only a bit more automated.

    EDIT: But this discussion has already drifted from what the OP was asking.

  • Oliver Member, Host Rep

    Sorry I meant shut it down and vzdump it. Don't use vzmigrate at all. Just copy the .tar or .tar.gz file over the network then reimport it. This can be done without vzmigrate at all.
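A sketch of that offline route, with placeholder names (CTID `101`, `hn2.example.com`), assuming the vzdump package is installed and root SSH keys are exchanged; the restore step follows the vzdump usage shown on the OpenVZ wiki:

```shell
#!/bin/sh
# Placeholder values -- substitute your own container ID and destination node.
CTID=101
DEST=hn2.example.com

if command -v vzdump >/dev/null 2>&1; then
    vzctl stop "$CTID"
    vzdump --compress "$CTID"               # writes /vz/dump/vzdump-$CTID.tgz
    scp "/vz/dump/vzdump-$CTID.tgz" "root@$DEST:/vz/dump/"
    # Then, on the destination node:
    #   vzdump --restore /vz/dump/vzdump-$CTID.tgz $CTID
    #   vzctl start $CTID
else
    echo "vzdump not found: run this on an OpenVZ hardware node"
fi
```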

  • Damian Member
    edited December 2012

    @Oliver said: Sorry I meant shut it down and vzdump it. Don't use vzmigrate at all. Just copy the .tar or .tar.gz file over the network then reimport it. This can be done without vzmigrate at all.

    vzmigrate in "normal" mode works fine for us; we've never had an issue with it failing or crashing the dest node or losing customer data or anything like that. There've been times where we have moved several hundred containers with it too.

    vzmigrate works by:
    - rsyncing all of the client data to the dest node
    - stopping the container on the source node
    - rsyncing all of the changed data (changed between the start of vzmigrate and the stoppage of the container)
    - starting the container on the dest node

    Total downtime usually works out to be a minute or less, since there's usually very little that's changed.
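Those steps can be spelled out with plain tools (a rough sketch only; real vzmigrate also handles quotas, action scripts, and cleanup, and hostnames/paths here are placeholders):

```shell
#!/bin/sh
# Placeholder values -- substitute your own container ID and destination node.
CTID=101
DEST=hn2.example.com
PRIVATE=/vz/private/$CTID

if command -v vzctl >/dev/null 2>&1; then
    # Pass 1: bulk copy while the container keeps running.
    rsync -a "$PRIVATE/" "root@$DEST:$PRIVATE/"
    vzctl stop "$CTID"
    # Pass 2: only files changed since pass 1, so downtime stays short.
    rsync -a "$PRIVATE/" "root@$DEST:$PRIVATE/"
    scp "/etc/vz/conf/$CTID.conf" "root@$DEST:/etc/vz/conf/"
    ssh "root@$DEST" vzctl start "$CTID"
else
    echo "vzctl not found: run this on an OpenVZ hardware node"
fi
```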

    ====

    We've had extremely poor experiences with vzdump. It seems to create dump files that are always a completely different size than the one before. We've stopped using it; I haven't looked into why it doesn't work, since vzmigrate works without issue for us.

  • @Corey said: doesn't the vzmigrate utility do that on the backend anyway?

    Not all the added flags I pass. I use --keep-dst because when migrating between EL5 and EL6 nodes, strange things happen: the transfer fails on some minor issue but would wipe the destination data. This way, I just shut down the source, scp the conf for the container, and start it on the destination node.

  • @miTgiB said: Not all the added flags I pass. I use --keep-dst because when migrating between EL5 and EL6 nodes, strange things happen: the transfer fails on some minor issue but would wipe the destination data. This way, I just shut down the source, scp the conf for the container, and start it on the destination node.

    Yea I was talking about the rsync stuff :)

  • @Corey said: Yea I was talking about the rsync stuff :)

    It doesn't pass -v to rsync, and I like more info.

  • Oliver Member, Host Rep

    Hmm.. Interesting to hear such mixed experiences with vzmigrate and vzdump. I am just conservative and prefer to have one container offline for slightly longer if it means there is no risk of it crashing an entire node and affecting dozens of other customers. :-P

  • prometeus Member, Host Rep

    @Oliver said: Hmm.. Interesting to hear such mixed experiences with vzmigrate and vzdump. I am just conservative and prefer to have one container offline for slightly longer if it means there is no risk of it crashing an entire node and affecting dozens of other customers. :-P

    Some months ago I crashed the target node with a live migration, so I stopped using it for a while. Since then, however, there have been several updates, and the latest kernels are much more robust in this area. But some days ago I live-migrated something like 60 VPSes and got a report of one VPS left in a bad state. So, to avoid a few minutes of downtime, I caused a several-hour outage for that VPS :-(

  • I also had a node crash in the past when doing a live migration (probably tun/tap related), so I have since stopped doing live migrations and do them offline instead. This simply means a few seconds of downtime and a reboot of the VPS in question.
