Comments
Hi Huluwa,
2 Nodes in the US.
It's quite possible that, if you ordered them close together, both your servers went on node 8.
Node 5 is also in the US.
Anthony.
Thanks. I wish I could reach a different node next time.
Submit a ticket and ask to be moved.
Let's go ahead and highlight might.
Eventually!
Okay
Itburns
AUTOCORRECT
It burns
Morenipples
Not as amusing when you insert the pause between the words.
AUTOIGNORE
There's a script for that, you know. Of course, that robs you of the pretentious attempt at a smug reply.
I'm curious if they'll do it. Xen migrations aren't fun.
I think so. Does it require dd?
Xen PV is a simple move: just stop the VM, mount its disk inside Dom0, and rsync the filesystem to a similarly mounted filesystem on the other node. Xen HVM is similar to KVM and uses dd.
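A minimal sketch of that stop-and-rsync move, assuming an LVM-backed PV guest; the guest name, volume path, mount points, and target hostname are all placeholders:

```shell
# Hedged sketch: device paths, guest name, and target host are examples only.
xm shutdown guest1                        # stop the VM cleanly
mount /dev/vg0/guest1 /mnt/guest1         # mount its disk inside Dom0
# On the target node, create a volume at least as large as the used space,
# make a filesystem on it, and mount it at /mnt/guest1 there. Then:
rsync -aHAX --numeric-ids /mnt/guest1/ node2:/mnt/guest1/
umount /mnt/guest1
# Copy the guest's Xen config file across and start it on the new node.
```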
I've never heard of or seen this method before. Xen.org says to go the dd route.
Geeks love the hard way; those of us who take lazy to a whole new level are the real thinkers.
Yes we use the secondary drive mount method and sync the data as well. You can't beat it with a stick.
@miTgiB @speckl By chance, do either of you have a link to that method? Also, is there any easy way to expand disk space without requiring a reboot and fsck on the DomU?
Probably something on the lxcenter.org site; it was the recommended way to migrate Xen PV in the HyperVM days.
@KuJoe we use a method similar to this.
http://zhigang.org/blog/accessing-data-in-raw-disk-image/
However, since we mainly work in the AppLogic Cloud environment, our steps are a little different, but that article should steer you in the right direction.
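For reference, the linked article boils down to something like this; the image name, loop device, and partition numbers are illustrative:

```shell
# Attach a raw disk image and expose its partitions as normal block devices.
losetup -f --show disk.img            # prints the chosen loop device, e.g. /dev/loop0
kpartx -av /dev/loop0                 # maps partitions to /dev/mapper/loop0p*
mount /dev/mapper/loop0p1 /mnt/image  # the data is now a normal filesystem
# ... copy data out with rsync/cp ...
umount /mnt/image
kpartx -dv /dev/loop0                 # remove the partition mappings
losetup -d /dev/loop0                 # detach the loop device
```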
@KuJoe -- from a client's perspective, Xen PV (with pygrub and across the same hypervisor version) has been as simple as: tar, create a new PV, download the tar to /root, rm -rf everything, and untar (with a static tar and the preserve-permissions flag set). Which "syncs" with what @miTgiB said.
The above/rsync method requires nothing more than that the target have at least as much space as the untarred size -- it can be as large as you want.
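A rough sketch of that tar workflow; the archive name is a placeholder, and a statically linked tar is assumed to be available so it survives the deletion of the old tree:

```shell
# On the source guest: archive the whole root filesystem, preserving permissions.
tar -cpzf /root/backup.tar.gz --one-file-system \
    --exclude=/root/backup.tar.gz /
# On the fresh PV guest: download backup.tar.gz to /root, then (using the
# static tar binary) remove the old tree except the archive and unpack:
tar -xpzf /root/backup.tar.gz -C /    # -p restores the preserved permissions
```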
For KVM/HVM I use fsarchiver with a simple dd dump of the first 512 bytes, restoring everything from a rescue environment (this does require the target disk to be equal to or larger than the source image). Resizing with gparted is then a cinch.
Haha, wow, this thread grew a bit.
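The fsarchiver approach above might look roughly like this; device names are placeholders, and the 512-byte dump covers the MBR plus the primary partition table:

```shell
# On the source (or from a rescue environment booted there):
dd if=/dev/sda of=mbr.bin bs=512 count=1        # boot sector + partition table
fsarchiver savefs root.fsa /dev/sda1            # filesystem-level image
# From a rescue environment on the target (disk must be >= the source):
dd if=mbr.bin of=/dev/sda bs=512 count=1
partprobe /dev/sda                              # re-read the partition table
fsarchiver restfs root.fsa id=0,dest=/dev/sda1  # restore the filesystem
# Resize partitions/filesystems afterwards with gparted if needed.
```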
If you want a VPS on the other node, it will take all of 10 seconds to sort out. If you also want your data moved, that's simple enough too; it just takes a bit longer. I scripted the Xen migrations about a year ago, so it's no big deal either way.
The other option I usually offer people who don't want any downtime is to run another VPS in parallel on another node, let you migrate your own data, then switch the IP over to the new VPS and remove the old one after 24 hours.
Either way you want to play it is fine with me; just drop a ticket in if you want to go ahead and I will get it scheduled for you.
@miTgiB you don't need to stop the domU if you have some space in the volume group. Take a snapshot, mount it in the dom0, and start synchronization. When the first run finishes, release the snapshot, take a second one, and resync the changes (I do this to reduce the downtime to a minimum). Then stop the domU, run the last sync mounting the real LV, and when it finishes, start the domU on the destination node.
Using live migration you can instead suspend the domU on the first node and resume it on the new node (the downtime is technically only a brief gap).
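The multi-pass snapshot sync described above could be sketched like this; the VG/LV names, snapshot size, mount points, and target host are all placeholders:

```shell
# Pass 1: long sync from a snapshot while the domU keeps running.
lvcreate -s -n guest1-snap -L 2G /dev/vg0/guest1
mount /dev/vg0/guest1-snap /mnt/snap
rsync -aHAX --numeric-ids /mnt/snap/ node2:/mnt/guest1/
umount /mnt/snap && lvremove -f /dev/vg0/guest1-snap
# Pass 2: repeat the above with a fresh snapshot to pick up recent changes.
# Final pass: stop the domU, sync from the real LV, start it on the target.
xm shutdown guest1
mount /dev/vg0/guest1 /mnt/guest1
rsync -aHAX --numeric-ids /mnt/guest1/ node2:/mnt/guest1/
umount /mnt/guest1
```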
I've found that taking snapshots without stopping the VM causes all sorts of MySQL fun.