OpenVZ inside KVM
Hi all, I want to review my infra after 5 months in BETA and now 1 month in production.
We have deployed all our OVZ infra by installing the OVZ kernel inside KVM VPSes.
We use Proxmox as our cloud datacenter software.
A picture: http://t.co/pZDW6Qlp
Before starting the project, in Q3 2012, I tried to find some reviews; everybody said I would get very poor performance... but wait! I'm using SSD disks, so I said I would try anyway.
Now, 6 months later and with around 250 containers running on 6 KVM nodes (3 dedicated servers with 2 KVM VPSes each), performance is OK and there have been no stability problems (CentOS 6 has full support for VirtIO drivers).
Virtualization inside virtualization = More scalability and more backup options.
Hardware problem? -> I just migrate the KVM VPS, and with it I'm migrating several containers at once, yeah!
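As a sketch of what that migration looks like on Proxmox (the VMID 151 and target node name "pveabs4" are modeled on the `qm list` output later in the thread, not confirmed by the poster):

```shell
# Move the whole KVM VPS -- and every OVZ container inside it --
# to another Proxmox node in a single operation.

# Live migration (requires the VM disk to be on shared storage):
qm migrate 151 pveabs4 --online

# Offline migration (the VM is stopped first, then started on the target):
qm migrate 151 pveabs4
```

One `qm migrate` moves hundreds of containers at once, which is the scalability argument the post is making.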
Maybe you are interested in hearing this.
Best regards!
Comments
I asked about this at some point and everybody said I should end up with poor performance and stability. Even though I felt that nobody had actually tried it.
Thanks for the informative post.
Glad it's working for you. Having spent some time testing the same theory, my ultimate conclusion (contrary to previous statements) was that it adds one more potential point of failure, and that both the benefits and the dangers are based on "what if."
However, it's absolutely viable. Look forward to hearing more about your experience with it but I can't think of any questions to ask.
@jmginer Thanks for the information :-) Could you tell me what disk speeds you get on your host, KVM VPS and your OpenVZ VPS? (Just wondering what performance loss you have due to the virtualization layers).
I use Intel 520 SSD Series
This is the poor performance:
[root@ovz1 ~]#
[root@ovz1 ~]# vzctl enter 1552
entered into CT 1552
root@test-ovz1:/#
root@test-ovz1:/# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.91186 s, 182 MB/s
root@test-ovz1:/#
root@test-ovz1:/# rm -rf test
root@test-ovz1:/#
root@test-ovz1:/# exit
logout
exited from CT 1552
[root@ovz1 ~]#
Mmm, it's definitely been tried, and I'm pretty sure it is used by some providers. It works, but the performance isn't as good. IO isn't everything; once the node fills up and you run multiple OVZ installations on one physical server, performance will go down dramatically.
Yes, we lose performance, around 50%, but it's still good, as you can see; 180 MB/s is perfectly acceptable.
If I run the same test directly on the Proxmox node, the Intel 520 Series returns over 300 MB/s:
root@pveabs3:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.2975 s, 326 MB/s
root@pveabs3:~#
root@pveabs3:~# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
151 ovz1 running 14336 420.00 2316
154 ovz4 running 14336 420.00 2390
root@pveabs3:~#
Regards!
What would interest me is seeing ioping results inside an OVZ container and then from the host node. That would be where it would really make or break.
@jarland Check the IOping stats on the node.
During backups -> less than 100 ms of wait on the /vz partition
No backup running -> less than 30 ms of wait
http://bestpic.es/image/1360766866.png
To verify it from inside a container, please order one:
http://www.lowendtalk.com/discussion/7975/spain-leb-1vcpu1ghz-256-mb-mem-2-gb-ssd-100-gb-traffic-35year#Item_17
Regards!
I mean like this.
Around here it's become as standard as the dd test for measuring disk performance. Ideally it should be roughly the same on both sides; just curious.
Munin's disk latency graphs != ioping results.
Also, is this software raid?
@jarland
[root@ovz1 ~]#
[root@ovz1 ~]# ioping -c 1 /vz
4096 bytes from /vz (ext4 /dev/mapper/vz-vz): request=1 time=0.5 ms
--- /vz (ext4 /dev/mapper/vz-vz) ioping statistics ---
1 requests completed in 0.6 ms, 2028 iops, 7.9 mb/s
min/avg/max/mdev = 0.5/0.5/0.5/0.0 ms
[root@ovz1 ~]#
[root@ovz1 ~]# vzctl enter 1552
entered into CT 1552
root@test-ovz1:/#
root@test-ovz1:/# ioping -c 1 /
4096 bytes from / (simfs /vz/private/1552): request=1 time=0.5 ms
--- / (simfs /vz/private/1552) ioping statistics ---
1 requests completed in 0.6 ms, 1876 iops, 7.3 mb/s
min/avg/max/mdev = 0.5/0.5/0.5/0.0 ms
root@test-ovz1:/# exit
logout
exited from CT 1552
[root@ovz1 ~]#
Doesn't seem bad.
How do you migrate a KVM VPS if using local storage?
interesting
Tried it and never had an issue with it; performance was absolutely fine for any real-world application.
Getting IPv6 working with proxmox and NAT IPv4 was tricky but possible.
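A minimal sketch of that kind of setup; the bridge names (vmbr0, vmbr1) and address ranges are illustrative assumptions, not details from this comment:

```shell
# IPv4: guests sit on an internal bridge and are NATed out
# through the node's public interface.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE

# IPv6: no NAT needed -- route a /64 from your allocation to the
# internal bridge so each guest gets a public v6 address.
sysctl -w net.ipv6.conf.all.forwarding=1
ip -6 addr add 2001:db8:1::1/64 dev vmbr1
```

The tricky part the commenter alludes to is exactly this asymmetry: IPv4 goes through NAT while IPv6 is routed natively.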
I never said it would be horrible; in fact, OVZ is not real virtualization.
I frequently run Xen inside KVM, both vanilla and XCP, and while there are problems, they can be solved.
With OVZ the performance should be even better.
I think a 50% performance loss is a bit much; it could be tweaked further, IMO.
Why stop there...go KVM -> Xen -> OvZ for the trifecta.
Well-configured KVM can lose as little as 3% versus native hardware, so why would OpenVZ inside KVM be anywhere close to 50%?
Well, erawan and I run Linux in Windows with VMware and VirtualBox on the Windows Server 2008 R2 LEB offer with only 512 MB RAM. I had no lag except during install.
That is: Linux HN / KVM Windows / VMware Linux.
I've personally never had great experiences with Intel SSDs.
It's funny that I can install Proxmox in a VMware VM, but I can't install VMware in a Proxmox KVM.
Although the KVM option in Proxmox did not work in a VMware VM, OVZ worked great.
Hehe... At first, after reading the thread title, I was reminded of running an Ubuntu Desktop inside a Windows 2008 KVM.
Do you have a Debian OS?
My setup is Debian; how can I set up OpenVZ on Debian? Thanks, I need your help.
Strictly speaking, OpenVZ isn't virtualization, it's containerization. So there's nothing to prevent you from using OpenVZ inside a true VM.
Since namespace development in Linux has advanced significantly since 2001, using LXC would perhaps be even faster.
Necromancers
But ok
@david0923
You should look at Proxmox (version 3); it's a Debian-based platform for running OpenVZ.
They are currently at version 5, which no longer supports OpenVZ but works with LXC containers instead.
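For reference, creating an LXC container on Proxmox 5 looks roughly like this; the CT ID, storage names, and template file name are assumptions for illustration, not from the thread:

```shell
# Refresh the template index and fetch a Debian container template
# (the exact template file name varies by Proxmox release).
pveam update
pveam download local debian-9.0-standard_9.7-1_amd64.tar.gz

# Create and start the container: 256 MB RAM, 8 GB rootfs, DHCP networking.
pct create 200 local:vztmpl/debian-9.0-standard_9.7-1_amd64.tar.gz \
    --hostname test-lxc --memory 256 --rootfs local-lvm:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200

# Enter the container, analogous to "vzctl enter" earlier in the thread.
pct enter 200
```

`pct` plays the role `vzctl` does in the transcripts earlier in this thread.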