New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Xen, because it supports full virtualization (like KVM) for Windows, etc., and paravirtualization for supported OSes (Linux, FreeBSD). Paravirtualization is more efficient than full virtualization, and it doesn't have as many restrictions as OpenVZ (you can even boot your own kernel).
KVM, that's definitely the best one.
I would say KVM is good.
KVM all the way.
KVM is the best.
Perhaps the last 4 comments should be a bit more detailed, like the first one.
KVM is best. Everyone say. So me. Ramnode has! Best! < /sarcasm >
KVM is better and faster in my experience. What's more, KVM machines are close to dedicated servers, while Xen ones aren't, and on Xen's kernel you can't install some things, e.g. VirtualBox.
That's not entirely true. With Xen, you have two modes. HVM (Hardware Virtual Machine) is fully virtualized, like KVM. You can install whatever you want.
The second mode, PV (paravirtualization), lets you run supported guest systems. With paravirtualization, not everything is virtualized; for example, memory management is handled by the hypervisor. That means you save resources and get more performance than with HVM/KVM. In the time I've been using Xen, I haven't noticed any restrictions (like those with OpenVZ).
I like PV because I don't see any reason to virtualize things like memory management multiple times (once for each VM). That costs a lot of performance.
KVM - paravirtualization is available with virtio drivers.
Yes, but as far as I know, only for net/disk, right?
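For what it's worth, you can see which virtio (paravirtualized) devices a KVM guest exposes just by scanning `lspci` output. Here's a minimal sketch; the sample output below is made up for illustration, not from a real machine, but it shows that besides net and disk there's also a virtio memory balloon device:

```python
# Sketch: spot paravirtualized (virtio) devices inside a KVM guest by
# scanning lspci output. SAMPLE_LSPCI is hypothetical example output.
SAMPLE_LSPCI = """\
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE
00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device
00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device
00:05.0 Unclassified device: Red Hat, Inc. Virtio memory balloon
"""

def virtio_devices(lspci_text: str) -> list[str]:
    """Return the device descriptions of virtio devices in lspci output."""
    return [line.split(": ", 1)[1]
            for line in lspci_text.splitlines()
            if "Virtio" in line]

print(virtio_devices(SAMPLE_LSPCI))
```

On a real guest you'd feed it the output of `lspci` instead of the sample string.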
KVM.
In our benchmarks, KVM always outperformed Xen.
Also, KVM is probably updated more frequently. Because of that alone, KVM is in my opinion currently (and probably for the future) the better solution.
Xen seems, just like VMware ESX/ESXi, too "outdated" or heavily optimized for special environments that are most likely not used in the VPS market.
KVM - I always get much better performance with KVM, and you've got companies like Dediserve and Linode even switching over from Xen to KVM.
KVM. I've had many Xen and KVM VPSes before, but right now all of them are KVM.
KVM - Better performance, stability and security (owing to faster and frequent updates).
Anyone have a benchmark of Windows on KVM vs. Windows on Xen HVM? I bet all the KVM praise comes from Linux/*nix experience.
AFAIK, KVM always virtualizes the page table. There is necessarily a small performance penalty for that. Xen does not usually need that for Linux guests.
With regard to Windows Server, I have experienced better performance on KVM than on Xen HVM. But that was about two years ago, and the servers were from different providers, so they might not be directly comparable.
Ah yes, another thing I might suggest: run a benchmark in mixed mode (Windows and Linux guests together) in a production state. I found the I/O on Xen quite stable compared to KVM. I experienced a sluggish environment when creating ten Windows guests on KVM at the same time (compared to the same on Xen). On the other hand, KVM creates Linux guests quite fast.
FYI, the Xen vs. KVM debate has been going on for five or six years, with no winner at all.
CentOS 6 dropped Xen, while CentOS 7 comes equipped with Xen again.
+1 for Xen
Thanks to you all.
From my previous research, I found that you can do live snapshots ("online backups") for Xen guests, but for KVM the guests should be turned off. Is that still correct?
Also, with KVM you can now do memory overcommit, while in Xen this is not possible! Will that scare KVM users because of the likelihood of overselling?
I thought Xen supports ballooning and therefore also memory overcommitment.
Xen supports memory ballooning for domUs, not "systematically" for guests. Ballooning isn't the same as overselling. With Xen, once a VM is created with a certain memory capacity, you can possibly "balloon it down" (though that's still tough to do), but the remaining resources are settled and can't be used by other VMs. That's unlike KVM (the one most likely loved by most providers), the performance discussion aside.
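The difference between the two allocation models comes down to simple arithmetic. Here's a toy sketch (all numbers and the 1.5x overcommit factor are made up for illustration): in a strict-reservation model every guest's memory is carved out of host RAM up front, while an overcommitting host can promise more total memory than it has, betting that guests won't all use their full allocation at once.

```python
HOST_RAM_MB = 8192  # hypothetical host with 8 GB of RAM

def strict_can_place(existing: list[int], new_vm: int,
                     host: int = HOST_RAM_MB) -> bool:
    """Xen-style strict model: reserved memory may never exceed host RAM."""
    return sum(existing) + new_vm <= host

def overcommit_can_place(existing: list[int], new_vm: int,
                         host: int = HOST_RAM_MB,
                         factor: float = 1.5) -> bool:
    """KVM-style overcommit: guests are ordinary processes, so the host
    can promise up to factor * host RAM in total."""
    return sum(existing) + new_vm <= host * factor

vms = [2048, 2048, 2048]                 # three 2 GB guests already placed
print(strict_can_place(vms, 4096))       # only 2048 MB actually free
print(overcommit_can_place(vms, 4096))   # fits under the 1.5x cap
```

Ballooning changes a guest's current allocation at runtime, but in the strict model the freed memory stays reserved to that domain's configured maximum; overcommit is what lets a provider oversell.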
Performance-wise, for general use, KVM and Xen differ only slightly. However, many providers love Xen more because it has Nena's neunundneunzig Luftballons :P
Xen is best for customers and KVM is best for VPS providers.
Xen: performance for Windows VPSes is very good.
KVM: if your hardware is not so good, then it will be a good choice for you.
I don't have to shut down my KVM VPSes at my providers to take snapshots.
Both Xen and KVM can use the same LVM technology, so the snapshots will likely be the same.
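An LVM snapshot is copy-on-write at the block level, which is why it works the same under either hypervisor: the snapshot only stores the original contents of blocks that the live volume overwrites after the snapshot is taken. Here's a toy model of that mechanism (the class and data are invented for illustration, not LVM's actual implementation):

```python
# Toy copy-on-write model of an LVM snapshot: the snapshot preserves a
# block's original contents the first time the live (origin) volume
# overwrites it, so the snapshot view stays frozen at snapshot time.
class CowSnapshot:
    def __init__(self, origin: dict[int, bytes]):
        self.origin = origin               # live volume (shared reference)
        self.saved: dict[int, bytes] = {}  # old blocks, saved on first write

    def write_origin(self, block: int, data: bytes) -> None:
        if block not in self.saved:              # first write since snapshot
            self.saved[block] = self.origin[block]  # preserve the old block
        self.origin[block] = data                # live volume moves on

    def read(self, block: int) -> bytes:
        """Snapshot view: preserved copy if the block changed, else current."""
        return self.saved.get(block, self.origin[block])

vol = {0: b"boot", 1: b"data"}
snap = CowSnapshot(vol)
snap.write_origin(1, b"NEW!")   # guest keeps writing after the snapshot
print(vol[1])                   # live volume sees the new data
print(snap.read(1))             # snapshot still sees the old data
```

This is also why you can back up a running guest from the snapshot: the guest keeps writing to the origin volume while the snapshot stays internally consistent (though for crash-consistency of the guest filesystem you'd typically still quiesce or freeze it first).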
agree
+1
Has anyone tried comparing CentOS 6 benchmarks using Xen PV and Xen HVM?
In my experience, CentOS could only show 200 MB/s in an I/O test on Xen PV, while, weirdly, Windows could reach up to 1000 MB/s on Xen HVM.
So that would merely mean that a Windows (or any) VPS on Xen HVM is very, very good.
So it might be a better idea to run CentOS (or any Linux) on Xen HVM. What do you think?
Xen for management: it's a type 1 hypervisor, whereas KVM guests run in user space, which I don't like. As a host that runs both, I prefer the management of Xen; it's a lot harder for one guest VPS to ruin everyone else's day. However, I understand why customers like KVM: it's easy to get started with.
Performance-wise, Xen 4.2+ for Linux under PV beats KVM; KVM always beats Xen HVM "out of the box", but there is not much in it.
For scalability, live migration, hardware support, etc., Xen wins hands down; Amazon uses it for a reason.
As a primarily Xen host, I don't mind admitting that if I wanted a server from someone else with no f**king around and nothing important on it, I would pick KVM. But I see both sides of the coin, and have for years, so for an important project I would pick Xen.
Care to elaborate? I've seen everything you said implemented fine by either QEMU directly, libvirt, or other systems. My current node at home runs QEMU 2.4 on kernel 4.1.1 (CentOS 7), and I can do live migration fine as long as I stay within the same CPU generation (due to flags on passthrough; with a more limited instruction set/flags I could migrate even between Intel and AMD).