xen or kvm - providers point of view?

Taz Member
edited September 2012 in General

What would a provider pick when offering a real virtualization service: Xen or KVM?
My question is based on the points below:
In terms of setup (you can ignore the CentOS 6 issue);
In terms of management,
In terms of node overhead,
In terms of stability,
In terms of customer satisfaction?

Your inputs and explanations are highly appreciated.


Comments

  • Why would xen-pv be better than kvm for clients?

  • Infinity Member, Host Rep

    @Jack said: However I am not too sure, but I think KVM has templates too now?

    Yeah, SVM released it a few days ago in the stable version; it was in beta for a month or so too.

  • @Jack said: XEN is a PITA when you receive attacks and need to find the IP under attack.

    iftop

    Thanked by 1TheHackBox
  • @Jack said: didn't work correctly when I tried it on a XEN node

    Xen and KVM both use bridged networking; maybe read the man page so you start it with the proper switches.
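
    For example, something like this (a rough sketch; the bridge name xenbr0 is an assumption, check yours with brctl show):

        # watch per-IP traffic across the bridge all the guests sit on
        iftop -i xenbr0 -n -P
        # -i picks the bridge instead of the default interface
        # -n skips reverse DNS, -P shows ports; the guest IP being hammered floats to the top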

    Thanked by 1Mon5t3r
  • Xen, in my opinion, though granted I'm biased towards Xen because I have little experience with KVM.

    Xen because, even if it is old, it's stable and "just works" most of the time. The node overhead is the RAM you have to set aside for the hypervisor, and the RAM you need depends on the node size.

    Customer satisfaction has been good for the 2 years we have been using it. We still get sales and recommendations for using Xen.

    Just my 2 cents.

  • @jshinkle said: Xen because, even if it is old, it's stable and "just works" most of the time. The node overhead is the RAM you have to set aside for the hypervisor, and the RAM you need depends on the node size.

    Can you give an example please?

  • prometeus Member, Host Rep

    Both are good and stable imho :P

  • http://wiki.xen.org/wiki/Tuning dom0_mem

    I'm with the "it just works" thing as well. In my other company we run an OpenVZ + Xen combination. Xen on CentOS 6 pretty much "just works" for us, since I kinda know someone who runs purely Xen and shares custom compiled kernels with me. We initially used Xen as a solution for customers who wanted a Windows VPS, as well as for customers wanting to run VPN or other custom kernel stuff. I've been running it since CentOS 5.3, so that's about 3-4 years of Xen.

    For OneAsiaHost I took the OpenVZ + KVM route since it pretty much covers most of the ground. Xen still has its place beside OpenVZ and KVM, but when the pricing is so close to KVM I might as well just offer KVM alone.

  • Xen... it's a RAM eater, but I like Xen. Just set dom0_mem to 512MB (rough GRUB entry below).

    I vote for Xen.
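
    In GRUB that comes down to something like the entry below (a sketch only; the kernel and initrd file names are placeholders for whatever dom0 kernel you actually run):

        # /boot/grub/grub.conf entry (GRUB legacy, CentOS 5/6 style)
        title CentOS with Xen
            root (hd0,0)
            kernel /xen.gz dom0_mem=512M,max:512M
            module /vmlinuz-3.5.4.el6xen ro root=/dev/vg0/root
            module /initramfs-3.5.4.el6xen.img

    The max: part keeps dom0 from ballooning back up; xm list should then show Domain-0 holding 512MB.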

  • Got a question: let's say on a server with 64GB RAM I want to set up 62 VPSes, will the 2GB of RAM left over be more than enough for Xen?

  • It should be enough. When I maxed out my node there was only 512MB left.

  • @Jack let's say something like 4x1TB RAID 10, a Xeon (forgot the model), those with 2.5GHz and 8-12 cores?

  • miTgiB Member
    edited September 2012

    @Taz_NinjaHawk said: 4x1TB RAID 10, a Xeon (forgot the model), those with 2.5GHz and 8-12 cores?

    Plenty of CPU, but not even close to enough IO; with 64GB of RAM you'll want a 12-16 disk array. Sure, it might work well if you get lucky and load the node with inactive users, but how likely is that? Also, 64GB RAM / 2TB usable disk: that's 32GB of disk per user if everyone gets 1GB of RAM, not much space at all :(

  • @miTgib

    Well, I'm just trying to get an estimate. Node configs are most likely going to be:

    E3-1230, 16GB ECC, 4x1TB RAID 10,
    loading that with 25 VMs (will use around 14GB of RAM), using KVM.

    What would be the drawback here?

  • I usually use 10 IOPS per VM as a safe gauge, 5 if I'm feeling stingy. 4x SATA drives in RAID10 deliver about 200 IOPS (100 per drive, x2 since the other 2 are mirrored).

    So 200/10 = 20 VMs, or 200/5 = 40 VMs. If your VMs are less active you can get by with maybe 50, but 4 drives is really cutting it close unless you're talking about 512MB each x 30VMs. That'll be pretty comfortable.
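
    As a back-of-envelope script using the same rules of thumb (the per-drive and per-VM figures are the estimates above, not benchmarks):

        #!/bin/bash
        drives=4; iops_per_drive=100                     # rough figure for a 7.2k SATA disk
        array_iops=$(( drives / 2 * iops_per_drive ))    # RAID10: mirrored pairs, so ~200
        echo "safe   (10 IOPS/VM): $(( array_iops / 10 )) VMs"
        echo "stingy ( 5 IOPS/VM): $(( array_iops /  5 )) VMs"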

    Thanked by 1eLohkCalb
  • @Kenshin said: So 200/10 = 20 VMs, or 200/5 = 40 VMs. If your VMs are less active you can get by with maybe 50, but 4 drives is really cutting it close unless you're talking about 512MB each x 30VMs. That'll be pretty comfortable.

    What about 15k SAS? That should give enough breathing space :)

  • @Taz_NinjaHawk said: E3-1230, 16GB ECC, 4x1TB RAID 10

    This is the config of my E3 nodes. For KVM I use the E3-1270 or E3-1230v2, and that will be great for your plans.

  • @Taz_NinjaHawk said: What about 15k SAS?

    Have you priced 15k SAS yet? Please be seated if not... Look at SSD caching over 15k SAS2

  • @miTgiB said: This is the config of my E3 nodes. For KVM I use the E3-1270 or E3-1230v2, and that will be great for your plans.

    So I guess this would be a nice setup for a KVM node if I decide to use the E3-1230v2 and load up around 30 VMs?

  • @miTgiB said: Have you priced 15k SAS yet? Please be seated if not... Look at SSD caching over 15k SAS2

    I will most likely be using 300GB 15k SAS. For small VMs, that would be enough disk space.

    Tbh, I have never done SSD caching, so I'm not sure how to set it up.

  • @Taz_NinjaHawk said: So I guess this would be a nice setup for a KVM node if I decide to use the E3-1230v2 and load up around 30 VMs?

    Well, I base my loading of nodes on RAM sold: with 16GB I would sell 14.5GB, with 24GB I sell 22GB, and I've never put 32GB into an E3 node as I see the CPU getting too close to max with 24GB in it.

  • @miTgiB said: Well, I base my loading of nodes on RAM sold: with 16GB I would sell 14.5GB, with 24GB I sell 22GB, and I've never put 32GB into an E3 node as I see the CPU getting too close to max with 24GB in it.

    Makes a lot of sense.

  • @Taz_NinjaHawk said: Makes a lot of sense.

    I don't think it would be bad to put 32GB into a node, but knowing me I'd try to sell it all, and the customer experience would not be what I would want. Still, the caching Linux does naturally with RAM surely would be helpful.
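
    You can see that cache at work with free -m; whatever RAM isn't sold to guests shows up under buffers/cached rather than sitting idle:

        free -m    # the buffers/cached figures are node RAM Linux is using as disk cache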

  • @Jack said: What disks and RAM do you use on your KVMs [mostly]? I know you said a while ago you had different nodes, like an AMD X6 or something?

    My first KVM node was an AMD X6 1090T, and when I moved from Rock Hill to Charlotte the difference in datacenter ambient temps was causing it to overheat in Charlotte, so I converted it to an E3. The majority of my KVM nodes are E3s, either E3-1270 or E3-1230v2, with WD RE3/4s, Hitachi UltraStars and Seagate Constellations and 3ware 9650 raid cards. I have 1 older KVM node with dual L5520s and 48GB RAM, 8 WD RE4s and a 9650 card; the newest KVM node is dual E5-2620s, 12 Toshiba 1TB SAS2 drives and an LSI 9266-4i raid card with an Intel SAS expander (LSI based), 128GB RAM and a pair of Samsung 830s for SSD caching. I have a couple other E3 KVM nodes I used either the Toshiba or Seagate Constellation SAS2 drives in as well. All my nodes use Kingston or Super Talent RAM.

  • @Taz_NinjaHawk said: What about 15k SAS? That should give enough breathing space :)

    My prices are SG based, so they may be a bit different. Last year I bought nodes with 12x 300GB 15k SAS Seagates; each cost S$400. Today's enterprise 1TB SATA drive costs S$130. 15k RPM drives do about 150-200 IOPS, 7.2k RPM about 100.

    Assuming 4 SAS drives in RAID 10, we're looking at 600GB capacity with 400 IOPS max and a S$1600 hole in the pocket. With 8 SATA drives in RAID 10 we're looking at 4TB capacity with 400 IOPS and a S$1040 hole. If you need to squeeze into a 1U node, then SAS is your only choice. But assuming you can fit 8-12 drives, the 2U option, ideally with E5s, looks a lot more solid in the long run, especially since people today are talking about capacity. 128GB servers aren't really expensive when you go the E5 route, but then again I'm talking about owned servers and not rented, so it may differ for other providers.

    Assuming the 300GB 15k SAS Seagate prices have dipped over the year, I can now get the Intel 520 240GB SSD at a similar price point (S$330). If I wanted to go the low storage capacity route, I'd rather do SSD and will probably never need to worry about IOPS. The 8-12 drive route is a no-brainer for capacity + IOPS. Add the SSD caching as @miTgiB said to boost the IOPS while keeping capacity and you get a nice balance. You can refer to my other thread where I posted the benchmarks on SSD as well as SSD caching.

  • @Kenshin said: If you need to squeeze into a 1U node, then SAS is your only choice.

    Seagate makes a Constellation 7.2k 1TB 2.5" and SuperMicro makes an 8-bay 1U chassis, just to add that much-needed monkey wrench ;)

  • @miTgiB said: Seagate makes a Constellation 7.2k 1TB 2.5" and SuperMicro makes an 8-bay 1U chassis, just to add that much-needed monkey wrench ;)

    I'd put 8x SSDs instead and blow everyone away. :D

  • @Kenshin said: blow everyone

    Ehhhhm. How much ;)

  • @Taz_NinjaHawk said: Ehhhhm. How much ;)

    Selective quoting, evil.

  • @Taz_NinjaHawk I would say that in order to be competitive you should offer both. We are running the entire shop, including internal servers, on Xen PV. We have used KVM in the past for our shared hosting, but we found Xen to be easier to manage and it gave us more granular control over CPU usage.
    Here is a short list of pros and cons for both:
    Xen PV pros:

    • With CentOS 6, Kernel 3.5+ and Xen 4.1 you get awesome performance
    • Very granular control over CPU usage. You can assign all cores on a server to each VPS, and still no one will be able to take over the CPU for any amount of time
    • Decent I/O on HDD, good I/O on HDD RAID-10, awesome I/O on SSD
    • Easy to create templates for it (you can make them in KVM and port them over - that's because I find Xen HVM such a P.O.S. compared to KVM).
    • Easy to resize an LVM partition inside SolusVM - client wants a bigger VPS, it's done with a reboot, just like OpenVZ (well, you don't even need a reboot there); rough sketch after this list
    • maybe I forgot something, but this is what comes to mind at the moment
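
    Roughly what that resize boils down to on the node (vm101 and vg0 are placeholder names; SolusVM drives this from its panel, so treat this as a sketch rather than its exact procedure):

        xm shutdown vm101                        # or "xl shutdown" on newer toolstacks
        lvresize -L +10G /dev/vg0/vm101_img      # grow the guest's logical volume
        e2fsck -f /dev/vg0/vm101_img
        resize2fs /dev/vg0/vm101_img             # works when the LV holds a bare ext3/4 fs, as PV templates usually do
        xm create /etc/xen/vm101.cfg             # boot the guest back up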

    Xen PV cons:

    • You can only run Linux distros that have support compiled in for paravirtualization - some FreeBSD versions might run on it, but it's too much work to maintain
    • Red Hat dropped support for Xen, so you will have to maintain everything yourself.
    • Ultra crappy performance on RHEL 5 / CentOS 5 with the 2.6.18 kernel - if you want to run this for the official support, then at the very least get a more up-to-date ElRepo.org kernel, along with Xen 4, so that you can get close to CentOS 6 + kernel 3.5 performance.
    • My biggest gripe with it: the amount of RAM you lose to the Xen Hypervisor (which loads before the Linux Kernel on the host server) + the amount of RAM you have to give to Dom0. The amount of RAM consumed by the Xen Hypervisor grows with the amount of RAM installed on the server. On a 32GB server the Hypervisor will eat ~500MB, on a 48GB server closer to ~800MB, on a 64GB server closer to ~1GB and so on.
    • Networking can be a pain with Xen

    KVM pros:

    • You can run any x86 and x86_64 OS on it, including Windows, FreeBSD, MacOS X (yes, it's true) and so on.
    • Light hypervisor (it's a kernel module) - so no more lost memory to Dom0 and the Hypervisor
    • All memory is available to allocate for VPS servers
    • Networking can be set up and configured as easy as pie (see the bridge example after this list).
    • Instead of allocating real CPU cores, it allocates threads. You can allocate up to 10 threads per physical core. They are also called vCPUs. It doesn't really help with the granular control though.
    • Emulation is as close to a real physical server as possible.
    • Now there is template support in SolusVM for KVM, yay!
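
    As an example of that, a plain Linux bridge on CentOS 6 is about all KVM needs (the interface names and addresses below are made up for illustration):

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0

        # /etc/sysconfig/network-scripts/ifcfg-br0
        DEVICE=br0
        TYPE=Bridge
        ONBOOT=yes
        BOOTPROTO=static
        IPADDR=203.0.113.10
        NETMASK=255.255.255.0
        GATEWAY=203.0.113.1

    Restart networking and point each guest's virtio NIC at br0.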

    KVM cons:

    • Pretty bad I/O performance on regular HDDs, OK performance on RAID-10, and it will do much better on SSD. This is mostly due to how KVM tries to emulate a real server; it will only read and write within the area of the partition created for it. Writeback cache may help a little, but not much. I haven't tried this yet, but bcache might work to provide SSD caching for KVM (rough sketch after this list).
    • There is no granular control over the CPU, so you should never assign the entire CPU to any single VPS.
    • Maybe this has changed with SolusVM 1.12, but in the past it was nearly impossible to downsize a KVM partition.
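
    For reference, a bcache setup roughly looks like the following (untested here; the device names are placeholders, and the cache-set UUID comes from bcache-super-show):

        make-bcache -B /dev/sdb                          # HDD array becomes the backing device
        make-bcache -C /dev/sdc                          # SSD becomes the cache device
        echo /dev/sdb > /sys/fs/bcache/register          # register both (udev may do this for you)
        echo /dev/sdc > /sys/fs/bcache/register
        bcache-super-show /dev/sdc | grep cset.uuid      # grab the cache set UUID
        echo <cset-uuid> > /sys/block/bcache0/bcache/attach
        # then put the KVM volume group / images on /dev/bcache0 instead of the raw array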

    If I left out anything, please don't be too hard on me :)

    Customers will pick whatever is easier to use and provides the best performance for them. If you can offer easy-to-use and fast KVM VPS servers, then more power to you; that way you can advertise FreeBSD and Windows as well.
