Recommend OS for KVM Host Node

sleddog Member
edited October 2011 in General

I asked this on WHT and thought I'd try here also... :)

What would you recommend as the OS for a KVM host node? CentOS 6? Something else?

This is for a LAN box, to be used for various storage & development tasks. Not public or commercial usage.


Comments

  • Archlinux, I think it's a good choice for development.

  • Ubuntu works wonders

    Preetam @ Bitcable

  • sleddog said: What would you recommend as the OS for a KVM host node?

    Whatever you are most comfortable with.

  • JustinBoshoff Member
    edited October 2011

    Hi Sleddog

    For a corporate solution, have a look at http://www.proxmox.com/products/proxmox-ve.
    It's easy to install and very versatile: it can do openvz and kvm on the same box, the storage options are great, and it supports live migration as well.


  • fanovpn Member
    edited October 2011

    miTgiB said: Whatever you are most comfortable with.

    This.

    I use ArchLinux, which has the advantage of running the most recent kernel and qemu-kvm release, so I get all the latest (but usually undocumented) KVM features. But really it's just that I'm most comfortable with Arch.

    The main thing I'd recommend checking when deciding whether your favourite distribution is suitable is whether libvirt will be easy to install (e.g. whether it's provided in the distribution's package repositories). A lot of features and workarounds for qemu-kvm aren't documented anywhere except libvirt's source code, so libvirt is almost a necessity when running qemu-kvm if you don't want to do tons of research and code-diving. On the other hand, it means writing XML, which might tip the balance back toward the research and code-diving :) (A minimal example of that XML is sketched at the end of this post.)

    But like miTgiB said, it all comes down to what you're most comfortable with because a distribution you know how to configure will work better than one with the latest features that you can't use or can't keep up-to-date and stable.
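
    For the curious, a minimal sketch of what a libvirt domain definition looks like. Every name and path here is made up for illustration, not taken from any setup in this thread:

    <domain type='kvm'>
      <name>example</name>                  <!-- hypothetical VM name -->
      <memory>131072</memory>               <!-- KiB, i.e. 128 MB -->
      <vcpu>1</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/var/lib/libvirt/images/example.img'/>
          <target dev='vda' bus='virtio'/>  <!-- virtio disk -->
        </disk>
        <interface type='bridge'>
          <source bridge='br0'/>
          <model type='virtio'/>            <!-- virtio NIC -->
        </interface>
        <graphics type='vnc'/>
      </devices>
    </domain>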

  • Thanks guys, very helpful. With these suggestions I'm down to either CentOS 6 or Debian 6. I'll probably give CentOS a whirl first and see how it goes.

  • Hmmm... I see CentOS 6.1 is still not released. Perhaps I'll try Debian :)

  • sleddog said: CentOS 6.1 is still not released.

    Scientific Linux 6.1 has been released though. But honestly, I have CentOS 6.0 KVM nodes that have been up since the week CentOS 6.0 was released. If I wasn't tied to SolusVM, I would probably use Debian 6 even though I am much more comfortable in CentOS. I have a Debian KVM node at home which has given me zero problems in the year it's been running.

  • Debian 6 forces the use of lilo instead of grub for software RAID. I'm not thrilled with that. And software RAID is the only option budget-wise at the moment (I've used it for years, so RAID management isn't an issue).

    And the report from some providers (not you, miTgiB) of CentOS 6 issues is also bothersome.

    So mulling it over, I think maybe CentOS 5.x. When it EOLs in March 2014 it'll be time for new hardware anyway. The stuff I'm using is already ~5 years old. Good time for a new build and hopefully the budget will be brighter :)

  • fanovpn said: I use ArchLinux, which has the advantage of running the most recent kernel and qemu-kvm release, so I get all the latest (but usually undocumented) KVM features. But really it's just that I'm most comfortable with Arch.

    +1. I use Arch everywhere.

  • If you need to use software raid:
    http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny
    You install Lenny with the raid setup you need and then add the pve repo to install Proxmox.



  • kiloserve Member
    edited October 2011

    sleddog said: What would you recommend as the OS for a KVM host node?

    CentOS 6 or CentOS 5 + KVM + a good RAID setup = pure ownage for customized kernels or non-standard guest OSes (Windows, BSD, Arch) on modern vt-d-enabled processors.

    With a good RAID setup, you can install Windows or CentOS on a KVM VPS in less than 10 minutes and 5 minutes respectively.

    If you are using basic Linux guests, XenPV with pygrub will probably still give you the best performance + standard linux flavor compatibility.

  • JustinBoshoff said: If you need to use software raid: http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny You install Lenny with the raid setup you need and then add the pve repo to install Proxmox.

    Thanks for that. Though I'm not sure about installing Lenny at this time. Updates might cease in a year? http://wiki.debian.org/DebianLenny

    And thanks Kiloserve, though Windows is not a factor :)

  • Hi Sleddog

    The Lenny install will be rock solid; as I'm sure you know, "Debian Stable" always is.
    With PVE 2.0 on the horizon there will be a straight repo-based Squeeze upgrade soon.

    Give it a kick, I can promise you, you won't be disappointed.
    KSM works like a charm.

    Just my 2 cents.


  • Well I'm up and running with CentOS 5.7 & KVM. Took a while to figure out the network bridge setup but it's working now. Successfully created a VM using virt-install and installed Debian 6. It can be accessed directly on the LAN and has internet access. Lots to learn... :)
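
    (For anyone doing the same: on CentOS 5.x the bridge is typically set up in /etc/sysconfig/network-scripts, with bridge-utils installed. A minimal sketch, where device names and addresses are placeholders only:)

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    ONBOOT=yes
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.10        # placeholder LAN address
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1        # placeholder gateway

    Then "service network restart" brings br0 up and guests can attach to it.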

  • Good to hear.
    Have fun.


  • Francisco Top Provider

    sleddog said: Thanks for that. Though I'm not sure about installing Lenny at this time. Updates might cease in a year? http://wiki.debian.org/DebianLenny

    And thanks Kiloserve, though Windows is not a factor :)

    The lenny install works just fine with squeeze :)

    Francisco

  • The only real hiccup had nothing to do with KVM -- there's a new bug in the CentOS 5.7 netinstall whereby the grub bootloader is not set up when using software RAID 1... booting into rescue mode and installing it manually works :)
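
    (For anyone hitting the same bug: from the rescue environment the legacy grub shell can put the bootloader on both RAID-1 members. A sketch, assuming /boot is the first partition on each disk:)

    # boot the installer in rescue mode, then:
    chroot /mnt/sysimage
    grub
    grub> root (hd0,0)
    grub> setup (hd0)
    grub> root (hd1,0)
    grub> setup (hd1)
    grub> quit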

  • Francisco said: The lenny install works just fine with squeeze :)

    Good to know... maybe I'll start over :)

  • sleddog Member
    edited October 2011

    Can't get virtio working. I built a new vm like this:

    virt-install --name ve3 --ram 128 \
        --disk path=/var/lib/libvirt/images/ve3,size=5,bus=virtio \
        --network bridge:br0 --os-variant virtio26 --vnc \
        --cdrom=/var/lib/libvirt/isos/debian-6.0.3-i386-netinst.iso

    Then connected via VNC and successfully completed the installation of Debian 6.

    But then when the VM is restarted all I get is:

    Booting from Hard Disk...
    Boot failed: could not read the boot disk

    FAILED: No bootable device.

    Any ideas?

  • Hi Sleddog

    Use IDE for iso's.


  • Not sure I follow you... the Deb installation used virtio and installed to /dev/vda1

    But after restarting the guest it can't see that boot device.

    I can edit the config and change to 'ide' and then it boots, but it isn't using virtio then.

  • Sorry, I missed the bit about it finishing the install.
    I thought the ISO was not booting and it was going straight to the disk when trying to install.
    Bit brain dead on this side.
    I have found that at times, if I create the disk as IDE or SCSI and then try to use it as a virtio disk, it can't be read. Not sure if this applies to you though.


  • Got it to boot with virtio from /dev/vda1 by editing the config (the more conventional libvirt syntax is noted after this post):

    Change:

    <source file='/var/lib/libvirt/images/ve3.img'/>

    To:

    <source file='/var/lib/libvirt/images/ve3.img,if=virtio,boot=on'/>

    But unfortunately there's a problem...:

    Without virtio, in a vm:

    [email protected]:~# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 61.7273 s, 17.4 MB/s

    With virtio, in the new, now-bootable vm:

    [email protected]:~# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    ^C12795+0 records in
    12795+0 records out
    209633280 bytes (210 MB) copied, 319.432 s, 656 kB/s

    As you can see, I eventually Ctrl-C'd it to cancel.
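
    (For reference, the usual way to ask libvirt for a virtio disk is the target element rather than qemu-style options inside the source path. A minimal sketch, with the device name assumed:)

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/ve3.img'/>
      <target dev='vda' bus='virtio'/>   <!-- exposes the image to the guest as a virtio disk -->
    </disk>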

  • fanovpn Member
    edited October 2011

    @sleddog: It sure sounds a lot like https://bugzilla.redhat.com/show_bug.cgi?id=514899 but I can't imagine your qemu-kvm release is that old.

    Edit: Never mind, sounds like you got it to boot.

  • Yes it boots, but disk performance with or without virtio is at best 4 or 5 times slower than the 'bare metal' host node.

    I'm about ready to give OpenVZ a try....

  • Francisco Top Provider

    Is your .img a full-sized file or is it a qcow2 file?

    Francisco

  • miTgiB Member
    edited October 2011

    Some other things to try as well if you are not using hardware raid:

    In the VM: echo noop > /sys/block/vda/queue/scheduler

    On the HN: echo deadline > /sys/block/sda/queue/scheduler

    I first came across this here: http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaat/liaatbpscheduleroverview.htm

    But some discussion resulted in noop and deadline instead of straight noop. I have these in /etc/rc.local (sketched below) since they are not persistent through restarts.
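
    A sketch of what that /etc/rc.local might contain (device names are assumptions; adjust to your actual disks and VMs):

    # on the host node, assuming the raid members are sda/sdb:
    echo deadline > /sys/block/sda/queue/scheduler
    echo deadline > /sys/block/sdb/queue/scheduler

    # inside each VM, assuming a virtio disk vda:
    echo noop > /sys/block/vda/queue/scheduler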

  • @miTgiB

    [[email protected] ~]# echo noop > /sys/block/vda/queue/scheduler
    [[email protected] ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 19.6011 seconds, 54.8 MB/s
    [[email protected] ~]# echo deadline > /sys/block/vda/queue/scheduler
    [[email protected] ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 6.47083 seconds, 166 MB/s
  • dmmcintyre3 said: 1073741824 bytes (1.1 GB) copied, 6.47083 seconds, 166 MB/s

    That is very interesting indeed, since the HN has a hardware raid card. Checking further:

    [[email protected] ~]# cat /sys/block/sda/queue/scheduler

    noop anticipatory deadline [cfq]

  • Francisco said: Is your .img a full-sized file or is it a qcow2 file?

    It's a raw / full size image.

    A few changes to the virtio VM got it performing at least as well as the IDE VM.

    Changes to /sys/block/vda/queue/scheduler in the VM and on the host had absolutely no effect on performance.

    On the host:

    [[email protected]:~] dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 4.2178 seconds, 63.6 MB/s

    In the virtio VM:

    [email protected]:~# dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 15.5236 s, 17.3 MB/s
    
    [email protected]:~# ioping -c 10 .
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=1 time=0.9 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=2 time=0.9 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=3 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=4 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=5 time=0.7 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=6 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=7 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=8 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=9 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=10 time=0.8 ms
    
    --- . (ext3 /dev/disk/by-uuid/cd73...) ioping statistics ---
    10 requests completed in 9025.1 ms, 1274 iops, 5.0 mb/s
    min/avg/max/mdev = 0.7/0.8/0.9/0.0 ms
  • miTgiB said: That is very interesting indeed, since the HN has a hardware raid card,

    Playing around a bit on a new empty system, messing with the scheduler on an HN with HW raid has no real effect:

    [[email protected] home]# cat /sys/block/sda/queue/scheduler
    noop anticipatory deadline [cfq]

    [[email protected] home]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 9.53423 s, 113 MB/s

    [[email protected] home]# echo deadline > /sys/block/sda/queue/scheduler
    [[email protected] home]# dd if=/dev/zero of=test2 bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 9.82995 s, 109 MB/s

    [[email protected] home]# echo noop > /sys/block/sda/queue/scheduler
    [[email protected] home]# dd if=/dev/zero of=test3 bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 9.42552 s, 114 MB/s

  • What disk config are you running? The iops look good.


  • WD RE2 500GB SATA, 2 disks in RAID 1.

    [[email protected]:~] cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdb1[1] sda1[0]
          104320 blocks [2/2] [UU]
    
    md1 : active raid1 sdb2[1] sda2[0]
          1052160 blocks [2/2] [UU]
    
    md2 : active raid1 sdb3[1] sda3[0]
          487227264 blocks [2/2] [UU]
    
    [[email protected]:~] mount
    /dev/md2 on / type ext3 (rw,noatime)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    /dev/md0 on /boot type ext3 (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
  • JustinBoshoff Member
    edited October 2011

    What controller are you running? The iops and the disk config don't match up.
    Are you running software -- sorry, software raid?
    Have you updated the firmware on the controller?


  • Software raid. The ioping was inside a VM...

  • Is /dev/md1 swap? It's not needed to raid swap, as the kernel will do that natively. You'll pick up a few cpu cycles, probably nothing noticeable, but savings are savings.

    If you are still in a testing-type phase, you might look at creating a PV and then assigning each VM its own LV; that adds the ability to create snapshots for easy backups, and you can gain a bit of performance over disk images as well (a rough sketch follows below).
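
    A rough sketch of that LVM approach (the array, VG and LV names are placeholders, not from this box):

    # make the raid array an LVM physical volume and create a volume group
    pvcreate /dev/md2                  # assumption: md2 is the big data array
    vgcreate vg_kvm /dev/md2

    # one logical volume per VM, used directly as the guest's disk
    lvcreate -L 20G -n ve3 vg_kvm      # then point the guest at /dev/vg_kvm/ve3

    # snapshot for an easy backup, remove it when done
    lvcreate -s -L 2G -n ve3-snap /dev/vg_kvm/ve3
    lvremove /dev/vg_kvm/ve3-snap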

  • sleddog Member
    edited October 2011

    miTgiB said: is /dev/md1 swap? It is not needed to raid swap as the kernel will do that natively. You'll pick up a few cpu cycles, probably nothing noticeable, but savings are savings.

    Yup. Do you mean create a swap partition on each drive and swapon each? Grub is installed on each drive, so the system will boot in case one drive dies.

    I wondered about this at setup, but wasn't too concerned about it. With the amount of memory and the known use it will get, swapping is minimal -- a few hundred KB.

    Yes, I'm still testing, but I need to wrap this up soon :) I'm beginning to think that the underlying issue may be hardware -- the MB and/or CPU. Yes, they support HVM, but that doesn't mean everything.

    I'm thinking now I might run disk-intensive tasks -- LAN file access from a group of Windows machines -- directly on the host, and use VMs for dev/testing things, where sequential write speed really doesn't matter.

    While I'm not getting great sequential write speeds in the VMs, for my real-world use it's perfectly fine. More important for me, the VMs have proven to be completely stable over a few days of use.

  • sleddog said: Do you mean create a swap partition on each drive and swapon each?

    Yes, exactly! The kernel will make use of both while both are reachable, in a raid0 fashion, or if one drive fails, it can deal with that as well. (An fstab sketch is at the end of this post.)

    The nice thing for me with KVM is that you can have your system set up how you want and just add a simple kernel module for KVM support to add VMs as needed, which is what I've done at home. One day the power bill came and, while it was no more or less than normal, it prompted me to move my Windows Home Server (which I was really only using to back up the client machines in my apartment) into a KVM VM on my Debian fileserver. That box is a huge beast with a faster-than-average raid60 setup and 64TB of raw drive space (44TB formatted xfs), and adding this one KVM VM was painless and had zero effect on my NAS.
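
    A sketch of the fstab entries for that dual-swap setup (partition names are assumptions; equal priorities let the kernel stripe swap across both drives):

    # /etc/fstab on the host node
    /dev/sda2   none   swap   defaults,pri=1   0 0
    /dev/sdb2   none   swap   defaults,pri=1   0 0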

  • miTgiB said: Yes, exactly! The kernel will make use of both while both are reachable, in a raid0 fashion, or if one drive fails, it can deal with that as well.

    Thanks for sharing that. Now I know.

    When I put this box into production next week it'll replace an existing one with similar specs. Then I'll use that box for further KVM testing (at my leisure).

    Your advice is much appreciated. As is the advice of everyone here :)

  • kiloserve Member
    edited October 2011

    miTgiB said: Yes, exactly! The kernel will make use of both while both are reachable, in a raid0 fashion, or if one drive fails, it can deal with that as well.

    That's an excellent suggestion for swap, btw. One of the best features of using KVM virt is the shared swap off the host node.

    With your 2 individual drive setup, it's like having RAID0 speed while still having RAID1 durability (it's not a deal-breaker if 1 drive goes down).

  • The problem with this config on a KVM host is that when RAM is under pressure it will write to both swap spaces if the swap priority is the same, so in essence you will have live RAM from your VMs striped across the swap spaces, and if one drive fails it will be like a DIMM going faulty.


  • kiloserve Member
    edited October 2011

    JustinBoshoff said: The problem with this config on a KVM host is that when RAM is under pressure it will write to both swap spaces if the swap priority is the same, so in essence you will have live RAM from your VMs striped across the swap spaces, and if one drive fails it will be like a DIMM going faulty.

    This would also be true if you only had 1 swap disk rather than 2. With 2, at least you can reboot your VPS and continue on your way until you have time to replace it.

  • JustinBoshoff Member
    edited October 2011

    Sorry, what if it's 1 mirrored swap?


  • JustinBoshoff said: Sorry, what if it's 1 mirrored swap?

    My suggestion was to get rid of the mirrored swap to avoid your scenario, which is currently possible.

  • I must be missing something here?
    So if you have a mirrored swap and one of the drives in the mirror fails, your VMs will fail??
    Am I getting this right?


  • I think we are both confusing ourselves :)

    VMs won't fail with a mirrored swap.

  • Ok Cool
    Sorry if I came across a bit miffed, had very little sleep the last week.
    KVM Rocks!


  • I stopped obsessing about disk write speed :) and moved on to setting up some application environments.

    VM1 is for web development, Debian 6 with nginx, php-fpm, mysql, etc. Works a treat, performance is absolutely fine. The 'dd' test numbers just don't matter for this kind of application.

    VM2 is DNS for the building I'm in (10-20 office users). Basically a caching nameserver + some custom entries -- e.g., example.com points to the dev webserver when we're in the midst of developing a new website for example.com. I was doing this with bind, which was a PITA. Today I found dnsmasq. Joy. I just add "dev.server.ip.address example.com" to /etc/hosts in that VM and dnsmasq serves it to the LAN (a minimal sketch is at the end of this post).

    And while I'm in there, I can add "dev.server.ip.address facebook.com" and see who's the first to remark on the new facebook website.
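
    A minimal sketch of that kind of dnsmasq setup (the interface, upstream server and addresses are placeholders):

    # /etc/dnsmasq.conf -- caching nameserver for the LAN
    interface=eth0                 # listen on the VM's LAN interface
    server=8.8.8.8                 # upstream resolver (placeholder)

    # /etc/hosts on the same VM -- dnsmasq serves these names to LAN clients
    192.168.1.50   example.com     # placeholder: points at the dev webserver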

  • Postscript:

    While I said that "performance is absolutely fine", once I started using it for more intensive PHP/MySQL apps (like phpMyAdmin) it wasn't. In phpMyAdmin (for example) I'd see 2-3 second delays on every page load. A bash script that doesn't hit the disk and usually would run in less than 1 second would take 4-5 seconds.

    I read endless websites relating to KVM, looking for answers to this poor performance, and finally gave up.

    I set up an identical box (identical in terms of hardware & software RAID), installed CentOS 5 and OpenVZ, and moved everything there. Blazing fast, I'm happy :) Container-based virtualization provides all the separation I need at this time. I'll revisit KVM another day...
