Recommend OS for KVM Host Node

sleddog Member
edited October 2011 in General

I asked this on WHT and thought I'd try here also... :)

What would you recommend as the OS for a KVM host node? CentOS 6? Something else?

This is for a LAN box, to be used for various storage & development tasks. Not public or commercial usage.


Comments

  • Archlinux, I think it's a good choice for development.

  • Ubuntu works wonders

  • sleddog said: What would you recommend as the OS for a KVM host node?

    Whatever you are most comfortable with.

    Thanked by: drmike, Infinity, jamson
  • JustinBoshoff Member
    edited October 2011

    Hi Sleddog

    For a corp solution: http://www.proxmox.com/products/proxmox-ve.
    Easy to install, very versatile, can run OpenVZ and KVM on the same box; the storage options are great, and it does live migration as well.

  • fanovpn Member
    edited October 2011

    miTgiB said: Whatever you are most comfortable with.

    This.

    I use ArchLinux, which has the advantage of running the most recent kernel and qemu-kvm release, so I get all the latest (but usually undocumented) KVM features. But really it's just that I'm most comfortable with Arch.

    The only thing I'd recommend checking when deciding if your favourite distribution is suitable is whether libvirt is easy to install (e.g. whether it's in the distribution's package repositories). A lot of features and workarounds for qemu-kvm aren't documented anywhere except libvirt's source code, so libvirt is almost a necessity with qemu-kvm if you don't want to do tons of research and code-diving. On the other hand, it means writing XML, which might tip the balance back toward the research and code-diving :)

    But like miTgiB said, it all comes down to what you're most comfortable with because a distribution you know how to configure will work better than one with the latest features that you can't use or can't keep up-to-date and stable.

  • Thanks guys, very helpful. With these suggestions I'm down to either CentOS 6 or Debian 6. I'll probably give CentOS a whirl first and see how it goes.

  • Hmmm... I see CentOS 6.1 is still not released. Perhaps I'll try Debian :)

  • sleddog said: CentOS 6.1 is still not released.

    Scientific Linux 6.1 has been released though. But honestly, I have CentOS 6.0 KVM nodes that have been up since the week CentOS 6.0 was released. If I wasn't tied to SolusVM, I would probably use Debian 6 even though I am much more comfortable in CentOS. I have a Debian KVM node at home which has given me zero problems in the year it's been running.

  • Debian 6 forces the use of LILO instead of GRUB for software RAID. I'm not thrilled with that. And software RAID is the only option budget-wise at the moment (I've used it for years so RAID management isn't an issue).

    And the report from some providers (not you, miTgiB) of CentOS 6 issues is also bothersome.

    So mulling it over, I think maybe CentOS 5.x. When it EOLs in March 2014 it'll be time for new hardware anyway. The stuff I'm using is already ~5 years old. Good time for a new build and hopefully the budget will be brighter :)

  • fanovpn said: I use ArchLinux, which has the advantage of running the most recent kernel and qemu-kvm release, so I get all the latest (but usually undocumented) KVM features. But really it's just that I'm most comfortable with Arch.

    +1. I use Arch everywhere.

  • If you need to use software raid:
    http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny
    Install Lenny with the raid setup you need, then add the pve repo to install Proxmox.

    Thanked by: Kairus
  • kiloserve Member
    edited October 2011

    sleddog said: What would you recommend as the OS for a KVM host node?

    CentOS6 or CentOS5 + KVM + a good RAID setup = pure ownage for customized kernels or non-standard guests (Windows, BSD, Arch) on modern processors with VT-d enabled.

    With a good RAID setup, you can install Windows or CentOS on a KVM VPS in less than 10 minutes and 5 minutes respectively.

    If you are using basic Linux guests, XenPV with pygrub will probably still give you the best performance plus standard Linux flavor compatibility.

  • JustinBoshoff said: If you need to use software raid. http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Lenny So you will install Lenny with the raid you need and then add the pve repo to install proxmox.

    Thanks for that. Though I'm not sure about installing Lenny at this time. Updates might cease in a year? http://wiki.debian.org/DebianLenny

    And thanks Kiloserve, though Windows is not a factor :)

  • Hi Sleddog

    The Lenny install will be rock solid; as I am sure you know, that's "Debian Stable" for you.
    With PVE 2.0 on the horizon there will be a straight repo-based upgrade to Squeeze soon.

    Give it a kick, I can promise you, you won't be disappointed.
    KSM works like a charm.

    Just my 2 cents.

  • Well I'm up and running with CentOS 5.7 & KVM. Took a while to figure out the network bridge setup but it's working now. Successfully created a VM using virt-install and installed Debian 6. It can be accessed directly on the LAN and has internet access. Lots to learn... :)
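    For anyone trying the same thing, the bridge setup I fought with boils down to two ifcfg files on CentOS 5.x. A sketch only: the NIC name, addresses, and static addressing here are examples, adjust for your hardware and LAN:

    ```shell
    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- physical NIC, enslaved to the bridge
    DEVICE=eth0
    ONBOOT=yes
    BRIDGE=br0

    # /etc/sysconfig/network-scripts/ifcfg-br0 -- the bridge carries the IP
    DEVICE=br0
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    ```

    Then `service network restart` and pass bridge:br0 to virt-install.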

  • Good to hear.
    Have fun.

  • Francisco Top Host, Host Rep, Veteran

    sleddog said: Thanks for that. Though I'm not sure about installing Lenny at this time. Updates might cease in a year? http://wiki.debian.org/DebianLenny

    And thanks Kiloserve, though Windows is not a factor :)

    The lenny install works just fine with squeeze :)

    Francisco

  • Only real hiccup had nothing to do with KVM -- there's a new bug in the CentOS 5.7 netinstall whereby the grub bootloader is not set up when using software RAID 1... booting into rescue mode and installing it manually works :)

  • Francisco said: The lenny install works just fine with squeeze :)

    Good to know... maybe I'll start over :)

  • sleddog Member
    edited October 2011

    Can't get virtio working. I built a new vm like this:

    virt-install --name ve3 --ram 128 \
        --disk path=/var/lib/libvirt/images/ve3,size=5,bus=virtio \
        --network bridge:br0 --os-variant virtio26 --vnc \
        --cdrom=/var/lib/libvirt/isos/debian-6.0.3-i386-netinst.iso

    Then connected via VNC and successfully completed the installation of Debian 6.

    But then, when the VM is restarted, all I get is:

    Booting from Hard Disk...
    Boot failed: could not read the boot disk

    FAILED: No bootable device.

    Any ideas?

  • Hi Sleddog

    Use IDE for ISOs.

  • Not sure I follow you... the Deb installation used virtio and installed to /dev/vda1

    But after restarting the guest it can't see that boot device.

    I can edit the config and change to 'ide' and then it boots, but it isn't using virtio then.

  • Sorry I missed the bit about it finishing the install.
    I thought the iso was not booting and going straight to the disk on trying to install.
    Bit brain dead on this side.
    I have found that at times if I create the disk as IDE or SCSI and then try to use it as a virtio disk, it can't be read. Not sure if this applies to you though.

  • Got it to boot with virtio from /dev/vda1 by editing the config:

    Change:

    <source file='/var/lib/libvirt/images/ve3.img'/>

    To:

    <source file='/var/lib/libvirt/images/ve3.img,if=virtio,boot=on'/>

    But unfortunately there's a problem...:

    Without virtio, in a vm:

    root@ve1:~# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 61.7273 s, 17.4 MB/s

    With virtio, in the new, now-bootable vm:

    root@ve3:~# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    ^C12795+0 records in
    12795+0 records out
    209633280 bytes (210 MB) copied, 319.432 s, 656 kB/s

    As you can see, I eventually Ctrl-C'd it to cancel.
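    (Side note: the if=virtio,boot=on part is qemu command-line syntax, not libvirt XML; the standard libvirt way to declare a virtio disk is the bus attribute on the target element. A minimal sketch of the disk stanza, with the path from above and the rest illustrative:

    ```xml
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/ve3.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    ```

    Applied via virsh edit ve3, the guest should then see the disk as /dev/vda.)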

  • fanovpn Member
    edited October 2011

    @sleddog: It sure sounds a lot like https://bugzilla.redhat.com/show_bug.cgi?id=514899 but I can't imagine your qemu-kvm release is that old.

    Edit: Never mind, sounds like you got it to boot.

  • Yes, it boots, but disk performance with or without virtio is at best 4 or 5 times slower than the 'bare metal' host node.

    I'm about ready to give OpenVZ a try....

  • Francisco Top Host, Host Rep, Veteran

    Is your .img a full-sized file or is it a qcow2 file?

    Francisco

  • miTgiB Member
    edited October 2011

    Some other things to try as well if you are not using hardware raid:

    In the VM: echo noop > /sys/block/vda/queue/scheduler

    On the HN: echo deadline > /sys/block/sda/queue/scheduler

    I first came across this http://publib.boulder.ibm.com/infocenter/lnxinfo/v3r0m0/index.jsp?topic=/liaat/liaatbpscheduleroverview.htm

    But some discussion resulted in noop and deadline instead of straight noop. I have these in /etc/rc.local since they are not persistent through restarts.
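    A sketch of what the /etc/rc.local additions might look like; device names assume vda in the guest and sda on the host node, as above -- check yours with ls /sys/block:

    ```shell
    # Inside the VM: skip guest-side I/O reordering, the host will do it anyway
    echo noop > /sys/block/vda/queue/scheduler

    # On the host node: deadline instead of the default cfq
    echo deadline > /sys/block/sda/queue/scheduler
    ```

    cat /sys/block/sda/queue/scheduler shows the active scheduler in square brackets.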

  • @miTgiB

    [root@tiger ~]# echo noop > /sys/block/vda/queue/scheduler
    [root@tiger ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 19.6011 seconds, 54.8 MB/s
    [root@tiger ~]# echo deadline > /sys/block/vda/queue/scheduler
    [root@tiger ~]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 6.47083 seconds, 166 MB/s
  • dmmcintyre3 said: 1073741824 bytes (1.1 GB) copied, 6.47083 seconds, 166 MB/s

    That is very interesting indeed, since the HN has a hardware raid card, and checking further

    [root@kvm02 ~]# cat /sys/block/sda/queue/scheduler

    noop anticipatory deadline [cfq]
