
Recommend OS for KVM Host Node


Comments

  • Francisco said: Is your .img a full-sized file or is it a qcow2 file?

    It's a raw / full size image.

    A few changes to the virtio VM got it performing at least as well as the IDE VM (a sketch of the kind of tuning involved is at the end of this comment).

    Changes to /sys/block/vda/queue/scheduler in the VM and on the host had absolutely no effect on performance.

    On the host:

    [root@node1:~] dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 4.2178 seconds, 63.6 MB/s

    In the virtio VM:

    root@ve3:~# dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 15.5236 s, 17.3 MB/s
    
    root@ve3:~# ioping -c 10 .
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=1 time=0.9 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=2 time=0.9 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=3 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=4 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=5 time=0.7 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=6 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=7 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=8 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=9 time=0.8 ms
    4096 bytes from . (ext3 /dev/disk/by-uuid/cd73...): request=10 time=0.8 ms
    
    --- . (ext3 /dev/disk/by-uuid/cd73...) ioping statistics ---
    10 requests completed in 9025.1 ms, 1274 iops, 5.0 mb/s
    min/avg/max/mdev = 0.7/0.8/0.9/0.0 ms
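
    For illustration only (the exact changes aren't listed above, so the settings here are assumptions rather than what was actually used), the kind of virtio disk tuning usually involved looks like this on the qemu/kvm command line:

    # cache=none bypasses the host page cache; aio=native uses native AIO on the raw .img
    # (aio=native requires cache=none); the binary name and /path/to/vm.img are placeholders
    qemu-kvm -drive file=/path/to/vm.img,if=virtio,cache=none,aio=native ...

    With libvirt, the same settings go in the disk's <driver> element as cache='none' io='native'.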
  • miTgiB said: That is very interesting indeed, since the HN has a hardware raid card,

    Playing around a bit on a new, empty system: changing the scheduler on an HN with HW RAID has no real effect.

    [root@rickhz home]# cat /sys/block/sda/queue/scheduler
    noop anticipatory deadline [cfq]

    [root@rickhz home]# dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 9.53423 s, 113 MB/s

    [root@rickhz home]# echo deadline > /sys/block/sda/queue/scheduler
    [root@rickhz home]# dd if=/dev/zero of=test2 bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 9.82995 s, 109 MB/s

    [root@rickhz home]# echo noop > /sys/block/sda/queue/scheduler
    [root@rickhz home]# dd if=/dev/zero of=test3 bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 9.42552 s, 114 MB/s

  • What disk config are you running, as the iops look good?

  • WD RE2 500GB SATA, 2 disks in RAID 1.

    [root@node1:~] cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdb1[1] sda1[0]
          104320 blocks [2/2] [UU]
    
    md1 : active raid1 sdb2[1] sda2[0]
          1052160 blocks [2/2] [UU]
    
    md2 : active raid1 sdb3[1] sda3[0]
          487227264 blocks [2/2] [UU]
    
    [root@node1:~] mount
    /dev/md2 on / type ext3 (rw,noatime)
    proc on /proc type proc (rw)
    sysfs on /sys type sysfs (rw)
    devpts on /dev/pts type devpts (rw,gid=5,mode=620)
    /dev/md0 on /boot type ext3 (rw)
    tmpfs on /dev/shm type tmpfs (rw)
    none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
  • JustinBoshoff Member
    edited October 2011

    What controller are you running, as the iops and disk config don't match up?
    Are you running software RAID?
    Have you updated the firmware on the controller?

  • Software raid. The ioping was inside a VM...

  • Is /dev/md1 swap? There's no need to RAID swap, as the kernel will do that natively. You'll pick up a few CPU cycles, probably nothing noticeable, but savings are savings.

    If you are still in a testing phase, you might look at creating a PV and then assigning each VM an LV. That adds the ability to create snapshots for easy backups, and you can gain a bit of performance over disk images as well.
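
    A minimal sketch of that LVM layout, assuming a spare array or partition is available as the physical volume (the device /dev/md3, the names vg_kvm and vm1-disk, and the sizes are all hypothetical):

    pvcreate /dev/md3                      # mark the spare device as an LVM physical volume
    vgcreate vg_kvm /dev/md3               # volume group to hold the VM disks
    lvcreate -L 20G -n vm1-disk vg_kvm     # one logical volume per VM, used as its disk

    # point the VM at the LV instead of an .img, e.g. with virt-install:
    #   --disk path=/dev/vg_kvm/vm1-disk,bus=virtio

    # snapshot for an easy backup while the VM runs (needs free space left in the VG)
    lvcreate -s -L 5G -n vm1-snap /dev/vg_kvm/vm1-disk
    dd if=/dev/vg_kvm/vm1-snap of=/backup/vm1.img bs=1M
    lvremove -f /dev/vg_kvm/vm1-snap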

  • sleddog Member
    edited October 2011

    miTgiB said: Is /dev/md1 swap? There's no need to RAID swap, as the kernel will do that natively. You'll pick up a few CPU cycles, probably nothing noticeable, but savings are savings.

    Yup. Do you mean create a swap partition on each drive and swapon each? Grub is installed on each drive, so the system will boot in case one drive dies.

    I wondered about this at setup, but wasn't too concerned about it. With the amount of memory and the known use it will get, swapping is minimal -- a few hundred KB.

    Yes, I'm still testing, but I need to wrap this up soon :) I'm beginning to think that the underlying issue may be hardware -- the MB and/or CPU. Yes, they support HVM, but that doesn't mean everything.

    I'm thinking now I might run disk-intensive tasks -- LAN file access from a group of Windows machines -- directly on the host, and use VMs for dev/testing things, where sequential write speed really doesn't matter.

    While I'm not getting great sequential write speeds in the VMs, for my real-world use it's perfectly fine. More important for me, the VMs have proven to be completely stable over a few days of use.

  • sleddog said: Do you mean create a swap partition on each drive and swapon each?

    Yes, exactly! The kernel will make use of both while both are reachable, in a raid0 fashion, or if one drive fails, it can deal with that as well (a minimal fstab sketch is at the end of this comment).

    The nice thing for me with KVM is that you can have your system set up how you want, and just add the KVM kernel module to run VMs as needed, which I've done at home. One day the power bill came, and while it wasn't any more or less than normal, it shocked me into moving my Windows Home Server, which I was really only using to back up the client machines in my apartment, into a KVM VM on my Debian fileserver. It's a huge beast with a faster-than-average RAID60 setup and 64TB of raw drive space (44TB formatted XFS), and adding this one KVM VM was painless and had zero effect on my NAS.
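
    A minimal sketch of that setup, assuming md1 is dismantled and its members are reformatted as plain swap (sda2/sdb2 come from the mdstat output above; the mdadm.conf cleanup is left out):

    swapoff /dev/md1                        # stop using the mirrored swap
    mdadm --stop /dev/md1                   # tear down the md1 array
    mkswap /dev/sda2 && mkswap /dev/sdb2    # re-initialise each member as plain swap

    # /etc/fstab -- equal pri= values make the kernel interleave the two swap areas
    /dev/sda2   swap   swap   defaults,pri=1   0 0
    /dev/sdb2   swap   swap   defaults,pri=1   0 0

    swapon -a then activates both, and cat /proc/swaps shows the priorities in use.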

  • miTgiB said: Yes, exactly! The kernel will make use of both while both are reachable, in a raid0 fashion, or if one drive fails, it can deal with that as well.

    Thanks for sharing that. Now I know.

    When I put this box into production next week it'll replace an existing one with similar specs. Then I'll use that box for further KVM testing (at my leisure).

    Your advice is much appreciated. As is the advice of everyone here :)

  • kiloserve Member
    edited October 2011

    miTgiB said: Yes, exactly! The kernel will make use of both while both are reachable, in a raid0 fashion, or if one drive fails, it can deal with that as well.

    That's an excellent suggestion for swap, btw. One of the best features of using KVM virt is the shared swap off the host node.

    With your 2 individual drive setup, it's like having RAID0 speed while still having RAID1 durability (it's not a deal-breaker if 1 drive goes down).

  • The problem with this config on a KVM host is that when RAM is under pressure it will write to both swap spaces if the swap priority is the same, so in essence you will have live RAM from your VMs striped across the swap spaces, and if one fails it will be like a DIMM going faulty.

  • kiloserve Member
    edited October 2011

    JustinBoshoff said: The problem with this config on a KVM host is that when RAM is under pressure it will write to both swap spaces if the swap priority is the same, so in essence you will have live RAM from your VMs striped across the swap spaces, and if one fails it will be like a DIMM going faulty.

    This would also be true if you only had 1 swap disk rather than 2. With 2, at least you can reboot your VPS and continue on your way until you have time to replace it.

  • JustinBoshoff Member
    edited October 2011

    Sorry, what if it's 1 mirrored swap?

  • JustinBoshoff said: Sorry, what if it's 1 mirrored swap?

    My suggestion was to get rid of the mirrored swap to avoid your scenario, which is currently possible.

  • I must be missing something here?
    So if you have a mirrored swap and one of the drives in the mirror fails, your VMs will fail??
    Am I getting this?

  • I think we are both confusing ourselves :)

    VMs won't fail with a mirrored swap.

  • Ok, cool.
    Sorry if I came across a bit miffed, I've had very little sleep the last week.
    KVM Rocks!

  • I stopped obsessing about disk write speed :) and moved on to setting up some application environments.

    VM1 is for web development, Debian 6 with nginx, php-fpm, mysql, etc. Works a treat, performance is absolutely fine. The 'dd' test numbers just don't matter for this kind of application.

    VM2 is DNS for the building I'm in (10-20 office users). Basically a caching nameserver plus some custom entries -- e.g., example.com points to the dev webserver when we're in the midst of developing a new website for example.com. I was doing this with bind, which was a PITA. Today I found dnsmasq. Joy. I just add "dev.server.ip.address example.com" to /etc/hosts in that VM and dnsmasq serves it to the LAN (a quick sketch is at the end of this comment).

    And while I'm in there, I can add "dev.server.ip.address facebook.com" and see who's the first to remark on the new facebook website.
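
    A quick sketch of that dnsmasq setup, assuming hypothetical LAN addresses for the DNS VM and the dev webserver (dnsmasq reads /etc/hosts by default, so the overrides are just hosts entries):

    # /etc/dnsmasq.conf -- only the relevant bits, everything else at defaults
    listen-address=192.168.1.53     # hypothetical LAN IP of the DNS VM
    domain-needed                   # don't forward plain hostnames upstream
    bogus-priv                      # don't forward reverse lookups for private ranges

    # /etc/hosts on the same VM -- dnsmasq serves these to the LAN
    192.168.1.60   example.com      # hypothetical dev webserver address

    # after editing /etc/hosts, SIGHUP makes dnsmasq re-read it
    kill -HUP $(pidof dnsmasq)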

  • Postscript:

    While I said that "performance is absolutely fine", once I started using it for more intensive PHP/MySQL apps (like phpMyAdmin) it wasn't. In phpMyAdmin (for example) I'd see 2-3 second delays on every page load. A bash script that doesn't hit the disk and usually would run in less than 1 second would take 4-5 seconds.

    I read endless websites relating to KVM, looking for answers to this poor performance, and finally gave up.

    I set up an identical box (identical in terms of hardware & software RAID), installed CentOS 5 and OpenVZ, and moved everything there. Blazing fast, I'm happy :) Container-based virtualization provides all the separation I need at this time. I'll revisit KVM another day...

  • kiloserve Member
    edited October 2011

    sleddog said: I read endless websites relating to KVM, looking for answers to this poor performance, and finally gave up.

    I can actually answer that one :)

    KVM is full virtualization, versus the container-based approach of OpenVZ. OpenVZ uses native disk access and basically does a jailed root to create the "VPS".

    KVM, on the other hand, emulates disk access using CPU time. The faster your CPU, the faster you can emulate a hard drive. Disk emulation is assisted by VT/VT-d (turn them on in your BIOS/CMOS).

    In your case, you are using software RAID, which uses CPU cycles already to emulate RAID. Then you added KVM disk emulation on top of that.

    Needless to say, you're going to get piss-poor disk I/O.

    KVM CAN be fast with the right hardware...but it has higher overhead than OVZ so you need better equipment to run KVM at comparable speeds to OVZ. In my testing, with good Hardware RAID, you can get 500 MB/s speeds.

    1) You need high-GHz CPUs
    2) You need VT/VT-d (a quick check is sketched at the end of this post)
    3) You need hardware RAID, or drop RAID altogether if you don't have hardware RAID
    4) KVM virtio drivers will add 10-15% more speed
    

    OpenVZ really is the fastest virtualization technology because it has a negligible overhead. KVM's overhead is much higher than VZ and XenPV.

    Personally, I'd stick with VZ or do XenPV in your case.

    XenPV's advantage over VZ is that you'll have better resource granularity and process separation as VZ tends to bleed through at times.
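
    A quick way to check point 2 on the host (standard Linux tooling, nothing specific to this box):

    # vmx = Intel VT-x, svm = AMD-V; a count of 0 means the CPU has no hardware virt
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # if VT is disabled in the BIOS, kvm_intel/kvm_amd will refuse to load even with the flag present
    lsmod | grep kvm                # expect kvm plus kvm_intel or kvm_amd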

  • Thanks for that, kiloserve. I'd pretty much concluded that it was KVM overhead on my hardware + software raid that was killing me. Though I was left puzzled when scripts that did no disk IO ran 3-4 times slower.

    Everything is tweaked and tuned and running fine now on OpenVZ. So I think I'll just leave well enough alone -- at least for a while :)
