Optimize KVM performance
I was trying to find ways to optimize KVM installs of CentOS 6.3 or Ubuntu 12.04 and could not find any good information except:
1) modify /boot/grub/menu.lst or /boot/grub/grub.cfg to set the VPS I/O scheduler to noop
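For example, on a grub2 system such as Ubuntu 12.04 the scheduler can be set on the kernel command line; the file and variable below are the stock defaults (run update-grub afterwards). On legacy grub (CentOS 6) the equivalent is appending elevator=noop to the kernel line in /boot/grub/menu.lst.

```
# /etc/default/grub (grub2), then run: update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=noop"
```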
2) use Virtio Drivers
3) fix the swap partition if it's 0
free -m              # check current swap size and usage
swapoff /dev/vda2    # disable swap before reinitializing it
mkswap /dev/vda2     # rebuild the swap signature on the partition
swapon /dev/vda2     # re-enable swap
4) improve I/O performance
echo 0 > /sys/block/vda/queue/rotational      # mark the virtual disk as non-rotational
echo 0 > /sys/block/vda/queue/rq_affinity     # don't pin request completions to the submitting CPU
echo noop > /sys/block/vda/queue/scheduler    # let the host handle I/O ordering
# persist the settings across reboots:
echo "echo 0 > /sys/block/vda/queue/rotational" >> /etc/rc.local
echo "echo 0 > /sys/block/vda/queue/rq_affinity" >> /etc/rc.local
echo "echo noop > /sys/block/vda/queue/scheduler" >> /etc/rc.local
echo 'vm.swappiness=5' >> /etc/sysctl.conf          # swap only when nearly out of memory
echo 'vm.vfs_cache_pressure=50' >> /etc/sysctl.conf # hold on to dentry/inode caches longer
sysctl -p                                           # apply the sysctl changes now
5) mount partitions with the noatime option.
vi /etc/fstab
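A sample entry, assuming the root filesystem is /dev/vda1 on ext4 (adjust the device and options to your setup):

```
# /etc/fstab — noatime skips the access-time write on every file read
/dev/vda1  /  ext4  defaults,noatime  0  1
```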
any other links, tips or ideas?
RamNode seems to have a well-done guide:
https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=44
Comments
There's not much you can do to optimize KVM performance. What issues do you have right now?
Move to a faster node
The virtio driver instead of the legacy IDE one made a noticeable difference, at least on CentOS.
yeah, I meant except the methods you already described
Use ext4 instead of ext3 when installing.
The kernel could also be recompiled; my choices are:
Not compiling for size
Preemption model: No Forced Preemption (Server)
Processor family: depends on the CPU used
Reduce the maximum number of CPUs
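Those menuconfig choices roughly correspond to these .config symbols (illustrative only; exact symbol names and the right CPU count vary by kernel version and hardware):

```
# Not compiling for size:
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
# No Forced Preemption (Server):
CONFIG_PREEMPT_NONE=y
# Cap the maximum CPU count (example value, match your vCPU count):
CONFIG_NR_CPUS=4
```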
None of those instructions are SSD-oriented except this one:
echo 0 > /sys/block/vda/queue/rotational
The nobarrier option seems to be suggested because they use BBU raid controllers.
The noatime option isn't really recommended; apparently "relatime" is a better choice.
The noop scheduler is the recommended one for KVM too.
Why this?
noop is good for KVM
+1
If you're having any issues on a RamNode VPS, don't hesitate to open a ticket.
Otherwise, carry on! Perhaps I'll learn a thing or two in this thread as well.
Adding the following lines to your /etc/sysctl.conf file will increase network throughput. You can also try the following /etc/fstab options on your ext4 mount.

I don't like the noatime one =/
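The code blocks from that post didn't survive, but commonly suggested values look something like this (illustrative guesses on my part, not the poster's exact list):

```
# /etc/sysctl.conf — larger socket buffers for network throughput
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# /etc/fstab ext4 options — judging by the replies about noatime and
# writeback, likely something like: noatime,data=writeback,barrier=0
```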
Also, I am afraid of the cache one.
But save all settings in case you need rollback ...
I've found that double writeback is not helpful. Just let the hardware handle writeback in my opinion.
So, in the end, the "perfect" settings depend on the provider :P
I believe this isn't a provider-specific topic, so there's always a chance someone's provider isn't using writeback.
Sure, sorry.
Anyway, have you guys seen a KVM guest become very slow after applying tweaks like the ones above?
Does a higher count of threads or CPUs have any predictable effect, or is it worse, as in the case of VMware?
:O What do you mean?
VMware does not behave well when you assign multiple vCores to multiple VMs. You actually pay a penalty in that case. VMware dispatches vCores using a time-slicing algorithm (a fair approach), not an interrupt-driven algorithm as is typically used to dispatch processes in a regular operating system. What this means in practice is that if you assign, say, 2 vCores to a guest, then 2 vCores must be AVAILABLE AT THE SAME TIME before that guest can run at all.
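A toy sketch of that strict co-scheduling rule (my own illustration, not VMware code): a guest can only be dispatched when the number of free physical cores is at least its vCore count, so a 2-vCore guest stalls even while one core sits idle.

```shell
#!/bin/sh
# can_dispatch FREE_CORES VCPUS — prints "yes" if the guest can run now.
# Under strict co-scheduling, ALL vCores must get a physical core at once.
can_dispatch() {
  if [ "$1" -ge "$2" ]; then echo yes; else echo no; fi
}

can_dispatch 1 1   # a 1-vCore guest runs on the single free core
can_dispatch 1 2   # a 2-vCore guest must wait for 2 free cores
```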
I was just wondering how to offer increased performance to certain customers who need it and are willing to pay, on a KVM platform.
Wow, that sounds a little bit dumb
No it's not, actually. Quite the opposite. It's a complicated subject, statistical in nature ...
@goexodus You get more performance with more vCores. Take a look at ServerBear's UnixBench results: if you have more vCores you get more points. It works very well with Linux KVM.
You can pin each vCore to a specific physical core if you want to get the best results.
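With libvirt-managed KVM, pinning can be expressed in the domain XML (the vcpu/cpuset numbers below are made-up examples; adjust them to your host's topology):

```xml
<!-- inside the <domain> definition -->
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
</cputune>
```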
Which sounds obvious lol
I guess so. VMware stuff has years and years of development
But if you check their licensing you will get a heart attack ...