Best platform for FreeBSD
Hello LET'ers,
I am looking to see how others are running their FreeBSD VPS boxes.
At the moment, on both the GENERIC kernel and a hand-crafted one, I am only able to achieve 25 MB/s to 30 MB/s.
I doubt it is the provider (who has been great, and where I was achieving 150 MB/s+ on a lower-specced OpenVZ Debian machine), which leads me to think it is a platform or configuration issue.
Is anyone else running FreeBSD in XEN-HVM? What configuration did you use to improve the disk I/O?
What about XEN-PV or KVM... How are your benchmarks there?
Slow dd (3 identical runs):
"$ dd if=/dev/zero of=test bs=64k count=16k
16384+0 records in
16384+0 records out
1073741824 bytes transferred in 32.913429 secs (32623214 bytes/sec)"
"$ dd if=/dev/zero of=test bs=64k count=16k
16384+0 records in
16384+0 records out
1073741824 bytes transferred in 29.392448 secs (36531215 bytes/sec)"
"$ dd if=/dev/zero of=test bs=64k count=16k
16384+0 records in
16384+0 records out
1073741824 bytes transferred in 38.417705 secs (27949140 bytes/sec)"
Fast ioping (seek rate, sequential, and cached request latency):
"# ioping -R /dev/ada0p2
--- /dev/ada0p2 (device 9.5 Gb) ioping statistics ---
310 requests completed in 3005.8 ms, 103 iops, 0.4 mb/s
min/avg/max/mdev = 1.7/9.7/89.9/8.1 ms"
"# ioping -RL /dev/ada0p2
--- /dev/ada0p2 (device 9.5 Gb) ioping statistics ---
72 requests completed in 3001.8 ms, 24 iops, 6.0 mb/s
min/avg/max/mdev = 2.6/41.7/734.5/134.9 ms"
"# ioping -c 10 .
4096 bytes from . (ufs /dev/ada0p2): request=1 time=0.8 ms
4096 bytes from . (ufs /dev/ada0p2): request=2 time=0.9 ms
4096 bytes from . (ufs /dev/ada0p2): request=3 time=0.8 ms
4096 bytes from . (ufs /dev/ada0p2): request=4 time=2.5 ms
4096 bytes from . (ufs /dev/ada0p2): request=5 time=0.7 ms
4096 bytes from . (ufs /dev/ada0p2): request=6 time=1.1 ms
4096 bytes from . (ufs /dev/ada0p2): request=7 time=0.7 ms
4096 bytes from . (ufs /dev/ada0p2): request=8 time=1.2 ms
4096 bytes from . (ufs /dev/ada0p2): request=9 time=0.9 ms
4096 bytes from . (ufs /dev/ada0p2): request=10 time=0.9 ms
--- . (ufs /dev/ada0p2) ioping statistics ---
10 requests completed in 9087.2 ms, 945 iops, 3.7 mb/s
min/avg/max/mdev = 0.7/1.1/2.5/0.5 ms"
May be useful:
"# uname -a
FreeBSD ##### 10.0-CURRENT FreeBSD 10.0-CURRENT #1: Sat May 5 21:55:06 EST 2012 BluBoy@#####:/usr/obj/usr/src/sys/##### i386"
"# top -d1 | grep Free
Mem: 112M Active, 25M Inact, 24M Wired, 4768K Cache, 34M Buf, 76M Free
Swap: 512M Total, 47M Used, 464M Free, 9% Inuse"
" $ sysctl hw.model
hw.model: Intel(R) Xeon(R) CPU L5640 @ 2.27GHz"
"$ kldstat
Id Refs Address Size Name
1 15 0xc0400000 79f1b4 kernel
2 5 0xc0ba0000 428c virtio.ko
3 1 0xc0ba5000 557c virtio_pci.ko
4 1 0xc0bab000 50b8 virtio_blk.ko
5 1 0xc0bb1000 a264 if_vtnet.ko
6 1 0xc0bbc000 386c virtio_balloon.ko
7 1 0xc2786000 5000 ums.ko"
Comments
You should run it in KVM, because FreeBSD 9+ has virtio drivers built in. I don't think there are any comparable drivers for Xen-HVM with FreeBSD.
HVM is not very nice. KVM + VirtIO is so much better.
Also, on BSD I usually install the GNU utilities; that gets you the GNU version of dd, which supports conv=fdatasync.
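For anyone who hasn't used it, here's what that looks like. On FreeBSD the GNU coreutils binaries get a "g" prefix, so GNU dd becomes gdd (the package name and the small test-file size below are just for illustration):

```shell
# On FreeBSD: pkg install coreutils   (GNU dd is then available as "gdd")
# conv=fdatasync forces the data out to disk before dd reports a rate,
# so the number isn't inflated by the page cache.
dd if=/dev/zero of=ddtest.bin bs=64k count=256 conv=fdatasync
# 256 * 64 KiB = 16 MiB should have been written:
wc -c ddtest.bin
```

Without conv=fdatasync (or a trailing sync), a small write like this can land entirely in RAM and report a meaninglessly high rate.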
KVM with FreeBSD is nice if you install virtio.
Host Node
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.10736 s, 346 MB/s
FreeBSD VPS
gdd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 9.92037 s, 108 MB/s
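For reference, on a GENERIC kernel the virtio modules shown in the OP's kldstat can be loaded at boot. A sketch of the /boot/loader.conf entries (standard FreeBSD module-loading knobs, names matching the modules listed above):

```
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"
```

One caveat: with virtio_blk in use the disk typically appears as vtbd0 rather than ada0, so /etc/fstab has to be updated to match.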
Hmm, it is still a massive drop.
When the choice is between a dedicated server ($$$$) and a VPS ($), I will probably have to make do. FreeBSD jails would be the alternative, but nothing beats full control over the server. I'm only curious what the speeds would look like with other VPSes actively running, since my KVM node is completely empty.
Xen-HVM is not that good. Go for true virtualization - KVM.
Xen-HVM is true virtualization?
No, the KVM is.
How do you figure, when both use qemu to provide the visualization layer?
@mitgib: I use MilkDrop for MY visualization
The problem may be that the paravirtualized drivers which optimize performance are built into the FreeBSD kernel for KVM but require the compilation of a custom kernel for HVM.
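For anyone taking the custom-kernel route instead of loading modules, a sketch of the kernel config lines that compile the virtio drivers in statically on FreeBSD 9/10 (device names inferred from the modules in the kldstat above; verify against sys/conf/NOTES for your source tree):

```
device  virtio          # core virtio bus support
device  virtio_pci      # virtio PCI transport
device  virtio_blk      # paravirtual block device (disk appears as vtbd0)
device  vtnet           # paravirtual network interface
device  virtio_balloon  # memory ballooning
```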
I was referring to the fact that qhoster said HVM wasn't full visualization, when it is.
You caught the fever too, or am I missing something?
Those benchmarks were WITH a custom kernel, which did slightly improve my speeds! I was hoping to get at least 50-100 MB/s though!
Anything else I can do? Or do I just write this LEB off and look for a KVM solution?
Are you on XEN-HVM or KVM? Because:
Those look like modules for KVM. If you have those loaded on an HVM, no wonder they're not doing much good!
Edit: Xen modules are typically called blkfront|netfront|xen-platform-pci, or for older versions, xennet|xenblk.
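A quick way to tell which paravirtual drivers a guest actually has loaded is to filter the module list. Here a fragment of the OP's kldstat output is pasted into a variable purely for illustration; on a live box you would just pipe kldstat straight into the grep:

```shell
# Fragment of the kldstat output from the first post, for illustration.
kld='2  5 0xc0ba0000 428c virtio.ko
3  1 0xc0ba5000 557c virtio_pci.ko
4  1 0xc0bab000 50b8 virtio_blk.ko'
# virtio_* matches mean KVM paravirtual drivers are loaded;
# blkfront/netfront/xennet matches would indicate Xen PV drivers instead.
echo "$kld" | grep -Ec 'virtio|blkfront|netfront|xennet'
```

On real hardware: `kldstat | grep -E 'virtio|blkfront|netfront|xennet'`. If that prints nothing under KVM, the guest is running fully emulated devices, which would explain dd numbers like the OP's.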
Your best bet.
OS X auto-correct.
Go on iOS or OS X and type virtualization, it corrects it