Achieve 2.0 GB/s + I/O Speed With ZFS On Proxmox For Your Dedibox Limited 1215 Server
Can't say I've tested this extensively or know much about the reliability or accuracy of the results... but I was so amazed at the dd test from just choosing ZFS in RAID 0 in the Proxmox install that I thought I would post it here.
Just a word of caution: this is only for testing and probably has no real value in a live server environment, and playing around with RAID or choosing the wrong RAID type may lead to drastic data loss. So please, only try this on a test server!
The Wiki:
http://pve.proxmox.com/wiki/Storage:_ZFS
http://en.wikipedia.org/wiki/ZFS
Specs:
Dedibox Limited 1215
Intel Xeon E3-1220 V2 @ 3.10 GHz
16 GB RAM
2 x 450GB SAS 15K
HP Smart Array P410 Controller 256MB Cache with BBU
Procedure:
Well, not much to do; my settings are as follows:
In my online.net console the hardware RAID is set to RAID 0.
Proceed to do a bare-metal install of Proxmox from iLO, choosing ZFS RAID 0 as the filesystem type as opposed to the default ext3. You can choose this setting under the Options button in the Proxmox installer.
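If you want to sanity-check what the installer set up afterwards, something like the following should show the pool and its datasets (rpool as the pool name is the usual Proxmox default, but that part is an assumption on my side):

# show the pool layout the installer created
zpool status
# list the ZFS datasets and their mountpoints
zfs list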
Note:
In RAID 0 with ext3 I would usually achieve a dd of 230 MB/s to 290 MB/s.
Sample Results:
ZFS RAID 0
dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.416202 s, 2.6 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.449092 s, 2.4 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.499102 s, 2.2 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.475973 s, 2.3 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.46622 s, 2.3 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.426818 s, 2.5 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.439555 s, 2.4 GB/s
Comments
OpenVZ with ploop?
The test was done on the host Proxmox node so no virtualization.
Oh sorry, I missed that. Pretty interesting results then. How big is the cache on the RAID card? Just wondering what you would get with bs=128 count=32k?
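In case anyone wants to run that exact variant, it would be something along these lines (I'm assuming bs=128 was meant as 128k, which writes 4 GB and should blow well past a 256 MB controller cache):

dd if=/dev/zero of=iotest bs=128k count=32k conv=fdatasync && rm -f iotest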
Great minds think alike :P
Oh god, sorry, I need to learn to read and retain the information already posted; you said it was a 256 MB cache, sorry.
Very impressive then.
Results from an OVZ CT.. not sure what ploop is so probably not used.
Host Proxmox
From an OVZ CT
Seems like not much difference from OVZ to non-virtualized.
trying this on home pc. looks good ^
time to try it on KVM
(in vmware workstation)
That's because OVZ is only a chroot, I would imagine.
I was thanking you for the impressive comment not the other.. would not want you to think I was being insulting...:p
ZFS compression I guess? Or something related to the write cache.
Not sure really... I did enable write cache on both the drive and the RAID card but it did not make a difference on ext3... the ratio was set to 25 read / 75 write... at least I think that's what it was.
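For anyone wanting to check the compression/caching angle, ZFS will report those settings directly; a quick look would be something like this (rpool as the pool name is just the Proxmox default, adjust as needed):

# show whether compression, the ARC cache and sync behaviour are at their defaults
zfs get compression,primarycache,sync rpool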
For what it's worth, the dedi does feel like it performs faster... in WinSCP, browsing folders is definitely quicker... maybe not RamNode quick, but that could be because this is in France vs Atlanta.
Wonder how quick ZFS would be on a software RAID RAM drive.
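Purely as a curiosity, ZFS can use plain files as vdevs, so a throwaway RAM-backed pool is easy to sketch (the names and 1 GB sizes below are made up for illustration):

# create two 1 GB backing files in tmpfs (RAM)
truncate -s 1G /dev/shm/zdisk1 /dev/shm/zdisk2
# stripe them into a scratch pool and run the same dd against it
zpool create ramtest /dev/shm/zdisk1 /dev/shm/zdisk2
dd if=/dev/zero of=/ramtest/iotest bs=64k count=16k conv=fdatasync
# tear it down afterwards
zpool destroy ramtest
rm -f /dev/shm/zdisk1 /dev/shm/zdisk2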
please copy a 1 GB iso of something from disk to disk and get the time this will take, thanks ;-)
What you are asking of me seems like a lot of work and I don't have another drive since it's 2 drives in RAID 0... wouldn't dd be the same thing? It's 1 GB copied.
FWIW ZFS can be configured to have a lazily flushed write cache, so a 1 GB write isn't proving anything (assuming free RAM >= 1 GB). Secondly, /dev/zero is eminently compressible: you're essentially benchmarking how quickly the CPU can compress a giant stream of zeroes, rather than disk I/O.
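One way to take compression and caching mostly out of the picture is to write pre-generated random data and force a flush at the end; a rough sketch (the file sizes here are arbitrary):

# build an incompressible 1 GB source file first (this step is CPU-bound, not timed)
dd if=/dev/urandom of=randfile bs=1M count=1024
# now time writing that data back out, flushing before dd reports the speed
dd if=randfile of=iotest bs=1M conv=fdatasync && rm -f iotest randfile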
Yeah, I was thinking it must be something like that... hard to believe that simply using ZFS can give such an increase in I/O.
Update:
Maybe this is a more accurate test?
@earl does the HP Smart Array P410 support JBOD? Is it safe to put it in RAID 0 and use ZFS?
@souvarine RAID 0 means not safe IMO
RAID 0 of one drive is safe (you lose only the SMART pass-through, but the controller monitors that). The P410(i) has no HBA support (the mainboard most likely has SATA ports, but these are usually not cabled and only "designed" for DVD drives), so this is the only solution anyway.
No, it's a RAID controller and should NOT be used with ZFS. You need an HBA, not a RAID controller, for proper operation. It will work, but ZFS is designed to interrogate the disk itself rather than a RAID controller.
I don't believe it does... but I've not accessed the RAID card's menu because I don't think we are supposed to.
As @MarkTurner mentioned, ZFS is supposed to have direct access to the drive for it to function correctly, not with an intermediary like a RAID card between ZFS and the HD, so this method is definitely not recommended, just for testing.
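One quick way to see the layering problem is that the pool sits on the controller's logical volume instead of the two SAS drives; something like the following (the exact device names are only what you'd expect, not verified on this box) should show a single logical device in the pool rather than the individual disks:

# list the block devices the OS actually sees (the P410 exposes one logical drive)
lsblk -o NAME,SIZE,MODEL
# show which device(s) the pool is built on
zpool status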
Just to add, the dd stats may be great for dd p0rn, but ZFS does not necessarily improve I/O performance.
What I meant was: please simply copy a normal file/ISO of ~1 GB (just some real data) on your HDD/RAID from one folder to another and measure the time taken.
First create a dummy file filled with random data (a command for this is sketched below):
(the time this takes probably depends much more on CPU than HDD speed)
Now just copy this dummy to another file (again, see the sketch below):
and have a look how long this takes in terms of real-world performance...
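The commands being asked for would look roughly like this (the 1 GB size, block size and file names are my guesses, not from the original post):

# create ~1 GB of random data (mostly CPU-bound)
dd if=/dev/urandom of=dummy.bin bs=1M count=1024
# copy it and time the copy including a final sync, for a real-world figure
time sh -c 'cp dummy.bin dummy-copy.bin && sync'
rm -f dummy.bin dummy-copy.bin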
@Falzo
Ah... sorry, I may have misunderstood. Unfortunately I've already reinstalled the server because, with the iLO install of Proxmox, IPv6 does not work.
If anyone here is testing ZFS maybe they can post the results..