Achieve 2.0 GB/s + I/O Speed With ZFS On Proxmox For Your Dedibox Limited 1215 Server

Can't say I've tested this extensively, or that I know much about the reliability or accuracy of the results, but I was so amazed by the dd numbers from simply choosing ZFS RAID 0 in the Proxmox installer that I thought I would post them here.

Just a word of caution: this is only for testing and probably has no real value in a live server environment, and playing around with RAID or choosing the wrong RAID type may lead to drastic data loss. So please, only try this on a test server!

The Wiki:


http://pve.proxmox.com/wiki/Storage:_ZFS

http://en.wikipedia.org/wiki/ZFS

Specs:

Dedibox Limited 1215

  • Intel Xeon E3-1220 V2 @ 3.10 GHz

  • 16 GB RAM

  • 2 x 450GB SAS 15K

  • HP Smart Array P410 Controller 256MB Cache with BBU

Procedure:


Well, not much to do; my settings are as follows:

  • In my online.net console, my hardware RAID is set to RAID 0

  • Proceed to do a bare-metal install of Proxmox from iLO, choosing ZFS RAID 0 as the filesystem as opposed to the default ext3. You can choose this setting under the Options button in the Proxmox installer (see the quick check below).
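
After the install, a quick way to confirm what you actually got (a minimal sketch, assuming the installer created its default rpool pool; adjust the name if yours differs):

zpool status                          # pool layout and the devices backing it
zfs get compression,recordsize rpool  # compression (often on by default) makes /dev/zero tests look faster than real I/O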

Note:


In RAID 0 with ext3 I would usually achieve a dd result of 230 MB/s to 290 MB/s.

Sample Results:


ZFS RAID 0


dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.416202 s, 2.6 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.449092 s, 2.4 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.499102 s, 2.2 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.475973 s, 2.3 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.46622 s, 2.3 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.426818 s, 2.5 GB/s

dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 0.439555 s, 2.4 GB/s
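
If you want a rougher lower bound, a hedged variant with oflag=dsync forces every block to be flushed before dd moves on (still /dev/zero, so still compressible, but the RAM and controller caches can't absorb the whole gigabyte):

dd if=/dev/zero of=iotest bs=64k count=16k oflag=dsync && rm -fr iotest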

Thanked by dgprasetya

Comments

  • AnthonySmith Member, Patron Provider

    OpenVZ with ploop?

    Thanked by jar
  • earl Member

    @AnthonySmith said:
    OpenVZ with ploop?

    The test was done on the host Proxmox node so no virtualization.

  • AnthonySmith Member, Patron Provider

    Oh sorry, I missed that. Pretty interesting results then. How big is the cache on the RAID card? Just wondering what you would get with bs=128k count=32k?

  • jar Patron Provider, Top Host, Veteran

    @AnthonySmith said:
    OpenVZ with ploop?

    Great minds think alike :P

    Thanked by AnthonySmith, netomx
  • AnthonySmith Member, Patron Provider

    Oh god, sorry, I need to learn to read and retain the information already posted; you said it was 256 MB cache, sorry :s

    very impressive then.

    Thanked by earl, netomx
  • earl Member

    Results from an OVZ CT. Not sure what ploop is, so it's probably not in use (a quick check is below the output).

    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.47527 s, 2.3 GB/s
    
    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.409548 s, 2.6 GB/s
    
    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.50802 s, 2.1 GB/s
    
    dd if=/dev/zero of=iotest bs=64k count=16k conv=fdatasync && rm -fr iotest
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 0.417439 s, 2.6 GB/s
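
    If anyone wants to check, a rough way to tell whether a CT is on ploop rather than simfs (assuming stock OpenVZ paths, with CTID 101 purely as an example):

    grep -i VE_LAYOUT /etc/vz/vz.conf         # global default layout: simfs or ploop
    grep -i VE_LAYOUT /etc/vz/conf/101.conf   # per-CT override, if any
    ls /vz/private/101/root.hdd 2>/dev/null   # ploop CTs keep a root.hdd disk image here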
    
  • earl Member

    @AnthonySmith said:
    Oh sorry, I missed that. Pretty interesting results then. How big is the cache on the RAID card? Just wondering what you would get with bs=128k count=32k?

    Host Proxmox

    
    dd if=/dev/zero of=iotest bs=128k count=32k conv=fdatasync && rm -fr iotest
    32768+0 records in
    32768+0 records out
    4294967296 bytes (4.3 GB) copied, 1.01851 s, 4.2 GB/s
    
    dd if=/dev/zero of=iotest bs=128k count=32k conv=fdatasync && rm -fr iotest
    32768+0 records in
    32768+0 records out
    4294967296 bytes (4.3 GB) copied, 0.96397 s, 4.5 GB/s
    
    dd if=/dev/zero of=iotest bs=128k count=32k conv=fdatasync && rm -fr iotest
    32768+0 records in
    32768+0 records out
    4294967296 bytes (4.3 GB) copied, 0.98979 s, 4.3 GB/s
    

    From an OVZ CT

    
    dd if=/dev/zero of=iotest bs=128k count=32k conv=fdatasync && rm -fr iotest
    32768+0 records in
    32768+0 records out
    4294967296 bytes (4.3 GB) copied, 1.01666 s, 4.2 GB/s
    
    dd if=/dev/zero of=iotest bs=128k count=32k conv=fdatasync && rm -fr iotest
    32768+0 records in
    32768+0 records out
    4294967296 bytes (4.3 GB) copied, 0.963894 s, 4.5 GB/s
    
    dd if=/dev/zero of=iotest bs=128k count=32k conv=fdatasync && rm -fr iotest
    32768+0 records in
    32768+0 records out
    4294967296 bytes (4.3 GB) copied, 0.987148 s, 4.4 GB/s
    

    Seems there's not much difference between OVZ and non-virtualized..

  • Trying this on my home PC. Looks good ^

    Time to try it on KVM

    (in VMware Workstation)

  • sc754 Member

    @earl said:

    That's because OVZ is basically just a chroot, I would imagine.

    Thanked by earl
  • earl Member
    edited April 2015

    @AnthonySmith said:
    Oh god, sorry, I need to learn to read and retain the information already posted; you said it was 256 MB cache, sorry :s

    very impressive then.

    I was thanking you for the "very impressive" comment, not the other one.. wouldn't want you to think I was being insulting :p

  • ZFS compression I guess? Or something related to the write cache.

    Thanked by netomx
  • earl Member
    edited April 2015

    @yomero said:
    ZFS compression I guess? Or something related to the write cache.

    Not sure really.. I did enable write cache on both the drives and the RAID card, but it did not make a difference on ext3. The ratio was set to 25 read and 75 write.. at least I think that's what it was (a quick way to check the controller is below).

    For what it's worth, the dedi does feel like it performs faster.. in WinSCP, browsing folders is definitely quicker. Maybe not RamNode quick, but that could be because this is in France vs Atlanta.
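
    If anyone wants to double-check the controller side, a rough sketch using HP's CLI tool (assuming hpacucli is installed; the exact slot and output wording may differ):

    hpacucli ctrl all show config detail | grep -iE 'cache|battery|ratio'   # cache size, accelerator ratio, BBU status
    hpacucli ctrl all show status                                           # overall controller and cache status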

  • earl Member

    Wonder how quick ZFS would be on a software RAID RAM drive.

  • Falzo Member

    Please copy a ~1 GB ISO or similar from disk to disk and time how long it takes, thanks ;-)

  • earl Member
    edited April 2015

    @Falzo said:
    Please copy a ~1 GB ISO or similar from disk to disk and time how long it takes, thanks ;-)

    What you are asking of me seems like a lot of work :p and I don't have another drive since it's 2 drives in RAID 0.. wouldn't dd be the same thing? It's 1 GB copied.

  • FWIW ZFS can be configured to have a lazily flushed write cache, so a 1 GB write isn't proving anything (assuming free RAM >= 1 GB). Secondly, /dev/zero is eminently compressible: you're essentially benchmarking how quickly the CPU can compress a giant stream of zeroes, rather than disk I/O.

    Thanked by earl, yomero, deadbeef
  • earl Member
    edited April 2015

    @overclockwise said:
    FWIW ZFS can be configured to have a lazily flushed write cache, so a 1 GB write isn't proving anything (assuming free RAM >= 1 GB). Secondly, /dev/zero is eminently compressible: you're essentially benchmarking how quickly the CPU can compress a giant stream of zeroes, rather than disk I/O.

    Yeah, I was thinking it must be something like that.. Hard to believe that simply using ZFS can give such an increase in I/O.

    Update:

    Maybe this is a more accurate test?

    hdparm -t --direct /dev/sda
    
    /dev/sda:
     Timing O_DIRECT disk reads: 1172 MB in  3.00 seconds = 390.13 MB/sec
    
  • @earl does the HP Smart Array P410 support JBOD? Is it safe to put it in RAID 0 and use ZFS?

  • @souvarine RAID 0 means not safe IMO

    Thanked by souvarine
  • RAID 0 of one drive is safe (you lose only the SMART pass-through, but the controller monitors that). The P410(i) has no HBA support (the mainboard most likely has SATA ports, but these are usually not cabled and only "designed" for DVD drives), so this is the only solution anyway.

    Thanked by earl, comXyz, souvarine
  • souvarine said: does the HP Smart Array P410 support JBOD? Is it safe to put it in RAID 0 and use ZFS?

    No, it's a RAID controller and should NOT be used with ZFS. You need an HBA, not a RAID controller, for proper operation. It will work, but ZFS is designed to interrogate the disks directly rather than go through a RAID controller.

    Thanked by souvarine, earl, comXyz
  • earl Member
    edited April 2015

    @souvarine said:
    earl does the HP Smart Array P410 support JBOD? Is it safe to put it in RAID 0 and use ZFS?

    I don't believe it does.. but I've not accessed the RAID card's menu because I don't think we are supposed to..

    As @MarkTurner mentioned, ZFS is supposed to have direct access to the drives to function correctly, not with an intermediary like a RAID card sitting between ZFS and the disks, so this method is definitely not recommended; it's just for testing (a rough sketch of the proper whole-disk approach is below).

    Just to add, the dd stats may be great for dd p0rn, but ZFS does not necessarily improve actual I/O performance.
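
    For reference, with a proper HBA the usual approach would be to give ZFS the whole disks by their stable IDs, roughly like this (the device IDs below are placeholders, not the actual ones on this box):

    zpool create tank mirror /dev/disk/by-id/scsi-DISK1 /dev/disk/by-id/scsi-DISK2   # hypothetical disk IDs
    zpool status tank                                                                # confirm ZFS sees both disks directly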

    Thanked by souvarine
  • Falzo Member

    What I meant was: please simply copy a normal file/ISO of ~1 GB (just some real data) on your HDD/RAID from one folder to another and measure the time it takes.

    First create a dummy file filled with random data:

    dd if=/dev/urandom of=iotest bs=64k count=16k

    (the time this takes probably depends much more on CPU than HDD speed)

    Now just copy this dummy file to another file:

    dd if=iotest of=iotest2

    and have a look at how long that takes, in terms of real-world performance... (a variant with fdatasync is sketched below)
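
    For instance, something along these lines should keep the page cache from hiding the copy (same dummy file as above; conv=fdatasync makes dd wait for the data to actually reach disk before reporting a time):

    dd if=iotest of=iotest2 bs=1M conv=fdatasync   # the reported time includes flushing to disk
    rm -f iotest iotest2                           # clean up the dummy files afterwards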

  • earl Member

    @Falzo

    Ah, sorry, I may have misunderstood. Unfortunately I've already reinstalled the server because IPv6 does not work with the iLO install of Proxmox.

    If anyone here is testing ZFS maybe they can post the results..
