Anyone here with 2TB or 4TB drives at Hetzner?

Amitz Member
edited June 2016 in Help

I wonder if anyone here has a server at Hetzner with 2TB or 4TB drives in software RAID-1 and would be willing to share the IO those drives are capable of delivering?

The infamous "dd" test and (if possible) "ioping" results would be just great. I already have servers with their 3TB drives and would love to see whether the other models are a tad faster.

Thanks a lot in advance & kind regards
Amitz

P.S.: For comparison, here are the results of the 3TB drives:

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.56555 s, 96.5 MB/s

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.67656 s, 94.6 MB/s

dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 5.59846 s, 95.9 MB/s

ioping -c 50 /
--- / (ext4 /dev/md2) ioping statistics ---
50 requests completed in 49.1 s, 693 iops, 2.71 MiB/s
min/avg/max/mdev = 195 us / 1.44 ms / 19.1 ms / 4.01 ms

ioping -c 50 /
--- / (ext4 /dev/md2) ioping statistics ---
50 requests completed in 49.0 s, 2.46 k iops, 9.62 MiB/s
min/avg/max/mdev = 195 us / 405 us / 7.36 ms / 994 us

ioping -c 50 /
--- / (ext4 /dev/md2) ioping statistics ---
50 requests completed in 49.1 s, 811 iops, 3.17 MiB/s
min/avg/max/mdev = 198 us / 1.23 ms / 15.4 ms / 3.16 ms

For those who care:
You can now find me at https://talk.lowendspirit.com or https://www.hostballs.com

Comments

  • rds100 Member

    What 3TB drives are these? 7200RPM or 5400/5900RPM? Are your partitions aligned on 4K?
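
    Alignment can be checked without repartitioning. A minimal sketch, assuming the disk is /dev/sda (substitute your own device): a start sector divisible by 8 means the partition begins on a 4 KiB boundary, since start sectors are counted in 512-byte units.

```shell
# A partition is 4K-aligned when its start sector (counted in
# 512-byte units) is divisible by 8. /dev/sda is an assumption;
# substitute your own disk.
for f in /sys/block/sda/sda*/start; do
    [ -e "$f" ] || continue
    start=$(cat "$f")
    if [ $((start % 8)) -eq 0 ]; then
        echo "${f%/start}: start sector $start is 4K-aligned"
    else
        echo "${f%/start}: start sector $start is NOT 4K-aligned"
    fi
done
```

    A start sector of 2048 (the usual modern default) is aligned; the old DOS default of 63 is not.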


  • rm_ Member

    To get meaningful ioping results, use ioping -R /.
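
    For reference, -R is ioping's seek-rate mode: it issues random reads for about three seconds and reports total IOPS, which sidesteps the per-request averages skewed by caching in the numbers above. A guarded sketch, targeting / as in the suggestion:

```shell
# ioping -R measures seek rate: ~3 s of random reads, reporting
# raw IOPS rather than per-request latency. Guarded so the sketch
# still runs on systems where ioping is not installed.
if command -v ioping >/dev/null 2>&1; then
    ioping -R /
else
    echo "ioping not installed"
fi
```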

  • loot Member

    2 x 2TB, but in LVM

    dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB, 512 MiB) copied, 2.08897 s, 257 MB/s

    dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB, 512 MiB) copied, 2.06934 s, 259 MB/s

    dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB, 512 MiB) copied, 2.09373 s, 256 MB/s

    --- / (ext4 /dev/mapper/vg0-root) ioping statistics ---
    50 requests completed in 49.1 s, 409 iops, 1.60 MiB/s
    min/avg/max/mdev = 224 us / 2.44 ms / 21.4 ms / 5.53 ms

    --- / (ext4 /dev/dm-0) ioping statistics ---
    50 requests completed in 49.1 s, 741 iops, 2.90 MiB/s
    min/avg/max/mdev = 201 us / 1.35 ms / 15.9 ms / 3.13 ms

    Both are 2 x 2TB in LVM.



  • vfuse Member, Provider

    4TB drives, though not in RAID1 (JBOD)

    dd bs=1M count=512 if=/dev/zero of=test conv=fdatasync && rm -f test
    512+0 records in
    512+0 records out
    536870912 bytes (537 MB) copied, 5.28714 s, 102 MB/s
    
    --- / (ext4 /dev/disk/by-uuid/b3b14b31-5aca-4a18-8379-9c80876be454) ioping statistics ---
    50 requests completed in 49.2 s, 316 iops, 1.2 MiB/s
    min/avg/max/mdev = 66 us / 3.2 ms / 81.6 ms / 12.0 ms
    


  • Amitz Member

    Thank you, guys!


  • ehab Member

    Didn't read it all, but did you try enabling the write cache?

    hdparm -W /dev/sdX

    Replace X with your drive and run the tests again.

  • Amitz Member
    edited June 2016

    @ehab said:
    Didn't read it all, but did you try enabling the write cache?

    hdparm -W /dev/sdX

    Replace X with your drive and run the tests again.

    No change. I guess the write cache was already enabled...
    But to be precise: it's not that I am unhappy with the disk speed. I just wanted to know whether a model change would also bring a benefit in disk IO. Obviously not (or at least not significantly).
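
    Verifying the cache state explicitly can be sketched like this. Note that hdparm -W with no value only reports the current setting, while -W1 would turn it on (needs root); /dev/sda is an assumption here.

```shell
# "hdparm -W <device>" with no value reports the current write-cache
# state; "hdparm -W1 <device>" would enable it (run as root).
# Guarded so the sketch runs even where hdparm is absent;
# /dev/sda is an assumption - substitute your drive.
if command -v hdparm >/dev/null 2>&1; then
    hdparm -W /dev/sda
else
    echo "hdparm not installed"
fi
```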

