Proper way to benchmark ssds

What would be the best way to get real results, bypassing the RAID card's cache, when measuring read and write IOPS?
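One common approach (a sketch, assuming fio is installed; the file path and sizes are placeholders) is to use direct I/O, which bypasses the OS page cache. Note this does not bypass the controller's own cache; that still has to be disabled in the controller config:

```shell
# Random-read IOPS at 4 KiB with O_DIRECT (bypasses the OS page cache;
# the RAID card's writeback/read-ahead cache must be disabled separately).
fio --name=randread --filename=/mnt/test/fio.bin --size=4G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting

# Same idea for write IOPS:
fio --name=randwrite --filename=/mnt/test/fio.bin --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
```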

Comments

  • prometeus Member, Host Rep

    I would start by disabling writeback and all caching in the RAID card config. You could also export a single disk without any RAID... BTW, which controller is it?

  • @prometeus said:
    I would start by disabling writeback and all caching in the RAID card config. You could also export a single disk without any RAID... BTW, which controller is it?

    I mean, I still want the caches enabled, but the benchmark should show the difference between RAID0, RAID10, RAID5, and RAID6, because ioping and dd are useless for that. See for yourself:

    parts:
    8 x Intel 240GB 520 @ 3Gbps
    1 x LSI 9265-8i + BBU, latest firmware to date
    
    raid10
    dd: 981 MB/s
    ioping -c10: 10 requests completed in 9003.2 ms, 10526 iops, 41.1 mb/s
    ioping -R: 27973 requests completed in 3000.1 ms, 21280 iops, 83.1 mb/s
    ioping -RD: 29363 requests completed in 3000.1 ms, 22464 iops, 87.8 mb/s
    
    raid6
    dd: 961 MB/s
    ioping -c10: 10 requests completed in 9002.4 ms, 8271 iops, 32.3 mb/s
    ioping -R: 30889 requests completed in 3000.1 ms, 24451 iops, 95.5 mb/s
    ioping -RD: 32357 requests completed in 3000.1 ms, 26292 iops, 102.7 mb/s
    
    raid5
    dd: 932 MB/s
    ioping -c10: 10 requests completed in 9003.5 ms, 7599 iops, 29.7 mb/s
    ioping -R: 30709 requests completed in 3000.1 ms, 24142 iops, 94.3 mb/s
    ioping -RD: 31954 requests completed in 3000.1 ms, 25530 iops, 99.7 mb/s
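    As a sanity check (assuming ioping was left at its default 4 KiB request size), the mb/s column is just the iops column times 4 KiB, so the two aren't independent measurements:

    ```shell
    # ioping's default request size is 4 KiB, so its mb/s figure is just
    # iops * 4096 bytes, expressed in MiB/s:
    awk 'BEGIN { printf "%.1f\n", 10526 * 4096 / 1048576 }'   # raid10 -c10: 41.1
    awk 'BEGIN { printf "%.1f\n", 21280 * 4096 / 1048576 }'   # raid10 -R:   83.1
    ```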
    
  • Write files that are much bigger than the cache? At least 10x bigger? You won't get perfectly accurate results, but it should be enough to compare the different RAID setups.
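    For instance (a sketch, assuming a controller cache around 1 GB; the output path is hypothetical), a direct-I/O write of ~10 GB blows well past the cache and forces a flush at the end:

    ```shell
    # Write ~10 GB with direct I/O so neither the page cache nor (mostly)
    # the controller's writeback cache can absorb the whole run;
    # conv=fsync forces a final flush before dd reports its rate.
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=10240 oflag=direct conv=fsync
    ```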

  • Even more crap: the same command run back to back on a completely idle dedicated server:

    [root@ssd1 ioping-0.6]# ./ioping -R .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    28883 requests completed in 3000.1 ms, 20999 iops, 82.0 mb/s
    min/avg/max/mdev = 0.0/0.0/16.4/0.1 ms
    [root@ssd1 ioping-0.6]# ./ioping -R .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26144 requests completed in 3000.0 ms, 17134 iops, 66.9 mb/s
    min/avg/max/mdev = 0.0/0.1/0.6/0.0 ms
    [root@ssd1 ioping-0.6]# ./ioping -R .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26087 requests completed in 3000.0 ms, 17061 iops, 66.6 mb/s
    min/avg/max/mdev = 0.0/0.1/0.6/0.0 ms
    [root@ssd1 ioping-0.6]# ./ioping -R .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    30274 requests completed in 3000.1 ms, 23385 iops, 91.3 mb/s
    min/avg/max/mdev = 0.0/0.0/0.6/0.0 ms
    
  • serverian Member
    edited August 2013

    This seems somewhat consistent:

    [root@ssd1 ioping-0.6]# ./ioping -RD .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26907 requests completed in 3000.1 ms, 17626 iops, 68.9 mb/s
    min/avg/max/mdev = 0.0/0.1/0.6/0.0 ms
    [root@ssd1 ioping-0.6]# ./ioping -RD .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26850 requests completed in 3000.1 ms, 17550 iops, 68.6 mb/s
    min/avg/max/mdev = 0.0/0.1/1.8/0.0 ms
    [root@ssd1 ioping-0.6]# ./ioping -RD .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26718 requests completed in 3000.0 ms, 17383 iops, 67.9 mb/s
    min/avg/max/mdev = 0.0/0.1/1.7/0.0 ms
    [root@ssd1 ioping-0.6]# ./ioping -RD .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26791 requests completed in 3000.0 ms, 17479 iops, 68.3 mb/s
    min/avg/max/mdev = 0.0/0.1/9.3/0.1 ms
    [root@ssd1 ioping-0.6]# ./ioping -RD .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26786 requests completed in 3000.0 ms, 17476 iops, 68.3 mb/s
    min/avg/max/mdev = 0.0/0.1/1.7/0.0 ms
    [root@ssd1 ioping-0.6]# ./ioping -RD .
    
    --- . (ext4 /dev/sda1) ioping statistics ---
    26851 requests completed in 3000.1 ms, 17551 iops, 68.6 mb/s
    min/avg/max/mdev = 0.0/0.1/1.7/0.0 ms
    
  • prometeus Member, Host Rep

    Sorry, I didn't understand. If you aren't in production, you can use the MegaRAID utilities to tweak the RAID parameters before and after each test.

    ioping showed me strange results on some SSDs, so I stopped using it.

    Try fio or iozone and simulate different use cases.
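    For example (a sketch; the file path and the 70/30 split are just illustrative), a mixed random read/write fio job gets much closer to a real workload than dd:

    ```shell
    # 70% random reads / 30% random writes at 4 KiB, direct I/O --
    # roughly the access pattern a busy virtualization node sees.
    fio --name=mixed --filename=/mnt/test/fio.bin --size=4G \
        --rw=randrw --rwmixread=70 --bs=4k --iodepth=16 \
        --ioengine=libaio --direct=1 --runtime=60 --time_based
    ```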

    @Master_Bo is the specialist in SSD tests; maybe he can give some suggestions :-D

  • @prometeus said:
    Sorry, I didn't understand. If you aren't in production, you can use the MegaRAID utilities to tweak the RAID parameters before and after each test.

    ioping showed me strange results on some SSDs, so I stopped using it.

    Try fio or iozone and simulate different use cases.

    Master_Bo is the specialist in SSD tests; maybe he can give some suggestions :-D

    No, it's not in production. The node is lying around with KVM and an OS on a USB stick, which is perfect for testing. I'll email Konstantin and see if he can help me out! I'll post the results here for everyone to benefit from!

  • Use proper testing tools. That said, you're really not testing anything if you're running 2,000 benchmarks at once. Even if TRIM works (behind a RAID card, though...), it means you've effectively knocked the SSDs into a heavily used state.

    For starters, you can use hdparm. There are also fs_mark, iozone, etc.
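    hdparm only gives a quick sequential read number, so treat it as a baseline rather than a real benchmark; the device path here is just an example:

    ```shell
    # -t: timed sequential reads from the device (bypasses the page cache);
    # -T: timed reads from the cache, as a comparison baseline. Needs root.
    hdparm -tT /dev/sda
    ```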

  • @Master_Bo is currently testing the system. He'll publish the results once all the tests are done.

  • Thanks for the kind words. In fact, I run quite a number of tests whenever I consider using a server (VPS/VDS, DS) for my sites.

    Basically, I chose iozone as both an informative and a real-life measurement tool. I also use fio (to gather the IOPS scores so dear to providers ;), ioping for historical reasons (I don't think its results are especially realistic, IMO), and the intensive Phoronix I/O tests (dbench, tiobench, etc.), and I run a replica of a real site and test it under different kinds of stress.

    For those interested, I can modify the scripts I use to make the above testing semi-automated (producing the output I use to publish results on my blog, VPSeer).

  • iozone and bonnie++ are good benchmark tools, in my experience.

  • edited August 2013

    @serverian said:
    What would be the best way to get real results, bypassing the RAID card's cache, when measuring read and write IOPS?

    If you need a free SSD KVM for a while to use as a comparison, let me know.
