Comments
I would start by disabling write-back and all caching in the RAID card config. You could also export a single disk without any RAID... BTW, which controller is it?
I mean, I still want the caches, but the benchmark should show the difference between RAID 0, RAID 10, RAID 5, and RAID 6, because ioping and dd are useless for that. See for yourself:
Write files that are much bigger than the cache, at least 10x bigger. You won't get perfectly accurate results, but it should be enough to compare the different RAID setups.
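Something like this, as a sketch (the mount point /mnt/test and the 16 GB size are assumptions — pick a count that is at least 10x your controller's cache):

```shell
# Write a 16 GB file with direct I/O so the page cache is bypassed
# and the controller cache (assumed ~1 GB here) is overwhelmed.
dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=16384 oflag=direct

# Drop the kernel caches, then read the file back with direct I/O.
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/test/bigfile of=/dev/null bs=1M iflag=direct
```

dd will print the throughput for each pass; run it once per RAID level and compare.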
Even worse: the same command run back to back on a completely idle dedicated server:
This seems somewhat consistent:
Sorry, I didn't understand. If you aren't in production, you can use MegaRAID to tweak RAID parameters before and after the tests.
ioping showed strange results on some SSD disks for me, so I stopped using it.
Try fio or iozone instead, simulating different use cases.
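For instance, a pair of fio runs like these (a sketch — the directory /mnt/test, 4 GB file size, and 60 s runtime are assumptions, tune them to your box):

```shell
# 4k random writes with direct I/O, 4 jobs, 60 seconds, combined report.
fio --name=randwrite --ioengine=libaio --rw=randwrite --bs=4k \
    --direct=1 --size=4g --numjobs=4 --runtime=60 --time_based \
    --group_reporting --directory=/mnt/test

# Same parameters for 4k random reads.
fio --name=randread --ioengine=libaio --rw=randread --bs=4k \
    --direct=1 --size=4g --numjobs=4 --runtime=60 --time_based \
    --group_reporting --directory=/mnt/test
```

The IOPS and latency percentiles in fio's summary are far more telling for SSD arrays than a single dd number.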
@Master_Bo is the specialist of ssd tests, maybe he can give some suggestions :-D
No, it's not in production. The node is lying around with KVM and an OS on a USB stick, which is perfect for testing. I'll email Konstantin and see if he can help me out! I'll post the results here for everyone to benefit from!
Use proper testing tools. Having said that, you're really not testing anything if you're running 2000 benchmarks at once. Even if TRIM works (through a RAID card, though...), it means you've effectively knocked the SSD into a heavily used state.
For starters, you can use hdparm. There are also fs_mark, iozone, etc.
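A quick hdparm sanity check looks like this (assuming the array shows up as /dev/sda — substitute your device):

```shell
# -T: cached reads (measures memory/cache bandwidth, a sanity check)
# -t: buffered disk reads (rough sequential read throughput)
hdparm -tT /dev/sda
```

It only measures sequential reads, so treat it as a smoke test before the heavier tools.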
@Master_Bo is currently testing the system. He'll publish the results once all the tests are done.
Thanks for the kind words. In fact, I run quite a number of tests when I consider using a server (VPS/VDS, DS) for my sites.
Basically, I chose iozone as both an informative and a real-life measurement tool. I also use fio (to gather the IOPS scores so dear to providers), ioping for historical reasons (I don't think its results are more or less realistic, IMO), and the Phoronix intensive I/O tests (dbench, tiobench, etc.), as well as running a replica of a real site and testing it under different kinds of stress.
To those interested, I can modify the scripts I use to make the mentioned testing semi-automated (producing the output I use to publish results on my blog, VPSeer).
iozone and bonnie++ are good benchmark tools, in my experience.
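Typical invocations might look like this (a sketch — /mnt/test and the 16 GB working set are assumptions; size it well past your RAM and controller cache):

```shell
# iozone automatic mode: -g caps the max file size at 16 GB,
# -i selects tests (0 = write/rewrite, 1 = read/reread, 2 = random read/write).
iozone -a -g 16g -i 0 -i 1 -i 2

# bonnie++ with a 16 GB working set in /mnt/test;
# -n 0 skips the small-file creation tests, -u sets the user to run as.
bonnie++ -d /mnt/test -s 16g -n 0 -u nobody
```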
If you need a free SSD KVM for a while to use as a comparison, let me know.