
Software RAID 10 vs 6

randvegeta Member, Host Rep
edited November 2017 in General

I'm playing around with some new storage nodes using software RAID. Performance is not at all what I was expecting.

Playing around with RAID 10 and RAID 6. The RAID 10 array has 6 HDDs and 2 SSDs (for cache) while the RAID 6 array is just 6 HDDs with no SSD cache.

I was expecting the RAID 10 array to have significantly better performance, but in practice the two seem very similar.

On both arrays, the sequential write speed is around 350MB/s. My tests mainly focused on raw read/write speeds rather than random I/O. I suspect that with the SSD cache activated, random I/O would be better than without, but comparing sequential read/write speeds, the RAID 10 and RAID 6 arrays performed very similarly.
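
For reference, this is roughly the kind of sequential-write test I mean, as a Python sketch; the mount points are placeholders, and the fsync is there so the page cache doesn't inflate the numbers:

```python
import os
import time

# Placeholder mount points for the two arrays; adjust to your own setup.
TEST_PATHS = ["/mnt/raid10/bench.tmp", "/mnt/raid6/bench.tmp"]
BLOCK = 1024 * 1024          # 1 MiB per write
TOTAL = 4 * 1024**3          # 4 GiB test file

def seq_write_mbps(path):
    """Time one large sequential write, fsync included."""
    buf = os.urandom(BLOCK)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    written = 0
    start = time.monotonic()
    while written < TOTAL:
        written += os.write(fd, buf)
    os.fsync(fd)                       # flush to the array before stopping the clock
    elapsed = time.monotonic() - start
    os.close(fd)
    os.unlink(path)
    return written / elapsed / 1e6     # MB/s

for path in TEST_PATHS:
    print(path, round(seq_write_mbps(path)), "MB/s")
```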

Interestingly, when running a VM over iSCSI connecting to these storage servers, the RAID 6 consistently performed better than the RAID 10. Not significantly so, but it was consistently better by a few tens of MB/s. I have been able to rule out networking issues, and the hardware for both storage servers is identical. Resource usage was also relatively low throughout the tests.

Given the similar performance when testing directly on the servers, I turned the RAID 10 array into a RAID 0. Not too surprisingly, the sequential write speed more than doubled to around 700MB/s. So 350MB/s is not strictly a fundamental hardware limit. But I digress...

Is my RAID 6 performing relatively well, or is my RAID 10 performing relatively poorly? Or is the performance of these two simply not that far apart these days?

Comments

  • You really want providers to give you their formula for free?

    Thanked by 1 WSS
  • Not to mention that a RAID 10 with SSD caching is kind of a waste due to how it works. You'd likely see much better immediate access with RAID 5 and a controller with battery-backed cache. If you really, really, really want to trivially speed things up, you might use an SSD for parity IF IT'S A SOFTWARE RAID, but it's not going to last forever.

    The RAID6 works well because you haven't muddled up the works in a strange way.

    It looks like you just threw random hardware at something and decided to play around.

  • @Hxxx said:
    You really want providers to give you their formula for free?

    It's all in the sauce.

  • @user123 said:

    @Hxxx said:
    You really want providers to give you their formula for free?

    It's all in the sauce.

    Don't tell anyone, but it's just relish, ketchup, mayo, and a little urine.

    Thanked by 1 Hxxx
  • user123 Member
    edited November 2017

    @WSS said:

    @user123 said:

    @Hxxx said:
    You really want providers to give you their formula for free?

    It's all in the sauce.

    Don't tell anyone, but it's just relish, ketchup, mayo, and a little urine.

    "A little urine?" Bullshit. That's the main ingredient.

    Edit: Fixed at @WSS's request to preserve proprietary LEB provider secrets

    Thanked by 2 WSS, Hxxx
  • Sssssh. Don't give away the secret!

  • @WSS said:
    Sssssh. Don't give away the secret!

    Fixed.

    Thanked by 1 Hxxx
  • WSS said: and a little urine.

    Midstream. This is very important.

    Thanked by 3 WSS, Hxxx, bugrakoc
  • Maounique Host Rep, Veteran
    edited November 2017

    It depends on how the striping is done and how many places it can write to at the same time. The controller has a certain bandwidth for writing to the disks; even when the disks are receiving the same data, the traffic through it is doubled in RAID 10, and for small writes in RAID 6 it is even tripled (the data block plus the two parity blocks). I assume you have plain two-way mirrors, not triple mirrors; that would be pretty bonkers. The rough numbers at the end of this comment illustrate the difference.
    Because you have SSD caching too, presumably on the same controller, that is also taking bandwidth, actually making things worse.
    However, this is only for sequential writes. A real-world scenario (on non-storage nodes, where storage speed is important) would involve multiple reads and writes in various places, so bandwidth is no longer the problem but IOPS, and in that case a cache with many IOPS will help a lot.
    Also, your scenario with 6 disks is pretty peculiar, IMO. Over 4 disks you should consider hardware RAID because of the bandwidth issues, especially with SSDs, unless you have a mostly idle setup/storage node: the bus only sends/receives the data once, while in software RAID it gets multiplied by a factor that depends on many things, especially in mirroring scenarios.

    That being said, 350 MB/s for a storage node is decent, and caching it is kind of over the top for storage, because sequential writes are what you expect there and caching does not really make sense, while the wire speed will never come close to saturating the controller, not even an old IDE one.
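
    To put rough numbers on that amplification, here is a quick sketch; the factors are the textbook ones and the 350 MB/s is just the figure quoted earlier in this thread, purely for illustration:

    ```python
    # Rough write-amplification factors; illustrative assumptions, not measurements.
    logical_mbps = 350                      # sequential write speed seen at the filesystem

    factors = {
        "RAID 10": 2.0,                     # every block is written twice (one copy per mirror)
        "RAID 6, small writes": 3.0,        # data block + P parity + Q parity are rewritten
        "RAID 6, full-stripe writes": 6/4,  # 6 disks: 4 data + 2 parity per stripe
    }

    for layout, factor in factors.items():
        print(f"{layout}: ~{logical_mbps * factor:.0f} MB/s through the controller")
    ```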

  • randvegeta Member, Host Rep

    Maounique said: Also, your scenario with 6 disks is pretty peculiar, IMO. Over 4 disks you should consider hardware RAID

    It's due to the limit of 8 drives fitting in the chassis. I've got 6 in RAID 10 and the 2 SSDs in RAID 1, which acts as the cache.

    For the 2nd server, it was 6 drives because my objective was to have comparable capacity on both servers. So the RAID 10 server has 6x4TB and the RAID 6 server has 6x3TB. Both provide approx. 11TB of usable storage in their respective RAIDs (the quick capacity check at the end of this comment shows how both land there).

    Maounique said: That being said, 350 MB/s for a storage node is decent, and caching it is kind of over the top for storage, because sequential writes are what you expect there and caching does not really make sense.

    I'm not looking for crazy levels of performance. To be honest, 350MB/s is probably good enough, as it is already more than twice as fast as a single disk. And if it's really only used for storage, then even 20MB/s would probably still be okay. But the objective is to make it comparable to locally attached disks for general-purpose use. Not performance-oriented, but not ignoring performance altogether either.

    The SSD caching is there to improve general performance. I thought it might help read/write performance a little, but it seemed to make no difference. Probably replacing them with 2 HDDs and making it an 8-drive RAID 10 array would improve performance slightly.
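
    For anyone checking the capacity math, this is a quick sketch of how both layouts land around 11TB usable (assuming the RAID 10 is the standard three mirrored pairs); the TB/TiB conversion is why 12TB shows up as roughly 11 in most tools:

    ```python
    # Quick usable-capacity check: decimal TB as printed on the drive label,
    # converted to TiB, which is what most tools report.
    TB, TiB = 1000**4, 1024**4

    raid10_usable = 6 * 4 * TB / 2      # six 4TB drives, half the raw space lost to mirroring
    raid6_usable = (6 - 2) * 3 * TB     # six 3TB drives, two drives' worth of parity

    for name, usable in [("RAID 10, 6x4TB", raid10_usable),
                         ("RAID 6,  6x3TB", raid6_usable)]:
        print(f"{name}: {usable / TB:.0f} TB = {usable / TiB:.1f} TiB usable")
    ```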

  • Maounique Host Rep, Veteran
    edited November 2017

    randvegeta said: Probably replacing them with 2 HDDs and making it an 8-drive RAID 10 array would improve performance slightly.

    It will, if the problem is bandwidth, because the controller will no longer be writing to the data disks and reading from the two cache SSDs at the same time, with all of that data going back and forth through it during sequential writes.
    However, comparing 6 vs 8 disks in RAID 10 without a cache, there will be zero improvement if the controller's bandwidth is the issue: the data will pass through it at the same speed. The toy model below illustrates this.
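
    A toy model of that bottleneck, with per-disk and controller numbers that are only guesses picked to line up with the figures in this thread, not measurements:

    ```python
    # Sequential write speed is whichever is smaller: what the independent
    # stripe members can absorb, or what the controller can push once write
    # amplification is accounted for.
    def array_seq_write(n_stripes, per_disk_mbps, controller_mbps, write_factor):
        # n_stripes: independent stripe members (a mirrored pair counts as one)
        disk_limit = n_stripes * per_disk_mbps
        controller_limit = controller_mbps / write_factor
        return min(disk_limit, controller_limit)

    PER_DISK = 150      # MB/s per HDD (assumed)
    CONTROLLER = 700    # MB/s of total controller/bus bandwidth (assumed)

    print("6-disk RAID 10:", array_seq_write(3, PER_DISK, CONTROLLER, 2), "MB/s")  # 350
    print("8-disk RAID 10:", array_seq_write(4, PER_DISK, CONTROLLER, 2), "MB/s")  # 350
    print("6-disk RAID 0: ", array_seq_write(6, PER_DISK, CONTROLLER, 1), "MB/s")  # 700
    ```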

  • Just go directly with RAID 10. It is the most stable.

    RAID 1 and 5 are super unstable; the risk of a disk being dropped increases seriously,

    and the stability is even worse than RAID 0. Honestly.

  • @overclock said:
    Just go directly with RAID 10. It is the most stable.

    RAID 1 and 5 are super unstable; the risk of a disk being dropped increases seriously,

    and the stability is even worse than RAID 0. Honestly.

    #dicks
