New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Anyone setup SSD RAID 6?
randvegeta
Member, Host Rep
in General
Has anyone setup a RAID 6 array, only using SSDs?
How is the performance compared to both RAID 10 and single SSDs?
Any drive failures?
Comments
You'll kill drives in that configuration quicker than you would in RAID10 due to the extra writes. Performance should be lower than a RAID10 config with the same number of drives.
Tl;dr : not worth it with ssd
Is the number of additional writes really so much larger? I've read this before, and of course, given the parity drives, there is more work (writes) required to maintain proper parity, but are the additional write operations really so numerous that they make a significant difference in lifespan?
As for performance, yes obviously RAID 10 would be faster, in the same way RAID 10 makes HDDs faster. But I would imagine that RAID 6 SSDs would have a significant performance boost over RAID 6 HDDs, and possibly still better than single disk performance.
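As a back-of-envelope framing of the write-penalty question, the textbook read-modify-write cost of a small write per RAID level can be sketched in Python. These are idealized figures, not measurements from any particular controller, and real controllers with full-stripe writes or caching can do better:

```python
# Idealized per-small-write cost for each RAID level (read-modify-write path).
RAID_IO = {
    # level: (reads, writes) issued per small logical write
    "raid10": (0, 2),  # mirror: write the data to both copies
    "raid5":  (2, 2),  # read old data + parity, write new data + parity
    "raid6":  (3, 3),  # read old data + P + Q, write new data + P + Q
}

def physical_writes(level):
    """Physical writes issued per logical write (what wears the flash)."""
    _, writes = RAID_IO[level]
    return writes

def total_ios(level):
    """Total physical IOs (reads + writes) per logical write."""
    reads, writes = RAID_IO[level]
    return reads + writes

for level in ("raid10", "raid5", "raid6"):
    print(f"{level}: {physical_writes(level)} writes, {total_ios(level)} IOs total")
```

By this naive model RAID 6 issues 1.5x the physical writes of RAID 10 per small write (3 vs 2), plus extra reads, so wear goes up but does not explode.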
I would set up a test machine, but I don't have many SSDs spare, so I was wondering if anyone had any real-world experience with an SSD RAID 6 array. The consensus definitely seems to be that RAID 6 will have worse performance than RAID 10 and that drives will die sooner, but I have not been able to find any actual metrics for this.
Maybe the performance hit is not so large? Maybe the extra disk space makes up for the higher failure rate? Or maybe it's a complete waste of time because maybe RAID 10 HDD performance is comparable, but a lot cheaper.
or use SLC ssd as caching layer over large HDD arrays.
This is what we do already. So the real question is if the SSD cached HDD RAID 10 arrays have better performance than RAID 6 SSDs.
So far, SSD caching seems to have a nice cost/performance ratio, which is feasible to run even at the low end.
But RAID 10 SSDs means only 50% of your disk space is actually usable, and our current Virtuozzo Cloud Storage means only 33%, making SSD storage prohibitively expensive.
If the performance of RAID 6 SSDs is not significantly better than RAID 10 HDDs + SSD caching, then it's just not worthwhile. I suspect this is the case, but I've not found any data on this online. And I don't really fancy spending $2,000 just to test this.
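The usable-capacity trade-off being weighed here can be sketched as follows; the 8x 1 TB drive configuration is a hypothetical for illustration, not OP's actual hardware:

```python
# Usable fraction of raw capacity under the schemes discussed above.
def usable_tb(drives, size_tb, scheme):
    raw = drives * size_tb
    if scheme == "raid10":
        return raw * 0.50               # mirrored pairs: half the raw space
    if scheme == "raid6":
        return (drives - 2) * size_tb   # two drives' worth of parity
    if scheme == "3x-replica":          # Virtuozzo-style triple replication
        return raw / 3
    raise ValueError(f"unknown scheme: {scheme}")

drives, size = 8, 1.0  # hypothetical 8x 1 TB array
for scheme in ("raid10", "raid6", "3x-replica"):
    print(f"{scheme}: {usable_tb(drives, size, scheme):.2f} TB usable "
          f"of {drives * size:.0f} TB raw")
```

At 8 drives, RAID 6 yields 75% usable space versus 50% for RAID 10 and ~33% for triple replication, which is exactly why it looks attractive despite the write penalty.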
Hello
I've got a server with 8x 500 GB Samsung 850 Evos running RAID 6 on an HP PERC RAID card.
Proxmox is installed on it with some light-usage Linux VMs for a corporate customer.
Tell me what kind of tests you want to see, and I'll run them for you.
9.75k IOPS? That's pretty impressive actually.
I assume the RAID card has caching in itself. Does it also have a battery?
Did you by any chance test the same physical setup on RAID 10?
LSI, for example, recommends disabling caching on SSD drives for better performance.
LSI also disables caching by default as soon as the controller recognizes SSDs while creating a new array.
If you don't use shitty drives, it doesn't really matter.
Most people won't exceed the endurance rating on enterprise disks anyway; if a disk actually dies before the endurance is up, it's easy to RMA it and get a new disk for free.
For OP:
Yes, I've run RAID 6 with SSDs in about 600 servers using SM863 960 GB disks. Each array consisted of 22 drives in total, 2 of them being hot spares and the remaining 20 forming the actual RAID.
It gave a usable space of roughly 16 terabytes (actual usable), where a similar RAID 10 would give 8.94 terabytes.
This specific setup required a whole lot of storage but still enough parity to survive two disk failures, so we went for RAID 6. Rebuild times were around 35 minutes, if I remember correctly.
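As a sanity check on those figures, the raw capacity math for a 22-drive layout (2 hot spares, 20 active 960 GB drives) works out like this; the decimal-GB vs binary-TiB conversion accounts for part of the gap to the quoted numbers, with filesystem overhead presumably covering the rest:

```python
# Capacity math for the 22-drive layout described above.
DRIVE_GB = 960
total_drives, hot_spares = 22, 2
active = total_drives - hot_spares     # 20 drives actually in the array

raid6_gb = (active - 2) * DRIVE_GB     # RAID 6 loses two drives to parity
raid10_gb = (active // 2) * DRIVE_GB   # RAID 10 loses half to mirroring

def gb_to_tib(gb):
    """Convert marketing gigabytes (10^9) to binary tebibytes (2^40)."""
    return gb * 1e9 / 2**40

print(f"RAID 6:  {raid6_gb} GB  ~ {gb_to_tib(raid6_gb):.2f} TiB")
print(f"RAID 10: {raid10_gb} GB ~ {gb_to_tib(raid10_gb):.2f} TiB")
```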
We never had disks die from reaching their endurance rating (it's about 6 petabytes written on that specific model); disks would die for other reasons (such as a power failure in a DC twice in one day, which took some equipment with it).
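Taking that ~6 PB figure at face value, a rough drive-writes-per-day conversion shows why endurance deaths were never seen; the 5-year window below is my assumption, matching typical enterprise warranties:

```python
# Rough endurance framing for a 960 GB drive rated at ~6 PB total writes.
DRIVE_BYTES = 960e9
ENDURANCE_BYTES = 6e15   # ~6 PB written, per the post above
YEARS = 5                # assumed warranty window

full_drive_writes = ENDURANCE_BYTES / DRIVE_BYTES   # complete drive fills
dwpd = full_drive_writes / (YEARS * 365)            # drive writes per day

print(f"{full_drive_writes:.0f} full drive writes, "
      f"~{dwpd:.2f} DWPD over {YEARS} years")
```

That works out to roughly 3.4 full drive writes per day, every day, for five years before hitting the rating, which very few real workloads sustain.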
The performance was nice, reliability was nice - really nothing to complain about :-) and Samsung is amazing at handling RMAs for partners, so ya.
Not sure I'd do it with consumer grade drives with a low endurance :-)
The workload by the way, was heavy random IO (reads and writes), with millions and millions of small files and sometimes big chunks of data as well.
Generally saturated the network before saturating the array.
Also @randvegeta - the 9.75k IOPS figure is from a sequential read test, so not really sure how impressive it is for an SSD array :-D
Ah. Missed the sequential part. Indeed nothing impressive for that then.
Hi,
Testnode: Dual Xeon, 384GB Ram, LSI SAS Controller, 12x 1TB SATA SSD Raid6 (WT, Direct IO, No Readahead)
I ran the tests as reported here: https://serverscope.io/trials/rmyz#io
dd:
FIO random read:
FIO random direct read:
FIO random write:
FIO random direct write:
@randvegeta as expected direct random writes are a downside of raid6.
Is that software RAID or hardware RAID?
Hardware RAID without special tuning/configuration of the LSI controller. I just changed it to WT, direct IO, and no readahead.
I agree 100%. Wear is almost doubled. Avoid both RAID 5/50 and RAID 6/60 with SSDs.
Also, even big RAID 10 arrays should use a mix of drives. For example: A row Samsung, B row Intel.