Typical disk speed for SSD RAID 10?
Based on your experience, is the following a typical disk speed for SSD RAID 10?
This is on a dedicated server.
Below/Above average?
fio Disk Speed Tests (Mixed R/W 50/50):
---------------------------------
Block Size | 4k (IOPS)           | 64k (IOPS)
-----------|---------------------|-------------------
Read       | 82.36 MB/s (20.5k)  | 478.74 MB/s (7.4k)
Write      | 82.58 MB/s (20.6k)  | 481.26 MB/s (7.5k)
Total      | 164.94 MB/s (41.2k) | 960.01 MB/s (14.9k)

Block Size | 512k (IOPS)         | 1m (IOPS)
-----------|---------------------|-------------------
Read       | 751.05 MB/s (1.4k)  | 749.74 MB/s (732)
Write      | 790.96 MB/s (1.5k)  | 799.68 MB/s (780)
Total      | 1.54 GB/s (3.0k)    | 1.54 GB/s (1.5k)
TIA
Comments
Pretty good for RAID 10 SSD.
How many drives? Enterprise or consumer?
Four 1.92 TB enterprise drives.
It's a bit below average IMHO, but it's as expected. All depends on the make and model of each drive and what they're rated for (endurance vs performance, etc).
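For rough context, here's a back-of-envelope throughput ceiling for a 4-drive SATA RAID 10. The ~540 MB/s per-drive figure is an assumed typical SATA SSD sequential rating, not a measured value for these specific disks:

```python
# Back-of-envelope throughput ceiling for a 4-drive RAID 10 of SATA SSDs.
# 540 MB/s is an assumed per-drive sequential rating (typical SATA SSD),
# not a measurement of the drives in this thread.
drives = 4
per_drive_mbps = 540  # assumed per-drive sequential throughput

# RAID 10: reads can be serviced from all drives; writes hit both halves
# of each mirror, so only half the drives contribute write throughput.
read_ceiling = drives * per_drive_mbps          # theoretical read cap
write_ceiling = (drives // 2) * per_drive_mbps  # theoretical write cap

print(f"read ceiling:  ~{read_ceiling} MB/s")
print(f"write ceiling: ~{write_ceiling} MB/s")
```

Against a ~2160/~1080 MB/s ceiling, ~750 MB/s read plus ~790 MB/s write in a mixed 50/50 test isn't scandalous, but it does leave headroom.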
SW or HW RAID?
HW.
https://www.micron.com/products/ssd/product-lines/5200/
Your SSD performance is better than mine
fio Disk Speed Tests (Mixed R/W 50/50):
Not that it answers your question, but for those who want to compare RAID10 SSD to RAID-1 NVMe, here are the results of a small and idle node we have with RAID-1 NVMe drives:
The SSD RAID-10 speeds look fine and seem about on par with what I recall from previous and similar setups in the past.
@logaritse Is your result also for SSD with RAID 10?
@MannDude You tried RAID 10 with NVMe drives? I wonder what the speeds would look like.
I don't trust those benchmarks, but assuming you ran them on a halfway recent Linux kernel, the results are not great. Not particularly poor, but what I'd call lower-end midrange, possibly even upper-end low range considering that it's a hardware RAID 10.
But then it of course depends on what you will use them for. For a DB? Not a great idea. For "you know, stuff, OS, files" damn good enough.
Yes, RAID 10 with enterprise SSDs (1.92 TB).
I believe it would be capped by the CPU/motherboard PCIe lanes, so there's no point in doing NVMe RAID 10.
The CPU also plays a role. Which one are you using?
It was run on CentOS 7.9, kernel 3.10.0-1160.31.1.el7.x86_64.
The server was idle while running it, and it had also just been provisioned.
This is the server in question: https://www.hetzner.com/dedicated-rootserver/px93
The adapter is Adaptec ASR8405 https://storage.microsemi.com/en-us/support/raid/sas_raid/asr-8405/
If this is below the expected speed for this server and SSD RAID 10 setup, would it be worth raising with support, or will support not consider it a problem as long as the disks are running?
All 4 disks show ~20K Power_On_Hours.
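For scale, 20K power-on hours converts to a bit over two years:

```python
# Convert SMART Power_On_Hours into years of powered-on time.
power_on_hours = 20_000          # ~value reported by all 4 disks
years = power_on_hours / (24 * 365.25)
print(f"~{years:.1f} years powered on")  # ~2.3 years
```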
@ViridWeb Linked the server info above.
I presume that's a recent version? (Sorry, I don't care much for CentOS and hence am not up to date.)
That's good.
Adaptec adapters usually are OK. And I presume the RAID setup/configuration was done via the BIOS, and you simply get your 4 disks presented as one Linux device, correct?
Now that I know the RAID adapter is not some old board but a relatively decent one that should be good for 1 GB/s (maybe even a bit more, depending on details), and was presumably configured by Hetzner, I'm even less excited about the performance numbers you presented.
Caveat: I don't know that dedi product. Maybe it's one that doesn't promise speed-demon disks, and that RAID array is kind of normal for that product. Keep in mind that there are many who automatically click on "SSD" and don't think or care about what they actually get as long as it's SSD, and obviously providers adapt to that ...
That doesn't mean a whole lot. What's relevant is how many GB or TB have been written to it during that bit-over-two-years span (and obviously how many errors the devices had). 20K hours can mean anything between "works great" and "pretty much worn out drives".
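A quick way to turn SMART's write counter into TB, as a sketch assuming the drive reports Total_LBAs_Written in 512-byte units (this varies by vendor, so check the drive's SMART documentation before trusting the result):

```python
# Convert a SMART Total_LBAs_Written value into terabytes written.
# Assumes 512-byte LBAs, which is common but vendor-specific.
def lbas_to_tb(lbas: int, lba_size: int = 512) -> float:
    return lbas * lba_size / 1e12

# Hypothetical example value, not taken from the drives in this thread:
print(round(lbas_to_tb(293_000_000_000), 1))  # ~150.0 TB
```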
What do you intend to use them for?
I prefer Ubuntu but CentOS is what WHM/cPanel likes atm. Circa 2020.
I setup RAID 10 in Hetzner's rescue system using Adaptec's tool and now it is a single logical device.
Per SMART, it says 3% of its lifetime is used. Full output: https://pastebin.ubuntu.com/p/RXkk9Q2ZV3/
Busy WHM/cPanel server.
Good enough
Looks good to me, except maybe the stripe size, which might be somewhat large. Usually with SSDs I'd go for smaller stripe sizes.
Thanks. Looks good to me. Just the roughly 150 TB written ... hmm, what's the TBW for those disks? Preferably solidly north of 500. Plus 'unexpected power down events'? Doesn't that Adaptec have a battery? If not, you are betting on the SSDs being enterprise types with onboard "UPS" caps.
With MySQL I guess? Get your stripe size lower, max 16 KB if possible.
Thanks @jsg
3500 TB per their datasheet for the ECO model of the 1.92 TB drives: https://gzhls.at/blob/ldb/2/7/2/a/5694e04e1e329ba28a48d6fe5f6cf598a7d7.pdf
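Putting the two numbers together, a trivial check of how much of the rated endurance the ~150 TB written represents (assuming the 3500 TB TBW figure applies to these exact drives):

```python
# Fraction of rated endurance consumed. Both inputs are approximate
# figures from the thread: ~150 TB host writes per SMART, and the
# 3500 TB TBW rating from the datasheet for the 1.92 TB ECO model.
tb_written = 150
tbw_rating = 3500

pct_used = 100 * tb_written / tbw_rating
print(f"~{pct_used:.1f}% of rated endurance used")  # ~4.3%
```

That's in the same ballpark as the 3% lifetime-used figure SMART itself reports.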
It doesn't.
They mention this: Enhanced power-loss data protection with data protection capacitor monitoring
I guess this is what you mean by "UPS" capacitors.
Yes, MySQL too. What would be the benefit of reducing it to 16 KB?
Yes, I guess so too.
Way longer SSD life and highly likely better performance for your use case. Plus, keep in mind how DBs write to disk ...
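To make the DB point concrete: InnoDB writes in 16 KB pages by default, so a stripe much larger than that holds many pages per stripe. Whether smaller stripes actually help depends on the controller and workload; the sketch below just shows the size mismatch, with the larger stripe sizes assumed for illustration (the thread never states the array's actual stripe size):

```python
# How many 16 KB InnoDB pages fit in one stripe, for a few stripe
# sizes. The 64/256 KB values are assumptions for illustration only.
innodb_page_kb = 16  # InnoDB default page size

for stripe_kb in (16, 64, 256):
    pages_per_stripe = stripe_kb // innodb_page_kb
    print(f"{stripe_kb:>3} KB stripe -> {pages_per_stripe} page(s) per stripe")
```

With a 16 KB stripe, each DB page write maps onto exactly one stripe, which is the alignment the "max 16 KB if possible" advice above is aiming for.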
It's a good thing you didn't tell that to Intel or AMD, who both have NVMe RAID support in their CPUs, like Threadripper and above.
Well then, good to know! But for the record, my last motherboard, an X470D4U2, provided half speed with 2 M.2 sticks. Yeah, I know there are things supporting it now, but I still believe there's no point in doing it now; maybe in the future?
The reason invariably is PCIe speed and not enough PCIe lanes.
PCIe 3.0 x2
That's a shitty board. It's expected to be gimped when using x2 lanes. It makes no sense to me why vendors bothered with them.
With other CPUs and motherboards, using PCIe bifurcation you can have 8-11 (or more?) NVMe drives, each on PCIe 3.0 x4 lanes, in a system without an external RAID card.
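For the lane math behind the "half speed" observation, a rough sketch using PCIe 3.0's usable per-lane bandwidth after 128b/130b encoding:

```python
# Approximate usable bandwidth per PCIe 3.0 lane:
# 8 GT/s * (128/130 encoding efficiency) / 8 bits ~= 0.985 GB/s.
per_lane_gbs = 8 * (128 / 130) / 8

for lanes in (2, 4):
    print(f"PCIe 3.0 x{lanes}: ~{per_lane_gbs * lanes:.2f} GB/s")
```

An x2 link tops out just under 2 GB/s, which is why a fast PCIe 3.0 x4 NVMe drive in such a slot reads as roughly half speed.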