Performance Issue - 2 x SSD Evo 840 Pro 1TB (RAID1) with Adaptec 6405E Hardware RAID Controller
Hi, we have a dedi with 2 x SSD Evo 840 Pro 1TB disks connected to an Adaptec 6405E in RAID1, and we get these results with dd:
[root@hw3 adaptec]# dd if=/dev/zero of=/home/output.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 12.2099 s, 176 MB/s
[root@hw3 adaptec]# dd if=/dev/zero of=/home/output.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 11.6456 s, 184 MB/s
[root@hw3 adaptec]# dd if=/dev/zero of=/home/output.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 33.5191 s, 64.1 MB/s
[root@hw3 adaptec]# dd if=/dev/zero of=/home/output.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 50.2451 s, 42.7 MB/s
Can anyone tell us what may be wrong here?
We should get about 1.5GB/s or more, right?
Thanks
Comments
I had a similar problem in the past with an 850 EVO; I just changed to an LSI controller and all was OK. Maybe it's something with this controller... at least in my case the drive was new.
Your speeds may have to do with caching...
Does the RAID card have a cache and battery? If so, be sure the cache is enabled and that "write back" is enabled. Then perform your tests.
You may also be able to force write back on without a battery unit, but this should only be enabled for testing. You can lose data if this is enabled in production.
I don't have much experience with Adaptec cards, but the LSI (now Broadcom) cards we use have some additional performance settings when a BBU is used with SSDs.
So, I recommend playing with the caching and write back settings and re-running tests to see where you stand. That may give you a jumping-off point to understand where your speeds are falling off.
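If it helps, here's a rough sketch of checking and toggling those settings with Adaptec's arcconf utility. The exact subcommand syntax varies between arcconf versions, so verify against your version's help output before running:
arcconf GETCONFIG 1 LD   # logical drive settings, including current read/write cache mode
arcconf GETCONFIG 1 AD   # controller section shows battery/ZMM status
arcconf SETCACHE 1 LOGICALDRIVE 0 WB   # force write-back on LD 0 (testing only without a battery!)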
Finally, a RAID 1 may not get you into the 1.5GB/s range. You may need a larger RAID set to get there, but you should be seeing higher speeds than you are currently seeing.
There's something wrong there, though dd isn't an accurate test. Still, those results are pretty bad.
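If you want numbers that mean something, bypass the page cache. A minimal sketch (file paths are just examples):
dd if=/dev/zero of=/home/output.img bs=1M count=2048 oflag=direct   # O_DIRECT, so you measure the array, not RAM
fio --name=seqwrite --filename=/home/fio.test --rw=write --bs=1M --size=2G --direct=1 --end_fsync=1   # if fio is installed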
Is it connected at 6Gbps, or 1.5Gbps?
Have you tried another RAID Controller?
Sometimes, software RAID is faster than hardware RAID, due to the controller and port speeds.
Bad HW Raid card, swap it for another one.
If you want a true test, then run the following:
hdparm -Tt /dev/sda
hdparm -Tt /dev/sdb
I have the same results on my dedicated server with similar 1TB drives:
https://forum.piohost.co.uk/showthread.php?tid=3
Try it without the RAID card - hardware RAID is mostly pointless when you are using SSDs. It's more likely to hurt than help.
That depends on the RAID controller. If you're using a 10-year-old model (for example an Adaptec 5405) with SSDs, it's clear that the controller will slow down performance. But current models are fine for SSD usage. We're using Adaptec series 6 models with SSDs without problems. Series 7 or 8 would be even better.
You can use whatever model you want, but save for a very small number of scenarios, there isn't really any real-world benefit to using hardware RAID with SSDs.
You also lose TRIM support, which is rather important, especially for EVO/PRO/etc. drives, and good to have with enterprise drives. Without TRIM, garbage collection can really cripple the array's performance as time progresses.
MDADM/ZFS (software RAID) supports TRIM out of the box.
HBA + MDADM/ZFS = Good to Go!
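For anyone going that route, a rough sketch of the mdadm side. Device names and the mount point are placeholders, and mdadm --create is destructive, so double-check before running:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb   # mirror the two SSDs behind a plain HBA
mkfs.ext4 /dev/md0
fstrim -v /mountpoint   # once mounted, run periodically (or enable fstrim.timer)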
If you want to run ESXi and the like, you are stuck with hardware RAID, since it doesn't support software RAID (last I checked).
Yeah, running HW RAID is a total waste with SSDs. We had a bigger failure rate with SSDs in HW RAID than in SW RAID setups. Also, are the firmwares up to date, including on the SSDs?
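To check the SSD firmware versions, smartctl shows them in the identity section. Behind a RAID controller you may need a passthrough device type, which varies by setup (the -d sat form below is an assumption to verify for Adaptec, and sgN is a placeholder):
smartctl -i /dev/sda   # look for the 'Firmware Version' line (direct-attached case)
smartctl -i -d sat /dev/sgN   # possible passthrough form behind a controller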
Here is the info from the controller:
Yes, we will remove the RAID controller and that's it.
Thanks for your comments, very helpful!
As you can see, it's a 6GBps port.
6GBps = 48 Gbps.
Your expectations are unrealistic. A 6Gbps port only gives you 0.75GBps in theory (and after SATA's 8b/10b encoding overhead, the real payload ceiling is closer to 600MB/s per port), so two ports theoretically give 1.5GBps, which doesn't mean that this is the data rate you will achieve. I'm yet to see a SATA III SSD combo in RAID 1 that is able to work at that data rate. If you do get results close to 1.5GB/s, that would be the data rate of your RAID card's RAM cache, and not that of the drives.
Obviously the data points suggest that the RAID controller is one culprit, and actually the major one.
There are many misunderstandings and pitfalls, particularly when SSDs are involved.
Nice example: what is RAID 1? It boils down to writing out two copies of the buffer to two disks, which is something any halfway decent OS can do easily and quickly. Now, let's look at the physical side: data goes from the buffer through the CPU bridge and then via some kind of SerDes directly to the SATA PHY, and then through the SATA cables to the drives.
Doing the same with a HW RAID controller simply pushes the mirroring to the RAID card, which is guaranteed to be slower than the CPU bridge, as there is no RAID processing whatsoever involved, not even simple striping as in RAID 0.
In other words: using a RAID controller card in your situation is bound to slow things down. Plus, it is a wall for TRIM.
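You can check that TRIM wall directly; a quick sketch:
lsblk --discard   # DISC-GRAN/DISC-MAX of 0 means discards don't reach the drives
fstrim -v /home   # typically fails with 'the discard operation is not supported' behind such a card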
To make things worse, your RAID controller has a cache. Do not assume that disabling the cache on the RAID card actually disables it! In fact, it will almost certainly keep using it, because that's its only way to cope with fast data input, because that cache is way bigger than its own working RAM (which is usually ridiculously small but sufficient), and because that's how writing to disks works.
So, by using a RAID controller for RAID 1 you simply add a) a slower CPU and b) yet another buffer into the chain.
And that's what you see in your dd values.