RAID0 SSDs
SimpleNode
Member
Have a few projects that require SSDs/Lots of IOPS.
What do you think about putting SSDs in RAID0? From what I know, it should be fairly safe as SSDs rarely fail, and I'll take daily backups.
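For context on the risk: a RAID0 array loses everything if any single member drive fails, so the failure probability compounds with drive count. A rough back-of-envelope sketch (the 2% per-drive annual failure rate is an assumed illustrative figure, not a measured one):

```python
# Rough sketch: probability that a RAID0 array loses data in a year.
# RAID0 fails if ANY member drive fails, so risk grows with drive count.
# The 2% annual failure rate (AFR) used below is an assumed illustrative number.

def raid0_annual_failure_prob(drives: int, drive_afr: float) -> float:
    """P(array fails) = 1 - P(all drives survive)."""
    return 1 - (1 - drive_afr) ** drives

for n in (1, 2, 4, 8):
    print(f"{n} drives: {raid0_annual_failure_prob(n, 0.02):.1%}")
```

Even with optimistic per-drive numbers, striping across more drives multiplies your exposure, which is where the daily backups come in.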
Comments
Who lied to you so badly?
+1, not sure who has told you that.
I've had almost 10 SSDs fail on me in the last 6 months.
Well, I'd assume they die far less than regular disks.
Oh well, I might as well not take the risk and just use RAID10.
I still wonder why SSDs fail more often than HDDs, as they don't have any mechanical elements and in theory should be more reliable. RAID1 at least is an absolute must. I'd say go with RAID10 on a VPS node, at least to have more spare space per node and increase performance.
For my project, I'm trying to choose between redundancy and a f**kton of I/O.
Then again, I don't know the difference in performance between RAID10 and 0.
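To a first approximation: with N identical drives, RAID0 stripes reads and writes across all N, while RAID10 can read from any mirror copy but must write everything twice, halving write throughput. A quick sketch of the idealized numbers (the 500 MB/s per-drive figure is an assumption, and real controllers won't scale perfectly):

```python
# Idealized RAID0 vs RAID10 throughput (ignores controller overhead and
# real-world scaling losses). The per-drive speed is an assumed figure.

def raid0(n_drives: int, drive_mbps: int):
    # Stripes across all drives for both reads and writes.
    return drive_mbps * n_drives, drive_mbps * n_drives

def raid10(n_drives: int, drive_mbps: int):
    # Reads can come from either mirror copy; every write goes to two drives.
    return drive_mbps * n_drives, drive_mbps * n_drives // 2

for fn in (raid0, raid10):
    read, write = fn(4, 500)
    print(f"{fn.__name__}: read ~{read} MB/s, write ~{write} MB/s")
```

So on paper RAID10 gives up only write throughput relative to RAID0, in exchange for surviving a drive failure per mirror pair.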
Go with Intel SSDs @ RAID5 with a LSI 9266-8i, almost same performance as RAID0
I'd say go with RAID10. It literally is RAID1 + RAID0, as long as you have a good hardware RAID card.
The number of writes NAND can take before it dies is getting lower and lower with each die shrink, especially on cheap consumer MLC and now TLC. With that, of course, the price falls too.
@Alex_LiquidHost That brings me back to a post I made a while back: I'm not sure what cards Incero use.
Adaptec 2405 mostly. BBU ones are 5405 I think.
No wonder you have so many failed SSDs ;-)
I already have a node with a H/W RAID card from Incero; how would I go about finding its model?
lspci only shows this:
"Adaptec ACC-RAID" isn't that descriptive :P
Install arcconf.
]# arcconf getconfig 1
Controllers found: 1
Controller information
Controller Status : Optimal
Channel description : SAS/SATA
Controller Model : Adaptec 2805
@Kenshin Yup, 2405.
Good card, I'm using the 2805 on most of my nodes since the cost of the card is probably a few tens of dollars more than the 2405.
But remember it doesn't support SATA3, so your throughput is going to be stuck at SATA2 levels (~200MB/sec). You'll need the 6405 for SATA3.
Let's put it this way to close the SSD topic: there are no "enterprise" class 2.5" SSDs - only consumer grade. Think of your shitty WD or Hitachi 5400 RPM SATA drives that have no place in a server - that's what an SSD today is from a reliability perspective.
If you are going to use SSDs, remember they have a finite lifetime - do it only in RAID10, make sure you have spares handy, and monitor the heck out of them. From a performance/reliability perspective, Intel seems to be the best in my experience.
On a busy, I/O-intensive system, even without any kind of drive failure, you should plan on an SSD lifetime of 12-15 months at maximum (rule of thumb). If that's something you can't live with, go with tried & true 15K RPM SAS drives.
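That rule of thumb can be sanity-checked against a drive's rated write endurance. A rough sketch (the endurance, write-rate, and write-amplification figures below are assumptions for illustration; check your drive's actual TBW rating and measure your real write load):

```python
# Rough SSD lifetime estimate from rated write endurance (TBW) versus the
# host's daily write volume. All figures here are illustrative assumptions.

def ssd_lifetime_days(endurance_tbw: float, daily_writes_gb: float,
                      write_amplification: float = 2.0) -> float:
    """Days until rated endurance is exhausted, given write amplification."""
    endurance_gb = endurance_tbw * 1000
    return endurance_gb / (daily_writes_gb * write_amplification)

# e.g. a drive rated ~150 TBW under ~200 GB/day of host writes
days = ssd_lifetime_days(endurance_tbw=150, daily_writes_gb=200)
print(f"~{days:.0f} days (~{days / 30:.1f} months)")
```

With those assumed numbers you land right around the 12-15 month range; a heavier write load or cheaper NAND shortens it proportionally.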
What about these?
http://h18000.www1.hp.com/products/quickspecs/14038_div/14038_div.HTML
@Damien - Ok, you got me. They've stuck them into a 2.5 inch tray. Wonder about the specifics on the 3 year warranty and what exclusions there are, though that'd certainly be the way to go.
That said, the 800GB SSD they list is $9,179.99 for one drive (ref: http://www.cdw.com/shop/products/HP-800GB-SAS-ME-2.5IN-EM-SSD/2851265.aspx ) and the 400GB is $4,600 and some change.
I guess if you're poor and can't afford a real SSD.
http://www.provantage.com/ocz-technology-zd4rm88-fh-3-2t~7OCZD04J.htm
@SimpleNode RAID-0 using a hardware RAID controller is awesome, but software RAID-0 not so much, regardless of whether the drives are SSDs or not.
And why is that? You'll get better performance out of software raid0.
@SimpleNode - what on earth are you doing that a good RAID10 setup would not be fast enough?...
Especially since you can keep adding disks
Video editing of lots and lots of porn... The people need it fast!
Intel has the E series based on SLC chips, as well as the eMLC ones (710, 910) which have decent reviews. It really depends what you're doing with the SSD.
Another thing regarding SSDs: failure usually happens because of (1) firmware or (2) write failure. My entire office runs on the Intel G2 SSDs; I had 1 die due to a firmware bug, everything else is still working. Server wise, I had a pair of old-generation ones die due to excessive writes, but they were still readable. RAID0 is a decent option, especially if you're running MySQL with replication. Otherwise, for more static data, a daily or hourly rsync to normal harddisks would more than suffice. Performance + reliability in one package.
End of the day, it's about knowing the pros/cons and deploying SSDs correctly will still result in a good deployment. I've had RAID1 arrays fail together before, so shit can happen on normal drives. Nothing is failure proof, but so far based on the number of SSDs I have around, pretty decent as long as backups are always readily available.
Out of how many?
If RAID1 was used, wouldn't both drives get the same wear, therefore dying at the same time?
@SimpleNode Not exactly
I am not stopping you from using it. Even RAID1 can benefit from a hardware RAID controller. Software RAID is horrific for any kind of RAID. Again, I am not contradicting you, so if you want to use software RAID then please do.
Those are some of the best SSD drives that Intel has ever made.
That's actually an interesting point when it comes to SSD drives. I guess it would protect you from random failures.
RAID1 on SSD will have equal wear; it only protects against individual failure like @marcm mentioned. Problem is, price. Doubling SSD costs for no additional space isn't exactly fun, and with proper planning (MySQL replication, rsync for static files), you could effectively replace SSD RAID10 with a pair of 2TB drives in RAID1. Based on Intel 520s in SG, cost per GB is about US$1. Regular SATA drives are $0.12/GB.
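To put that price gap in concrete terms, here's a quick sketch using the per-GB figures quoted above (the usable capacities are illustrative; mirrored layouts like RAID1 and RAID10 need double the raw capacity):

```python
# Cost comparison using the per-GB prices quoted above: ~US$1/GB for SSD
# (Intel 520 class) versus ~$0.12/GB for regular SATA. Mirrored layouts
# (RAID1/RAID10) need twice the raw capacity for a given usable capacity.

def array_cost(usable_gb: int, price_per_gb: float, mirrored: bool = True) -> float:
    raw_gb = usable_gb * 2 if mirrored else usable_gb
    return raw_gb * price_per_gb

ssd_raid10 = array_cost(960, 1.00)    # ~1 TB usable on mirrored SSDs
sata_raid1 = array_cost(2000, 0.12)   # 2 TB usable on mirrored SATA
print(f"SSD RAID10: ${ssd_raid10:,.0f}  vs  SATA RAID1: ${sata_raid1:,.0f}")
```

Roughly a 4x price difference for half the space, which is why splitting hot data (SSD) from static data (SATA + rsync) is attractive.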
End of the day, plan and test it out before deploying live. Knowing your limitations as well as the failure points will save you a lot of money and pave the way for prevention methods and disaster recovery in the event of shit happening.