RAID0 SSDs

SimpleNode Member
edited October 2012 in General

Have a few projects that require SSDs / lots of IOPS.

What do you think about putting SSDs in RAID0? From what I know, it should be fairly safe as SSDs rarely fail, and I'll take daily backups.


Comments

  • AlexBarakov Patron Provider, Veteran

    @SimpleNode said: fairly safe as SSDs rarely fail

    Who lied to you so badly?

    Thanked by 1: klikli
  • @Alex_LiquidHost said: Who lied to you so badly?

    +1, not sure who has told you that.

  • I've had almost 10 SSDs fail on me in the last 6 months.

  • Well, I'd assume they die far less than regular disks.

    Oh well, I might as well not take the risk and just use RAID10.

  • AlexBarakov Patron Provider, Veteran

    I still wonder why SSDs fail more often than HDDs, as they don't have any mechanical elements and in theory should be more reliable. RAID1 at least is an absolute must. I'd say go with RAID10 on a VPS node, at least to have more spare space per node and increase the performance.

  • SimpleNode Member
    edited October 2012

    For my project, I'm trying to choose between redundancy and a f**kton of I/O.

    Then again, I don't know the difference in performance between RAID10 and 0.
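For a rough feel of the trade-off, here is a sketch of the ideal-case scaling (the per-drive IOPS and capacity figures below are made up for illustration; real controllers fall short of these ceilings):

```python
# Ideal-case capacity and IOPS scaling for RAID0 vs RAID10 with n identical drives.
# Example drive figures are invented for illustration only.

def raid0(n, drive_iops, drive_gb):
    """Pure striping: full capacity, reads and writes both scale with n."""
    return {"capacity_gb": n * drive_gb,
            "read_iops": n * drive_iops,
            "write_iops": n * drive_iops}

def raid10(n, drive_iops, drive_gb):
    """Mirrored stripes: half the capacity, reads scale with n, writes with n/2."""
    assert n % 2 == 0, "RAID10 needs an even number of drives"
    return {"capacity_gb": (n // 2) * drive_gb,
            "read_iops": n * drive_iops,
            "write_iops": (n // 2) * drive_iops}

# Four SSDs at 40k IOPS / 240 GB each:
print(raid0(4, 40_000, 240))   # {'capacity_gb': 960, 'read_iops': 160000, 'write_iops': 160000}
print(raid10(4, 40_000, 240))  # {'capacity_gb': 480, 'read_iops': 160000, 'write_iops': 80000}
```

In this idealized model the two differ only in write IOPS and usable space; random reads scale identically, which is why RAID10 is usually "fast enough".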

  • Go with Intel SSDs @ RAID5 with an LSI 9266-8i, almost the same performance as RAID0

  • AlexBarakov Patron Provider, Veteran

    I'd say go with RAID10. It literally is RAID1 + RAID0, as long as you have a good hardware RAID card.

  • @Alex_LiquidHost said: I still wonder why SSDs fail more often than HDDs, as they don't have any mechanical elements and in theory should be more reliable.

    The number of writes the NAND on an SSD can take before it dies, especially on cheap consumer MLC and now TLC, keeps getting lower as they do die shrinks. With that, of course, the price falls.
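As a rough sketch of what that means in practice (every number below is an illustrative assumption; P/E ratings, write rate, and write amplification vary wildly by drive and workload):

```python
# Back-of-the-envelope SSD lifetime from NAND program/erase (P/E) endurance.
# All figures here are illustrative assumptions, not specs for any real drive.

def ssd_lifetime_days(capacity_gb, pe_cycles, host_writes_gb_per_day,
                      write_amplification=2.0):
    """Days until the rated P/E cycles are exhausted,
    assuming perfect wear leveling across all cells."""
    total_endurance_gb = capacity_gb * pe_cycles
    nand_writes_gb_per_day = host_writes_gb_per_day * write_amplification
    return total_endurance_gb / nand_writes_gb_per_day

# 240 GB drive, 200 GB of host writes per day:
print(ssd_lifetime_days(240, 3000, 200))  # MLC (~3000 P/E): 1800.0 days
print(ssd_lifetime_days(240, 1000, 200))  # TLC (~1000 P/E): 600.0 days
```

The die-shrink point shows up directly: cut the P/E rating by 3x and the same workload wears the drive out 3x sooner.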

  • @Alex_LiquidHost That brings me back to a post I made a while back: I'm not sure what cards Incero use.

  • @SimpleNode said: @Alex_LiquidHost That brings me back to a post I made a while back: I'm not sure what cards Incero use.

    Adaptec 2405 mostly. BBU ones are 5405 I think.

  • @serverian said: SSDs @ RAID5

    No wonder you have so many failed SSDs ;-)

  • SimpleNode Member
    edited October 2012

    I already have a node with a H/W RAID card from Incero, how would I go about finding its model?

    lspci only shows this:

    [root@dysprosium ~]# lspci
    00:00.0 Host bridge: Intel Corporation Xeon E3-1200 Processor Family DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
    00:01.1 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
    00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 05)
    00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
    00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
    00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
    00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
    00:1f.0 ISA bridge: Intel Corporation C202 Chipset Family LPC Controller (rev 05)
    00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
    00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
    02:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09)
    04:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
    05:03.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200eW WPCM450 (rev 0a)
    

    "Adaptec AAC-RAID" isn't that descriptive :P

  • Kenshin Member
    edited October 2012

    Install arcconf.

    ]# arcconf getconfig 1
    Controllers found: 1

    Controller information

    Controller Status : Optimal
    Channel description : SAS/SATA
    Controller Model : Adaptec 2805

  • @Kenshin Yup, 2405.

  • Good card. I'm using the 2805 on most of my nodes since the cost of the card is probably a few tens of dollars more than the 2405.

    But remember it doesn't support SATA3, so your throughput is going to be stuck at SATA2 levels (~200MB/sec). You'll need the 6405 for SATA3.

  • Let's put it this way to close the SSD topic. There are no "enterprise" class 2.5" SSDs, only consumer grade. Think of your shitty WD or Hitachi 5400 RPM SATA drives that have no place in a server; that's what an SSD today is from a reliability perspective.

    If you are going to use SSDs, they have a finite lifetime: do it only in RAID10, make sure you have spares handy, and monitor the heck out of them. From a performance/reliability perspective, Intel seems to be the best in my experience.

    On a busy, I/O-intensive system, even without any kind of drive failure you should plan on an SSD lifetime of 12-15 months at most (rule of thumb). If that's something you can't live with, go with tried & true 15K RPM SAS drives.

  • @unused said: There are no "enterprise" class 2.5 SSDs - only consumer grade.

    What about these?

    http://h18000.www1.hp.com/products/quickspecs/14038_div/14038_div.HTML

  • @Damien - Ok, you got me. They've stuck them into a 2.5 inch tray. I wonder about the specifics of the 3-year warranty and what exclusions there are, though that'd certainly be the way to go.

    That said, the 800GB SSD there lists at $9,179.99 for one drive (ref: http://www.cdw.com/shop/products/HP-800GB-SAS-ME-2.5IN-EM-SSD/2851265.aspx ), and the 400GB is $4,600 and some change.

  • jarjar Patron Provider, Top Host, Veteran

    @unused said: That said, the 800gb SSD there list is $9,179.99

    I guess if you're poor and can't afford a real SSD.
    http://www.provantage.com/ocz-technology-zd4rm88-fh-3-2t~7OCZD04J.htm

  • @SimpleNode RAID-0 using a hardware RAID controller is awesome, but software RAID-0 not so much regardless if the drives are SSD or not.

  • @marcm said: @SimpleNode RAID-0 using a hardware RAID controller is awesome, but software RAID-0 not so much regardless if the drives are SSD or not.

    And why is that? You'll get better performance out of software raid0.

  • Nick_A Member, Top Host, Host Rep

    @SimpleNode - what on earth are you doing that a good RAID10 setup would not be fast enough?...

  • @Nick_A said: @SimpleNode - what on earth are you doing that a good RAID10 setup would not be fast enough?...

    Especially since you can keep adding disks :)

  • @Nick_A said: @SimpleNode - what on earth are you doing that a good RAID10 setup would not be fast enough?...

    Video editing of lots and lots of porn... The people need it fast!

    @unused said: There are no "enterprise" class 2.5 SSDs

    Intel has the E series based on SLC chips, as well as the eMLC ones (710, 910) which have decent reviews. It really depends what you're doing with the SSD.

    Another thing regarding SSDs: failure usually happens because of (1) firmware bugs or (2) write wear-out. My entire office runs on Intel G2 SSDs; I had one die due to a firmware bug, everything else is still working. Server-wise, I had a pair of old-generation ones die due to excessive writes, but they were still readable. RAID0 is a decent option, especially if you're running MySQL with replication. Otherwise, for more static data, a daily or hourly rsync to normal hard disks would more than suffice. Performance + reliability in one package.

    At the end of the day, it's about knowing the pros and cons; deploy SSDs correctly and you'll still end up with a good deployment. I've had RAID1 arrays fail together before, so shit can happen on normal drives too. Nothing is failure-proof, but based on the number of SSDs I have around, they're pretty decent so far, as long as backups are always readily available.
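The daily/hourly rsync to spinning disks could be a crontab entry along these lines (a sketch; both paths are hypothetical):

```shell
# Hypothetical crontab entry: hourly one-way mirror of SSD data onto an HDD volume.
# -a preserves permissions/ownership/timestamps; --delete removes files that
# vanished from the source so the copy stays an exact mirror.
0 * * * * rsync -a --delete /ssd/data/ /mnt/hdd-backup/data/
```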

  • @serverian said: I've had almost 10 SSDs fail on me in the last 6 months.

    Out of how many?

  • If RAID1 was used, wouldn't both drives get the same wear and therefore die at the same time?

  • Awmusic12635 Member, Host Rep

    @SimpleNode Not exactly

  • @Corey said: And why is that? You'll get better performance out of software raid0.

    I am not stopping you from using it. Even RAID1 can benefit from a hardware RAID controller. Software RAID is horrific for any kind of RAID. Again, I am not contradicting you, so if you want to use software RAID then please do.

    @Kenshin said: My entire office runs on the Intel G2 SSDs

    Those are some of the best SSD drives that Intel has ever made.

    @SimpleNode said: If RAID1 was used, wouldn't both drives get the same wear and therefore die at the same time?

    That's actually an interesting point when it comes to SSD drives. I guess it would protect you from random failures.

  • @SimpleNode said: If RAID1 was used, wouldn't both drives get the same wear and therefore die at the same time?

    RAID1 on SSDs will have equal wear; it only protects against individual failures like @marcm mentioned. The problem is price. Doubling SSD costs for no additional space isn't exactly fun, and with proper planning (MySQL replication, rsync for static files), you could effectively replace RAID10 with a pair of 2TB drives in RAID1. Based on Intel 520 prices in SG, cost per GB is about US$1. Regular SATA drives are $0.12/GB.

    At the end of the day, plan and test it out before deploying live. Knowing your limitations as well as the failure points will save you a lot of money, and will pave the way for prevention methods and disaster recovery in the event of shit happening.
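Using the per-GB prices quoted above, the mirroring overhead pencils out roughly like this (a quick sketch based on the thread's own $1/GB SSD and $0.12/GB SATA figures):

```python
# Cost per *usable* GB once RAID overhead is applied.
# raid_overhead = raw capacity / usable capacity (2.0 for RAID1 and RAID10).
# Prices are the ones quoted in the thread, circa 2012.

def usable_cost_per_gb(raw_price_per_gb, raid_overhead=2.0):
    return raw_price_per_gb * raid_overhead

print(usable_cost_per_gb(1.00))  # SSD in RAID1: 2.0 dollars per usable GB
print(usable_cost_per_gb(0.12))  # SATA in RAID10: 0.24 dollars per usable GB
```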
