Hardware RAID vs Software RAID on SSDs

I am wondering if anyone has done any benchmarks or tests regarding the performance difference of a 2x SSD RAID 1 on software RAID vs hardware RAID.

In my case the setup will be used for a website doing heavy IO read/write (images) with a big DB.
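
For anyone who ends up benchmarking this themselves, a minimal fio sketch for comparing the two setups (the target path, sizes and queue depth below are placeholders, not a recommendation); run it once on each array and compare the reported IOPS:

    # rough sketch: 70/30 random read/write mix with 4k blocks against a file on the array
    # adjust --filename, --size and --runtime to your setup
    fio --name=raid-test --filename=/mnt/raid/fio.test \
        --rw=randrw --rwmixread=70 --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
        --size=4G --runtime=60 --time_based --group_reporting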


Comments

  • gestiondbigestiondbi Member, Patron Provider

    Hard!!!

  • MaouniqueMaounique Host Rep, Veteran

    The difference will be the write cache. On the other hand, some RAID cards introduce speed issues rather than solving them. We are way past the point where the CPU mattered in RAID setups, and RAID 1 involves no parity calculation anyway, so unless you want to learn something and test various scenarios, don't even think of a RAID controller for RAID 1 SSDs.

  • RAID1? Absolutely no point in introducing a HW controller
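
    For reference, a minimal sketch of setting up exactly that on Linux with mdadm (device names and the config path are placeholders and vary by distro):

        # mirror two SSDs, put a filesystem on the array and persist its definition
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        mkfs.ext4 /dev/md0
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf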

  • MicrolinuxMicrolinux Member
    edited January 2015

    SSDs in RAID 10 are so ridiculously fast the only benefit to hardware RAID may be BBU cache. Other than that, you'll get more from your money by throwing it into a fire.

    Edit: I can't read, it's RAID 1, which a hardware RAID will still be largely useless for.

    Thanked by 1 mpkossen
  • Don't you guys think RAID 5 (of course HW RAID) with 4x SSDs would be a better contender for massive SQL load, plus the redundancy it will provide? Or will RAID 10 be faster anyway?

  • MaouniqueMaounique Host Rep, Veteran
    edited January 2015

    mehargags said: Don't you guys think RAID 5 (of course HW RAID) with 4x SSDs would be a better contender for massive SQL load, plus the redundancy it will provide? Or will RAID 10 be faster anyway?

    Depends on the scenario; not all databases are equal.
    1. First, try to cache as much as you can in RAM, the whole database if possible (see the sketch at the end of this comment).
    2. Second, if there are many small writes and few reads, a write-back cache with a BBU will help, and a RAID 5 will need it. If there are many reads and few writes, you can do RAID 5 in software with no difference in speed.
    In general, keep your indexing in order, cache the tables in RAM or make temporary ones for frequent hits, cache the query result if it is needed many times soon after getting it, etc.
    Optimizing databases is an art and there are no predefined paths, only some general suggestions, because no two databases are alike and no two loads are the same, even if the DBs themselves are identical.
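
    As a rough sketch of point 1 for MySQL/InnoDB (the 8G figure and config path are placeholders; size the buffer pool to your hot data set and available RAM):

        # check the current buffer pool size and how the pool is being used
        mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
        mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 4 'BUFFER POOL AND MEMORY'
        # then raise it in my.cnf (restart required) under [mysqld]:
        #   innodb_buffer_pool_size = 8G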

  • MicrolinuxMicrolinux Member
    edited January 2015

    @mehargags said:
    Don't you guys think RAID 5 (of course HW RAID) with 4x SSDs would be a better contender for massive SQL load, plus the redundancy it will provide? Or will RAID 10 be faster anyway?

    RAID 5 will have slower writes. The bigger picture is that database loads requiring more IOPS than a single SSD can provide are exceedingly rare. You're talking about some sort of heavy-hitting enterprise load that is unlikely to be homed to a single system, anyhow.

  • mehargagsmehargags Member
    edited January 2015

    @Microlinux said:
    database loads requiring more IOPs than a single SSD can provide are exceedingly rare

    So you mean the I/O speeds enterprise SSDs provide are way above what an averagely big database can thrash? In that case, I guess RAID only has one role to play -> redundancy, so soft RAID 1 should be ideal!

  • MicrolinuxMicrolinux Member
    edited January 2015

    @mehargags said:
    So you mean the I/O speeds enterprise SSDs provide are way above what an averagely big database can thrash?

    Yes.

    @mehargags said:
    In that case, I guess RAID only has one role to play -> redundancy, so soft RAID 1 should be ideal!

    For 99% of database loads, yes.

    Your definition of "massive" database load is very likely overblown unless you have some extremely specialized application, in which case you probably would not be asking for advice on a random Internet forum.

    Thanked by 1 mehargags
  • mehargagsmehargags Member
    edited January 2015

    @Microlinux said:
    Your definition of "massive" database load is very likely overblown

    Of course I understand there isn't actually a scale for the word "massive" for a DB; it is purely subjective to the application. That's why I used the words "averagely big" in my second post. Thanks anyway!

  • @jimaek

    From what little you specified ...

    • The secret sauce is RAM
    • What is "heavy IO (images"? large or small? reasonably cachable or wildly random? How many/sec and what sizes each?
    • I take the approach "SSD and RAID1" to target "I need speeeeed and fassst IO but want to be on the safe side"?
    • Is the database related? What are we talking about here? Complex joined queries, simple queries, thousands/s or billions/s, range of and typical size of result sets?

    Oh, and: the DB is MySQL, I guess, and the front-end is driven by PHP?

    Thanked by 1 mehargags
  • Don't run SSDs in RAID5 - Neither HW nor SW.

    First, the parity calculation will either use MASSIVE CPU power or be slow due to the lack thereof.

    Second, it will cause very heavy usage of the SSDs and wear them out faster.

    Thanked by 1 mehargags
  • Huh? RAID5 is simple, not exactly a power-hungry operation (or did you mean RAID6, with its Galois-field math, which, indeed, isn't cheap?).
    And look: for any given amount X of data, on RAID1 it is written to both disks, always. So each SSD sees the full IO and data volume.
    That same amount X, written on RAID5 with (assumed 4) disks, will put about 1/3 of the load on each disk. So wear is lower (unfortunately, so is speed).

    Also, by far not every hardware RAID controller is fast enough to avoid becoming a bottleneck for SSDs in RAID 1, 0, or 10.

    For a DB, as a general rule of thumb, performance is in RAM, not in the disk controller or RAID. And even that only comes into play with a seriously complex and/or massive DB.
    For the average MySQL backend of some PHP site, those worries are beyond reasonable.

    Thanked by 1 vimalware
  • @William said:
    Don't run SSDs in RAID5 - Neither HW nor SW.

    Seems like RAID 10 is the de facto standard for the SSD era... right?

  • @William said:
    Don't run SSDs in RAID5 - Neither HW nor SW.

    First, the parity calculation will either use MASSIVE CPU power or be slow due to the lack thereof.

    Second, it will cause very heavy usage of the SSDs and wear them out faster.

    This is not true; take a look at ZFS raidz. With copy-on-write you will not have any problems with massive writes. It does perfect wear leveling because all cells will be written.

  • Not true; I had 480GB Corsair and OCZ SSDs fail much faster in RAIDZ1 than in a ZFS mirror configuration.

  • fileMEDIAfileMEDIA Member
    edited January 2015

    No, take a look at the Solaris documentation and what copy-on-write does. It is better than any hardware RAID 5/10 because it uses all cells at the same rate. This is also why ZFS does not have any write hole in raidzX setups. -> https://blogs.oracle.com/bonwick/entry/raid_z

    -> https://storagegaga.wordpress.com/2011/08/22/copy-on-write-and-ssds-a-better-match-than-other-file-systems/

    We have many more failures on SSDs in HW RAID setups because they cannot do TRIM. Most of the controllers do not support TRIM and do not use copy-on-write.

    Thanked by 1 vimalware
  • Modern SSDs already have built-in wear leveling, so this shouldn't matter.

  • I'm using ZFS on Linux and FreeBSD - Not Solaris. I also don't doubt it is better than HW RAID.

  • fileMEDIAfileMEDIA Member
    edited January 2015

    Look at the links I added to my post. ZoL isn't the right thing, too many problems. The bug tracker is full, and it should not be used in production.

    @rds100 said:
    Modern SSDs already have built-in wear leveling, so this shouldn't matter.

    Wear leveling only works well if you can pass TRIM through. Most of the controllers do not pass it.
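
    If you want to verify that on a given Linux box, a minimal sketch (device and mount point are placeholders):

        # non-zero DISC-GRAN / DISC-MAX means the device advertises discard (TRIM) support
        lsblk --discard /dev/sda
        # try a manual TRIM on a mounted filesystem; it fails if discards are not passed through
        fstrim -v /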

  • IkoulaIkoula Member, Host Rep

    Hello,

    I don't have any stats and I did not run any tests, so I might be wrong, but in my view HW RAID will always beat SW RAID, since SW RAID needs resources and system processes that HW RAID has natively on its controller, apart from the OS.

    I read this article; it says it depends on the usage, plus there are useful links at the end:
    http://www.cyberciti.biz/tips/raid-hardware-vs-raid-software.html

    From my own experience, if you are looking for performance under heavy load, go with HW RAID.

  • fileMEDIAfileMEDIA Member
    edited January 2015

    Ikoula said: I don't have any stats and I did not run any tests, so I might be wrong, but in my view HW RAID will always beat SW RAID, since SW RAID needs resources and system processes that HW RAID has natively on its controller, apart from the OS.

    Not really with common types like raidzX on ZFS and other filesystems. For example, take a storage head from one of our DedifyStack storage systems: it handles around 15TB of usable storage for many, many instances. It uses raidz3 in a striped vol, like RAID 60 does, and it does not require that much power. We have also enabled lz4 compression to save space on empty volumes, and everything is exported as zvols to the XenServer. It uses only a dual L5520.

    All modern CPUs have enough power for typical RAID types. You do not need any kind of RAID controller if you have a proper filesystem. For example, NetApp storage systems do not use any RAID controller for the storage disks; they handle everything in software.

    Real example: ZFS raidz3, lz4 compression, export over COMSTAR as a zvol, and 15TB usable storage.
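
    A minimal sketch of that kind of layout on any ZFS platform (pool name, disk names and zvol size are placeholders; the COMSTAR export step is Solaris/illumos-specific and left out here):

        # triple-parity pool, lz4 compression, and one block volume (zvol) to hand out
        zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
        zfs set compression=lz4 tank
        zfs create -V 1T tank/vol0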

  • IkoulaIkoula Member, Host Rep

    Sorry, I thought we were talking about local storage for one physical server.

  • It's the same, just without the overhead of COMSTAR and the other tasks.

  • MaouniqueMaounique Host Rep, Veteran

    @Ikoula said:
    Hello,

    I don't have any stats and I did not run any tests, so I might be wrong, but in my view HW RAID will always beat SW RAID, since SW RAID needs resources and system processes that HW RAID has natively on its controller, apart from the OS.

    I read this article; it says it depends on the usage, plus there are useful links at the end:
    http://www.cyberciti.biz/tips/raid-hardware-vs-raid-software.html

    From my own experience, if you are looking for performance under heavy load, go with HW RAID.

    That might have been the case 10 years ago.
    Today, CPUs AND internal buses have enough capacity to handle reasonable RAID computation and bandwidth even under heavy load.
    If you need "unreasonable" speed and have RAID 6 with weird parity schemes and all, the first thing a RAID controller will help you with is the superfast, specially designed cache backed by a battery for write-back. The specialized chips that compute parity will help too, but only as a second line.
    If we use any kind of SSD without parity computation (RAID 0, 1, 10 and the like), even the influence of the controller's cache is minimal, as the drives have their own cache, and if we suppose they are the same make and type, they will act the same.
    Where a controller can help further is failure cases: for example, one drive starts to experience problems and becomes way slower; the controller has the ability to detect it faster and cut it out to prevent problems and slowness. On the other hand, a proprietary RAID might not behave well if the drives are transplanted into another controller after a controller failure, while a SW RAID will not care.
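
    With md software RAID the failing-drive handling is cruder, but the array state is at least easy to watch; a minimal sketch (array name and mail address are placeholders):

        cat /proc/mdstat                                  # quick overview of all md arrays
        mdadm --detail /dev/md0                           # per-array state, failed/spare members
        mdadm --monitor --scan --daemonise --mail=root    # background alerts on degraded arrays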

    Thanked by 1 vimalware
  • IkoulaIkoula Member, Host Rep

    @fileMEDIA
    @Maounique
    Thanks for the knowledge update guys !

  • @davidgestiondbi said:
    Hard!!!

    RAID controllers are a single point of failure, so unless you need the BBU to save data in case of a crash, I'd always pick software RAID.

  • This is a really interesting discussion.
    I always thought HW RAID was better than SW RAID.

    So from what you guys are saying,
    2 Intel S3500 SSDs in software RAID 1
    and
    2 Intel S3500 SSDs with HW RAID 1 + BBU + FastPath (LSI card)

    will not have any difference in performance? (Or little to no difference?)

    (Let's leave price out of the comparison here. I am purely asking for my own case, not for a general scenario, hence giving the SSD spec + RAID card.)

  • @Umair said:
    This is a really interesting discussion.
    I always thought HW RAID was better than SW RAID.

    So from what you guys are saying,
    2 Intel S3500 SSDs in software RAID 1
    and
    2 Intel S3500 SSDs with HW RAID 1 + BBU + FastPath (LSI card)

    will not have any difference in performance? (Or little to no difference?)

    (Let's leave price out of the comparison here. I am purely asking for my own case, not for a general scenario, hence giving the SSD spec + RAID card.)

    With that BBU + FastPath, then I'd pick the HW RAID any time :)

    Thanked by 1 vimalware
  • MaouniqueMaounique Host Rep, Veteran

    The HW will have the advantage of the write-back cache. You can enable write-back in a software RAID, but if the power fails you are at very high risk.
    If you have few or big writes versus small and frequent ones, the cache will not matter much in the first scenario but will in the second.
    You have a SPOF in the RAID controller case, while if the SW RAID drives are on different identical controllers, you do not. However, onboard SATA controllers are very unlikely to fail by themselves without the MB as a whole.
    If you have a good and relatively modern RAID card which supports SSDs well, with plenty of cache and a BBU, you may get more performance in some scenarios with a RAID controller, but in most cases a SW RAID 1 is more desirable.
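
    For what it's worth, the drives' own volatile write cache can be checked and toggled from Linux; a minimal sketch (device name is a placeholder, and turning the cache off costs write speed):

        hdparm -W /dev/sda       # show whether the drive's write cache is enabled
        hdparm -W0 /dev/sda      # disable it if losing in-flight writes on power loss is unacceptable
        smartctl -H /dev/sda     # and keep an eye on overall drive health (smartmontools)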

    Thanked by 3: edan, vimalware, Umair