Hardware vs Software Raid 10

Which one would you use? Which one do you use? Which is better? I just ordered a new dedicated server, and the company says they are unable to install software RAID 10 and will install hardware RAID 10 instead (paying extra money :/).

I heard software RAID is better on SSDs (I'm using HDDs), but hardware RAID is better on HDDs because it does not consume server resources.

Which raid? (43 votes)
  1. Hardware RAID: 46.51%
  2. Software RAID: 53.49%

Comments

  • SpryServers_Tab Member, Host Rep
    edited March 2019

    It really all depends on the raid controller and your use case.

    Edit: Just FYI, you can set up softraid on your own. You don't need them to 'support' it.
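
    A minimal sketch of what that looks like with mdadm, assuming four data disks at /dev/sdb through /dev/sde (device names are placeholders, check yours with lsblk first):

      # identify the disks first -- the names below are only an example
      lsblk -o NAME,SIZE,MODEL

      # create a 4-disk RAID 10 array (this wipes the listed disks!)
      mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # filesystem, mount, and persist the config so it assembles on boot
      mkfs.ext4 /dev/md0
      mount /dev/md0 /mnt
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # config path varies by distro

      cat /proc/mdstat                                 # watch the initial sync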

    Thanked by: Letzien
  • @SpryServers_Tab said:
    It really all depends on the raid controller and your use case.

    Edit: Just FYI, you can set up softraid on your own. You don't need them to 'support' it.

    It's LSI, I still don't know the model tho.

    I know, I guess I would be able to install it myself, but I ordered a managed server. If I install it on my own, the server would no longer be managed, and I am still learning management stuff.

    It is to host sites by the way.

  • LSI? Probably softraid (BIOS) anyhow. Use software RAID, it's a lot easier to recover, and unless your CPU is complete shit, it shouldn't have really obnoxious overhead.
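
    To illustrate the "easier to recover" part, a rough sketch of pulling a software array up from a rescue/live system and swapping a dead member (all device names here are just examples):

      mdadm --assemble --scan        # auto-detects and assembles known arrays
      cat /proc/mdstat               # check array state and resync progress

      # if a member died: mark it failed, pull it, add the replacement
      mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
      mdadm /dev/md0 --add /dev/sdf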

    Thanked by: SpryServers_Tab
  • SpryServers_Tab Member, Host Rep

    Don't use bios softraid/fakeraid though.

    Thanked by: Chuck, eol
  • MikeA Member, Patron Provider
    edited March 2019

    Linux sw raid has been fine for me and saved my ass a few times. Never used hw raid though.

    Thanked by: netomx, Shazan
  • ralph Member

    A con of hardware RAID is that in case of controller or server hardware failure, it will be more difficult to recover the data unless spare hardware parts are available as replacements. SW RAID is preferable.

  • I will have to go with hardware since the company insists they are unable to build it, and it would not be a managed server then. I do not understand how a company does not know how to set up soft RAID 10.

  • MikeA Member, Patron Provider
    edited March 2019

    @desfire said:
    I will have to go with hardware since the company insists they are unable to build it, and it would not be a managed server then. I do not understand how a company does not know how to set up soft RAID 10.

    Maybe their setup/installation is automated and for some reason it can't create software RAID. Just install the OS yourself over IPMI/KVM and you can use SW RAID. The installer literally does it all.

  • You are the responsible party for setting up and maintaining the software side of things. This also includes software RAID.

  • Don't use Software RAID if your node has SSD

  • MikeA Member, Patron Provider

    @belemenon said:
    Don't use Software RAID if your node has SSD

    I have dozens of servers running software raid for SSD and NVMe, everything has been fine.

  • Levi Member

    SSD, NVMe - software. SAS, HDD - hardware. It's as easy as that.

  • jackb Member, Host Rep
    edited March 2019

    The only reason to use hardware RAID on Linux imo is if you're using one with a cache and BBU. Anything else and software RAID wins.

    Even still, my preference is software.

    @MikeA said:

    @belemenon said:
    Don't use Software RAID if your node has SSD

    I have dozens of servers running software raid for SSD and NVMe, everything has been fine.

    I presume he's referring to potential problems passing TRIM through mdadm. Leaving 10% unallocated makes that a non-issue, though I think the latest mdadm does support TRIM these days.
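
    If TRIM is the concern, it is quick to verify whether discard makes it through the md layer on a given setup (assuming a reasonably recent kernel and mdadm; /dev/md0 is just an example):

      lsblk --discard /dev/md0              # non-zero DISC-GRAN/DISC-MAX means discard is supported
      fstrim -av                            # manual trim of all mounted filesystems, verbose
      systemctl enable --now fstrim.timer   # or trim weekly instead of mounting with 'discard'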

    Thanked by: MikeA
  • Jarry Member

    If you can use zfs/raidz, go for it! If not, I recommend hw-raid. I have been using both for years, and hw-raid is imho easier to set up, monitor, expand, reconfigure, install, etc...
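
    For reference, the ZFS equivalent of RAID 10 is a pool of striped mirrors; a minimal sketch with placeholder pool/disk names:

      # two mirrored pairs striped together (RAID10-style)
      zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

      # raidz (roughly RAID5-like) would instead be:
      #   zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

      zpool status tank              # health, resilver progress, errors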

  • How much do you pay for the server and the managed service?

  • bacloud Member, Patron Provider

    Software RAID 10 works better than hardware RAID 10 with a cheap RAID controller.

  • akhfa Member

    Saved once by software RAID. Never used hw RAID though.

  • AnthonySmith Member, Patron Provider

    The answer could change 10 times with 10 different questions, e.g.

    Which OS? Windows... oh, you want hardware RAID. ESX? Yep, hardware it is! etc etc
    What are the SPECIFICS of the RAID card - BBU, cache, etc.?
    What RAID level? 6/10/1/0?
    What size is the data volume in total?

  • nem Member, Host Rep

    @ralph said:
    A con of hardware RAID is that in case of controller or server hardware failure, it will be more difficult to recover the data unless spare hardware parts are available as replacements. SW RAID is preferable.

    I've had hardware RAID fail. That being said, as long as you're not doing anything stupid like forcing write-back caching without a BBU, it'll be recoverable when it fails. The same goes for running RAID5 on 6 TB volumes - Russian roulette has higher survival odds.

    Software RAID is much easier to set up and administer, and it removes one component from the chain that could send a server down in flames. If you're on SSD/NVMe, not pegging out the CPU, and don't need an exotic nested configuration (RAID50, 60, 100), then stick with mdadm.
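
    On the "easier to administer" point, day-to-day monitoring is a couple of commands, and mdadm can mail you when a member drops (the address below is obviously a placeholder, and the monitor service name varies by distro):

      cat /proc/mdstat               # quick health overview of all md arrays
      mdadm --detail /dev/md0        # per-array state, failed and spare members

      echo "MAILADDR admin@example.com" >> /etc/mdadm/mdadm.conf
      systemctl restart mdmonitor    # e-mail alerts on failure/degraded events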

  • jsg Member, Resident Benchmarker

    First, for SSD and NVMe, software Raid is probably better. "Probably" better because it boils down to the question of whether a given hardware Raid controller knows how to deal with SSD specifics. With most controllers it seems likely the answer is "no".

    BUT: that question shouldn't arise at all anyway re Raid 1/0/10.

    Reason: What is Raid 0? It basically writes half of the data to each of 2 drives (striping). And Raid 1 writes the data twice, i.e. to two drives (mirroring). Raid 10 obviously is the combination. All extremely cheap operations.
    Looking at those operations one will find that basically nothing, no speed improvement worth mentioning, is to be gained by using Raid hardware.
    The same is largely true for Raid 5 (XORing is a quite cheap operation). Only for Raid 6 (or 60) does Raid hardware really make sense, due to a much more expensive algorithm. For Raid 5 or 50 it only makes sense under very heavy load (and then, getting a better CPU or more RAM is probably a smarter and cheaper choice).
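
    To make the "XORing is cheap" point concrete, a toy sketch of Raid 5-style parity on three data bytes (plain shell arithmetic, nothing Raid-specific about it):

      d1=0xA7; d2=0x3C; d3=0x5F          # three "data blocks"
      p=$(( d1 ^ d2 ^ d3 ))              # parity block = XOR of the data blocks

      # pretend the disk holding d2 died; rebuild it from the survivors plus parity
      rebuilt=$(( d1 ^ d3 ^ p ))
      printf 'lost block was 0x%X, rebuilt as 0x%X\n' $d2 $rebuilt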

    Plus, keep in mind the ugly side of all (afaik) raid hardware: it's proprietary and I mean the ugly version, ugly as in "even the same controller from the same company but with a slightly different version may fail to read the data after controller replacement".

    And yes, that's a real and concrete danger. One well known and much liked (for good reason) provider here with lots of experience just recently lost all data of a KVM node due to hardware Raid going berserk.

    Thanked by: Shazan, angstrom
  • PUSHR_Victor Member, Host Rep

    @belemenon said:
    Don't use Software RAID if your node has SSD

    Wut?

  • Jarry Member

    @jsg said:
    ... "even the same controller from the same company but with a slightly different version may fail to read the data after controller replacement".

    Yes, it may fail (but doesn't have to). My experience is different: an IBM M5016 hw-raid died in my server (LSI2208 chip, iirc). I had backups at hand, so I was prepared to re-install everything. I grabbed some old Fujitsu hw-raid card (I do not even remember the type, but it had an LSI chip too), attached the disks, powered on, and guess what! It booted to the OS without any problem, arrays properly detected and working. I'd never have believed it, had I not seen it with my own eyes...

    LSI support confirmed it is expected behavior: the array config is saved somewhere on disk, so if the replacement controller has the same (or newer) SoC, the arrays should be properly recognised. It does not matter what config, BIOS, driver, or ports the replacement controller has.

    I also had to deal once with broken sw-raid (mdadm) when the mobo died. Now this was a real problem: first I booted to grub, but could not load the kernel. I got panics all the time, because the kernel did not have drivers for the different hardware compiled in. I prepared a kernel elsewhere, but still could not boot. The disks had different dev-names (it was before UUIDs became common), so only the rootfs with raid1 could be reconstructed. I had to play with mdadm.conf and disk ports. Yes, I got it working, but it took much longer than switching a broken hw-raid controller...
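
    For what it's worth, on current systems that dance is mostly gone, since md arrays are identified by UUID rather than device names; moving the disks to a new box usually comes down to something like this (Debian-style paths assumed):

      mdadm --examine --scan                           # read array UUIDs straight off the disks
      mdadm --assemble --scan                          # assemble whatever was found
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist, then rebuild the initramfs
      update-initramfs -u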

    I'm still using both hw and sw-raid, but for sw-raid only zfs/raidz (sometimes with cheap hw-raid cards in IT mode, if I need more ports). That is a really very good sw-raid implementation, I'd say better than most enterprise-class hw-raid controllers...

  • jsg Member, Resident Benchmarker

    @Jarry

    I believe you and think your story makes sense - but should we play that lottery? After all, there have been many cases where even one and the same controller model couldn't read its predecessor's disks/data.

    My personal solution is to get n+1 Raid controllers from the same batch (if at all possible) if I really need hardware Raid, which is increasingly rarely the case. Re Raid 1/0/10 it's simply a waste of money and actually increases risk IMO.
