Hardware RAID vs Linux Software RAID

Kenshin Member
edited April 2012 in General

Just trying to get some feedback from anyone with experience with hardware vs software RAID, especially on an OpenVZ platform. I'm quite sure MB/sec is going to be pretty close between the two, but what about overall performance and maintenance? I'm looking at 6-8 disk RAID10 for comparison.

The last time I had a heavy-use server on software RAID1 + OpenVZ, rebuilding was horrible and took days, not to mention overall performance took a complete nosedive. The server wasn't overloaded, but filled just nicely, so there would have been a comfortable performance margin if not for the disk resync.
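
For what it's worth, the md rebuild can at least be watched and throttled with the standard knobs; a rough sketch, assuming the array is /dev/md0 (the limits are just example values, in KB/s per device):

    # watch rebuild progress
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # cap how fast md is allowed to resync so the containers stay usable
    echo 50000 > /proc/sys/dev/raid/speed_limit_max
    echo 1000  > /proc/sys/dev/raid/speed_limit_min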

Comments

  • Jacob Member

    If you are putting that many drives in and are using RAID 10, then I would use hardware RAID. mdadm is decent for RAID 1.

    Also, when using RAID 1 you should use WD Blacks instead of RE4 drives, as performance will be roughly the same and doing this will save you some bucks.

  • CloudxtnyHost Member, Host Rep

    If you're using RAID 10, you should also try to ensure the RAID card has battery backup, to avoid any problems with data sync should your server go offline.

  • @Kenshin said: I'm looking at 6-8 disk RAID10 for comparison.

    If you're spending that kind of cash on a new disk array, you might as well drop the $400 to $600 for a good hardware RAID card.

  • It really depends on what you're trying to accomplish. Software raid is slow as hell and puts unnecessary overhead on resources that should be free for other things. Hardware raid is fine, until it fails and you don't have an identical replacement card. On an undersold OpenVZ node, you might be going overboard... but I'm probably the only one on this forum who will tell you that.

  • @subigo said: It really depends on what you're trying to accomplish.

    Absolutely. I have OpenVZ with software raid 1 on a LAN server with a dozen local users, doing stuff like backing up their windows boxes and file-sharing. Disk IO just isn't an issue overall.

  • Francisco Top Host, Host Rep, Veteran

    RAID10 runs pretty good, but the cache does help if you're really pushing it.

    Stay away from RAID5/6 in software, it's a sack of crap on busy boxes.

    Francisco

  • Noted, my OpenVZ boxes in deployment are currently running an Adaptec 2805 on 6 drives. I'm setting up a KVM box for testing (then later production) and was wondering if software RAID 10 would be sufficient since I'll probably have fewer VMs. The RAID card probably costs around US$280 here in Singapore.

  • Francisco Top Host, Host Rep, Veteran

    You don't want to use software RAID on KVM normally since the performance really tanks once inside a VM.

    Francisco

    Thanked by 2: Erkan, adly
  • Will do, thanks for the heads up. Still waiting on the supplier to see if I can get my hands on the 5805 for a good price instead.

  • Francisco Top Host, Host Rep, Veteran

    @Kenshin said: Will do, thanks for the heads up. Still waiting on the supplier to see if I can get my hands on the 5805 for a good price instead.

    The 5805 isn't in production anymore, I don't think :( The 6805s are nice but you'd be looking at close to $600 in the US.

    Francisco

  • KuJoe Member, Host Rep

    We only use HW RAID; we'll never use SW RAID again. For an extra $200 it's worth adding HW RAID with a battery backup to our servers.

  • HW Raid all the way. I've tried soft raid and never liked it. Plus I work too close to LSI to say I don't use their products, might get some nasty looks lol.

  • Derek Member

    We had a huge debate about this on the IRC channel a while ago and some will disagree: software RAID is better than hardware RAID. In software RAID, you can dedicate RAM to be used as a cache. It all depends on what software you are using to build the array. And if you are not afraid of the CLI and are familiar with the software, software RAID will be a breeze.

    The only issue with software RAID is when the CPU is being hit hard by applications, and that's when hardware RAID comes in handy.

    Thanked by 1: TheHackBox
  • Francisco Top Host, Host Rep, Veteran

    @Derek said: In software RAID, you can dedicate RAM to be used as a cache

    True, kinda. MDADM RAID5/6 supports a caching param but it's < 64MB I think? MDADM RAID10 doesn't support it at all :(
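
    For anyone curious, the caching param in question is presumably the per-array stripe cache; a rough sketch, assuming a RAID5/6 array at /dev/md0 (the value is just an example, in 4KB pages per member disk, so 4096 pages is roughly 16MB per disk):

    # only exists for RAID4/5/6 arrays, not RAID10
    cat /sys/block/md0/md/stripe_cache_size

    # raise it; units are 4KB pages per member disk
    echo 4096 > /sys/block/md0/md/stripe_cache_size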

  • prometeus Member, Host Rep

    RAID 10 with md runs fine until you need to sync the disks (for whatever reason); if that happens you need available CPU power and the patience of your users :)
    I have a Xen test server with 4 disks in software RAID 10 with fewer than 20 VMs running fine.
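
    Setting one of those up is only a couple of commands anyway; a rough sketch (the device names are examples only, double-check them before running anything):

    # 4-disk md RAID10 from partitions of your choosing
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mkfs.ext4 /dev/md0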

  • quirkyquark Member
    edited April 2012

    @Derek said: The only issue with software raid is when the CPU is being hit hard with applications

    A distinction has to be made here between RAID {0,1}, where the CPU is mostly handling admin stuff, and RAID {5,6}, where it has to do the heavy-duty XOR computations for redundancy. SW RAID 5/6 may be good enough for a home desktop, but you should expect a virtualization node's CPU to be "hit hard" simply because it is hosting tens of VZs.

    The XOR calcs are such that regular CPUs will be brought to their knees if trying to maintain high throughput (200 MB/s+); the dedicated chip(s) on a HW RAID card are much better suited for this (there's a quick way to check what your own CPU manages, sketched at the end of this post). God forbid ever having to rebuild a SW RAID 5/6/50 array :)

    I'm happy running SSDs in RAID0, and a pair of 1TB WD Blacks in RAID1 using Intel fakeraid on my workstation, but backups and media go on a server with an Areca 1880 + HP SAS expander + n 2TB Hitachis. 512 MB of cache is good enough for my needs, but you can get versions with a DDR2/3 slot that will take a 2GB/4GB ECC UDIMM for crazy performance. If you are hard up or starting out, used Dell Perc5/6s go for pretty cheap on eBay.

    Off-topic, but the really nice bit is that the above server and workstation have a 10+ Gbps interconnect I made for about $200 with Infiniband (it's theoretically capable of twice that but the server's PCI Express slots are 1.0). :D
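
    Re the XOR point above: the kernel benchmarks its XOR/RAID6 routines when the md modules load, so you can see what your own CPU manages; a rough sketch (the exact output wording varies by kernel version):

    # the xor/raid6 code prints benchmark results (MB/s) at module load time
    dmesg | grep -iE 'xor|raid6'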

  • @quirkyquark do the Perc5 / Perc6 cards work on non-dell servers and with non-dell branded HDDs?

  • quirkyquark Member
    edited April 2012

    @rds100 said: do the Perc5 / Perc6 cards work on non-dell servers and with non-dell branded HDDs?

    Yes, in fact most of the time you can cross-flash to the equivalent LSI firmware without issues. The only slight issue is that they often come in the blade/tray form factor, so you may need to get your own PCI bracket :)

    Edit: This thread is the definitive starting point for all white-box use of Perc 5/6 cards. I used to use a Perc 5 with 4x500 GB drives in RAID 10 on an Intel P45 chipset (and older) with no issues. They do use the wide SAS connectors though, not mini-SAS, so cabling can be messy.

    Thanked by 1: rds100
  • KuJoe Member, Host Rep

    Based on our tests, a PERC6 gives at least a 100MB/s write speed difference, so it's worth the extra $100. The PERC5 uses (laptop?) RAM for cache though, so you can upgrade easily and cheaply. Just throwing that out there.
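
    For context, a crude way to compare sequential write speed between cards is something along these lines (a rough sketch; the path and size are just examples, and oflag=direct bypasses the page cache so you measure the array rather than RAM):

    # ~4GB sequential write straight to the array
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 oflag=direct
    rm /mnt/array/ddtest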

  • @KuJoe said: PERC5 uses (laptop?) RAM

    DDR400 desktop, ECC, registered IIRC.

  • Damian Member
    edited April 2012

    @rds100 said: and with non-dell branded HDDs?

    PERC5 yes. PERC6 has some issues:

    The PERC 6 family of Serial Attached SCSI (SAS) 
    RAID controllers supports SAS devices and Dell-qualified Serial ATA (SATA) devices
    

    I don't know if it actually refuses to work with non-Dell drives. The PERC H700 in one of our nodes will absolutely not work with non-Dell drives. Quite annoying, since the drives are $500 apiece from Dell. Interestingly, that controller is identified by lspci as:

    01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 05)
    

    ...makes me wonder if it can be flashed to standard LSI firmware. Not going to try it though.

  • comXyz Member
    edited August 2021

    @Francisco said:
    You don't want to use software RAID on KVM normally since the performance really tanks once inside a VM.

    Francisco

    Is it still the case, @Francisco?
    I'm going to test ZFS RAID 1 on Proxmox with 2x normal HDDs and create some KVM VMs for file services; hopefully it won't be too slow.
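
    The plan is roughly this; a sketch only, where the pool name and disk IDs are placeholders (I'd use the real /dev/disk/by-id paths):

    # two-disk ZFS mirror (RAID1 equivalent); ashift=12 suits 4K-sector drives
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
    zpool status tank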

  • Maybe create a new thread instead of resurrecting one more than 9 years old?

    Thanked by 2: TimboJones, adly
  • @tetech said:
    Maybe create a new thread instead of resurrecting one more than 9 years old?

    Was doing research and found the thread, and I don't think a new thread is necessary in this case.

  • Francisco Top Host, Host Rep, Veteran

    @comXyz said:

    @Francisco said:
    You don't want to use software RAID on KVM normally since the performance really tanks once inside a VM.

    Francisco

    Is it still the case, @Francisco?
    I'm going to test ZFS RAID 1 on Proxmox with 2x normal HDDs and create some KVM VMs for file services; hopefully it won't be too slow.

    On spindles? Probably. For SSDs/NVMes? You want software for that. Your RAID card will get flooded out if you have the cache enabled, and at that point you're just paying for a very expensive HBA.

    Francisco

    Thanked by 1: comXyz
  • @comXyz said:

    @tetech said:
    Maybe create a new thread instead of resurrecting one more than 9 years old?

    Was doing research and found the thread, and I don't think a new thread is necessary in this case.

    You'd be wrong. This thread is so cold that others have been banned for cold-posting threads far fewer years old.
