software or hardware raid?

jaxyan Member
edited December 2011 in General

Further to my last discussion about buying RAM:
I'm leaning towards ECC RAM instead of desktop RAM.

But I have another headache now.

I've googled a lot about RAID10, both software and hardware RAID,
but I'm still struggling to decide which one to use...

Any suggestions on this topic would be appreciated ^^


Comments

  • Hardware RAID is worth it only if you have a good RAID card + BBU.

    Under GNU/Linux or BSD, software RAID is fine for RAID 1 or RAID 10; performance will be almost the same with a modern CPU.

    Don't know about 'doz though.
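
    For reference, a rough sketch of what a software RAID 10 setup looks like under Linux with mdadm (the device names /dev/sdb1-/dev/sde1 are placeholders for your four drives):

        mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
        mkfs.ext4 /dev/md0                          # put a filesystem on the new array
        mdadm --detail --scan >> /etc/mdadm.conf    # persist the array (path is /etc/mdadm/mdadm.conf on Debian/Ubuntu)
        cat /proc/mdstat                            # watch the initial resync progress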

kylix Member
    edited December 2011

    Switch off the hard disks' write cache if you use software RAID, though. Otherwise you risk data loss if the system crashes or the power goes out.
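
    For example (just a sketch; /dev/sda is a placeholder, and turning the cache off does cost some write performance):

        hdparm -W0 /dev/sda    # switch the drive's write cache off
        hdparm -W /dev/sda     # show the current write-caching setting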

jh Member

    How many disks are you using?

  • Is it a VPS node you are preparing? What specs will it have?

KuJoe Member, Host Rep

    I've had nothing but bad luck with software RAID. Not sure if I was doing it wrong or not (not sure how you could do it wrong, since CentOS doesn't provide much room for error). We only use hardware RAID from now on, though; it's just easier to configure, and the extra few dollars when you build the system isn't that much really.

  • A good RAID card + a BBU costs a few hundred :/ (and if the card dies you have to find the exact same card, whereas with software RAID you can put the drives in any machine and it will just work). Anyway, what kind of problems did you have?

  • I'm also curious to hear more about KuJoe's problems with software RAID. I've used software RAID1 for years on local servers. Never had a problem, and yes, I've had drives die :)

    I have those boxes on a UPS with a signalling cable, so they do an orderly shutdown after 10 minutes of no AC power.
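
    For anyone setting that up with apcupsd, the relevant bits of the config look roughly like this (a sketch only; UPSTYPE/DEVICE depend on your UPS and cable, USB is shown just as an example):

        # /etc/apcupsd/apcupsd.conf (excerpt)
        UPSTYPE usb
        DEVICE
        # shut down after 600 seconds (10 minutes) on battery...
        TIMEOUT 600
        # ...or when battery charge drops below 5% or estimated runtime below 3 minutes
        BATTERYLEVEL 5
        MINUTES 3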

KuJoe Member, Host Rep

    Low write speeds were my primary issue. The best I've ever seen with software RAID1 on my servers was 40MB/s (both SATA and SAS drives), whereas I was able to get 70MB/s with fakeRAID and 100MB/s with hardware RAID on the same drives.

    I built 2 servers with exactly the same hardware; on one I used hardware RAID and on the other software RAID. The one with software RAID performed horribly while also having much higher loads during testing (200% higher than the hardware RAID).

    I also used these same 2 servers and built one with fakeRAID, and it still performed better than the software RAID and had lower loads.

    On our old storage server the fastest write speed I could get with software RAID10 was 20MB/s, while with hardware RAID10 it was 91MB/s (although we found out after replacing the whole server that the hardware RAID card was bad and had it replaced under warranty, so those speeds would have been much higher).

    Again, based on everything I've read about software and hardware RAID, the only conclusion I have is that I am doing it wrong, because my results don't reflect the rest of the world for some reason.
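
    For reference, sequential write numbers like these are usually measured with something along these lines (a rough sketch; the test file path is a placeholder, and dd only shows best-case sequential throughput, not random I/O):

        dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync   # write 1GB and flush to disk before reporting speed
        rm /tmp/ddtest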

  • With the drives we use we usually see 120-130MB/s in RAID1. Better yet, with 2x 15K SAS2 disks in RAID1 we're talking 250MB/s or more.

  • I have to admit that we just do daily backups and say the heck with it.

KuJoe Member, Host Rep
    edited December 2011

    @VMPort said: With the drives we use we usually see 120-130MB/s in RAID1. Better yet, with 2x 15K SAS2 disks in RAID1 we're talking 250MB/s or more.

    I guess it all depends on the drives. We have some 10k SAS drives that get about 230MB/s in RAID10, but then we have some 7.2k SATA drives that get 250MB/s in RAID5.

    It's also worth mentioning that the OS and kernel play a big part in write speeds. The latest CentOS 5.7 kernel yields much slower writes than the latest CentOS 6.0 kernel, while the OpenVZ kernel reduces write speeds by about a quarter to a fifth.

KuJoe Member, Host Rep

    @drmike said: I have to admit that we just do daily backups and say the heck with it.

    Daily backups work just fine for non-critical data.
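
    If anyone wants the lazy version, a nightly rsync from cron is about as simple as it gets (just a sketch; the file name, paths and backup host are placeholders):

        # /etc/cron.d/nightly-backup (hypothetical file)
        0 3 * * * root rsync -a --delete /data/ backup@backup.example.com:/backups/node1/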

Infinity Member, Host Rep

    Software RAID on a production box is a no-go zone.

  • @Infinity said: Software RAID on a production box is a no-go zone.

    Why not?

    A lot of good and pretty big providers with good Linux/BSD admins use only software RAID though :)

Infinity Member, Host Rep

    It's not as failsafe in my opinion.

miTgiB Member
    edited December 2011

    @Infinity said: It's not as failsafe in my opinion.

    What? There is no difference; RAID is RAID, whether it's done in software or hardware. Well, software Linux RAID anyway; Windows software RAID is like Amex, you can leave home without it.

Infinity Member, Host Rep

    My opinion isn't backed up by any facts or anything, it's subconscious :P I just don't feel comfortable with it. IDK why lol.

  • Wouldn't software raid use up precious processor power/time?

  • @drmike said: precious processor power

    go run top on any of your machines, even the celery chips, you have nothing but spare CPU cycles

  • @miTgiB said: go run top on any of your machines, even the celery chips, you have nothing but spare CPU cycles

    Agreed, as I run my servers very lightly, but not everybody does.

  • root       25299  7.6  0.0      0     0 ?        S<   Nov22 1649:17 [md2_raid5]
    

    That is a 14-disk RAID6 using less than 8% of a possible 800% CPU.
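
    For anyone who wants to check their own box, something like this shows the md kernel threads and array status (a sketch; the exact command behind the output above isn't shown):

        ps aux | grep '\[md'    # per-array kernel threads and their CPU usage
        cat /proc/mdstat        # array health and any resync progress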

  • While it is true that the processor won't have much more work to do because of software RAID today, the bus (i.e. PCI) will be loaded much more heavily. This can lead to occasional I/O slowdowns. Another disadvantage is that you can't use the hard disks' cache. That being said, I have still only used software RAID (for my private use) so far.
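
    If you want to see whether the disks or the bus are actually the bottleneck, iostat from the sysstat package is a reasonable place to start (just a sketch):

        iostat -x 1    # extended per-device stats every second; watch %util and await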

  • What is the difference for the hard disk's cache between hardware and software RAID? If the power fails unexpectedly, unwritten cache contents (in the HDD's internal cache) may be lost, never mind RAID, no RAID, hardware, software, whatever.

kylix Member
    edited December 2011

    @rds100 said: What is the difference for the hard disk's cache between hardware and software RAID?

    The hardware RAID controller combined with a BBU (or the newer ones with flash cache instead of a BBU, e.g. the Adaptec 5Z series) keeps the content in its own cache. As soon as the system has power again, everything in that cache gets written to the hard disks. Unlike a UPS, the BBU is connected directly to the RAID controller and thus doesn't depend on the PSU, which might be broken because of overvoltage etc.

  • Yes, this works fine for preserving the contents cached in the RAID card. Not the contents cached in the HDD internal cache.

kylix Member
    edited December 2011

    @rds100 said: Yes, this works fine for preserving the contents cached in the RAID card. Not the contents cached in the HDD internal cache.

    That depends on your setup. If you use a hardware RAID controller with either a BBU or flash cache, you disable the hard disks' cache and use the controller's cache for writes.

    Or you use 3ware controllers (e.g. the 9650SE) that have a write journal (called StorSave); they also preserve the content of the hard disk's cache.

  • sleddog Member
    edited December 2011

    Putting the box on a UPS so it does an orderly shutdown on power failure will help protect your data and your HDDs' integrity.

  • @sleddog said: Putting the box on a UPS so it does an orderly shutdown on power failure will help protect your data and your HDDs' integrity.

    That is true. But it won't protect against PSU failures.

  • @kylix said: That is true. But it won't protect against PSU failures.

    Also true :)

    But supplying cleaner power to the PSU can help with PSU lifespan.

  • @kylix said: the bus (i.e. PCI) will be loaded much more heavily.

    How do you justify this? The same data moves over the PCIe bus whether the RAID is done in software or hardware.
