Don't build SSD in RAID5


Comments

  • MaouniqueMaounique Host Rep, Veteran
    edited March 2015

    linuxthefish said: Sorry if this is off topic, but @Maounique won't one SAN thingy for a load of nodes failing be worse than the local storage in one node failing?

    Yes, it would, however, our older SAN has active-active dual controllers, a lot of built-in redundancy, and is under warranty by Toshiba for 99.99% uptime and zero data loss if operated correctly (and they are the only ones allowed to operate it, including installing disks, and only some types of disks). It was 150k.
    Our newer one has everything included, even 24-hour UPSes, has huge redundancies, is guaranteed 100% uptime (even a firmware update won't take it down), and it costs 1 mil.

    Yeah, it may fail, but I believe the chance of that happening is much lower than that of all the nodes' local storage failing in a given time. We offer free and paid backup options, including snapshots, stored in a different storage system. If it fails and people have no back-ups, that won't be our problem; we did everything we could, including huge, costly investments.

    dragon2611 said: SSD prices are falling, capacities are increasing

    Yeah, but this often happens at the expense of reliability. We are going for the MLC ones only, for now. They are expensive and not so big, but much more reliable and fast, IMO.

  • Let me make a simple calculation.

    I need, say, 1TB net. A 250GB SSD is 100$, a 500GB one is 200$ (assumed). As a hoster I need, say, 100 sets.

    With RAID5 I'll use 5 250GB disks to get 1 TB net. -> 5 x 100$ -> 500$
    With RAID1 I'll use 4 500GB disks to get 1 TB net. -> 4 x 200$ -> 800$

    Times 100 (for 100 systems) -> (500 disks) 50.000$ and (400 disks) 80.000$

    Let's assume the failure rate is 5%/year, i.e. 5 disks in 100 disks will go belly up each year. So in my RAID5 setup I'll have to replace 25 disks/year (2.500$) and in my RAID1 setup it's 20 disks (4.000$).
    Let's stupidly assume, prices don't change over time (which is OK, because if they change that will be reflected in both setups) and let's assume I calculate my solution for a lifetime of 4 years (a typical bookkeeping lifetime).

    So, all in all, the RAID5 solution will be 50k$ + 3 x 2.500$ -> 57.5 k$. The RAID1 solution will be 80k$ + 3 x 4.000$ -> 92 k$.

    But, you say, wear-out will be higher in RAID5. OK, let's increase the yearly failure/replacement rate to 15% for RAID5 then. Which makes 50k + 3 x 7.5k -> 72.5 k$

    Turning that into systems I'd arrive at 100 RAID1 systems of 1 TB net for 4 years for 92 k$. Or at 126 RAID5 systems of 1 TB net for 4 years, also at 92 k$.

    Or, you know what, let's go amok and assume that RAID5 SSDs fail 5 times as often as RAID1 ones, OK. Then we arrive at 87.5 k$. Still cheaper than RAID1.

    I think we can agree that, for one and the same amount of money buying either more RAID5 systems or fewer RAID1 systems, we'd choose the solution that brings us more systems to earn money with (or that is cheaper), wouldn't we?
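    The whole calculation above can be condensed into a few lines. This is only a sketch of the poster's own arithmetic; the prices, the 100-system fleet, and the failure rates are all the assumptions stated in the post, not measured data.

    ```python
    # Cost comparison from the post above. All figures are the poster's
    # assumptions: $100 per 250GB SSD, $200 per 500GB SSD, 100 systems,
    # 4-year lifetime with replacements paid in 3 of the 4 years.
    SYSTEMS = 100
    LIFETIME_YEARS = 4

    def total_cost(disks_per_system, disk_price, annual_failure_rate):
        """Initial fleet purchase plus yearly disk replacements."""
        fleet = disks_per_system * SYSTEMS
        initial = fleet * disk_price
        yearly_replacements = fleet * annual_failure_rate * disk_price
        return initial + (LIFETIME_YEARS - 1) * yearly_replacements

    raid5 = total_cost(5, 100, 0.05)       # 5x 250GB -> 1 TB net: 57500
    raid1 = total_cost(4, 200, 0.05)       # 4x 500GB -> 1 TB net: 92000
    raid5_worn = total_cost(5, 100, 0.15)  # 3x wear assumption:   72500
    print(raid5, raid1, raid5_worn)
    ```

    Even under the tripled-wear assumption, the RAID5 fleet comes out at 72.5k$ against 92k$ for RAID1, matching the figures above.
    
    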

  • MaouniqueMaounique Host Rep, Veteran
    edited March 2015

    TBH, the original article does not look very legit.
    Anyway, it is not compared with RAID 10, where there is more writing, as the system keeps two full copies instead of a calculated parity.
    So, I do not understand the issue. Are more frequent small writes worse than larger writes at once? Overall, the write cycles per cell are higher with RAID 10 on average.
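    A back-of-envelope sketch of the per-cell argument above, for small random writes only. The disk counts and the simple two-physical-writes model are illustrative assumptions; real arrays (write-back caches, full-stripe writes, hot spares) behave differently.

    ```python
    # Average physical writes landing on each disk per logical write,
    # for small random writes. Assumptions, not measured behaviour:
    # - RAID 10: data block + mirror copy      -> 2 physical writes
    # - RAID 5 (small write): data + parity    -> 2 physical writes
    #   (plus 2 reads for read-modify-write, ignored: reads don't wear flash)
    def writes_per_disk(physical_writes_per_logical, disks):
        """Spread the physical writes evenly over the array's disks."""
        return physical_writes_per_logical / disks

    raid10 = writes_per_disk(2, 4)  # 4-disk RAID 10 -> 0.5 per disk
    raid5 = writes_per_disk(2, 5)   # 5-disk RAID 5  -> 0.4 per disk
    print(raid10, raid5)
    ```

    Under these assumptions the wider RAID 5 array spreads the same write load over more disks, which is one way to read the point about average cycles per cell.
    
    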
