
How many hosts are using RAID 1?

Kairus Member
edited November 2011 in General

Out of curiosity, for those who are interested in sharing: how many of you use RAID 1 for your VPSes? Or, for those who have experience with it, do you believe it's viable, assuming you are not loading the node with an insane number of clients?

Is RAID 10 really worth it, with regard to the cost? Wouldn't it in some cases be cheaper to just go with RAID 1, put fewer clients on that machine, and put the savings towards another node?
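
For a rough sense of the trade-off being asked about, here is a back-of-envelope sketch; the drive size, price and per-spindle throughput below are placeholder assumptions, not figures from this thread:

    # Back-of-envelope comparison of a 2-disk RAID1 node vs a 4-disk RAID10 node.
    # DISK_TB, DISK_PRICE and DISK_WRITE_MB_S are assumed placeholder values.
    DISK_TB = 2.0           # capacity per drive, TB
    DISK_PRICE = 80.0       # price per drive (assumed)
    DISK_WRITE_MB_S = 70.0  # sustained sequential write per spindle (assumed)

    def summarize(name, disks, write_factor):
        usable = disks * DISK_TB / 2          # both levels mirror, so half is usable
        cost = disks * DISK_PRICE
        write = DISK_WRITE_MB_S * write_factor
        print(f"{name}: {usable:.0f} TB usable, ~{write:.0f} MB/s write, "
              f"{cost / usable:.0f} per usable TB")

    summarize("RAID1  (2 disks)", 2, 1)  # writes limited to one mirror pair
    summarize("RAID10 (4 disks)", 4, 2)  # writes striped across two mirror pairs

With these assumed numbers the cost per usable TB comes out the same; what RAID10 buys is more throughput and more capacity per node, which is exactly the trade-off against simply running more RAID1 nodes.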


Comments

  • @Kairus said: Wouldn't it in some cases be cheaper to just go with RAID 1, and put less clients on that machine, and put the savings towards another node?

    This is a valid question, but I do not think RAID1 is a valid option for a host node at any time while providing VPS hosting. I/O is going to be the first bottleneck, even with RAID10, so why even try with RAID1? What you use to manage the system will also play a part in this choice: I use SolusVM, which has a per-node charge, so the more VPSes I put on a node, the less it costs per VPS, while with other management systems that charge per VPS, smaller nodes would be a viable option.

    Oh, and I think it is obvious from my comments: I only use RAID10 on my VPS nodes (with the exception of the backup VPS offer, which uses RAID6).
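
    A tiny illustration of the per-node licence point above; the fee used here is a placeholder, not SolusVM's actual pricing:

    # The same per-node licence fee spread over more VPSes = lower cost per VPS.
    NODE_LICENCE = 10.0  # assumed monthly per-node fee (placeholder only)
    for vps_count in (20, 50, 100):
        print(f"{vps_count} VPSes/node -> {NODE_LICENCE / vps_count:.2f} per VPS in licensing")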

  • Interesting, it definitely seems tough to break into the VPS market without having a large amount of money starting out, especially going with RAID 10 on every server. I wonder how some of these hosts manage at the prices they offer (with few customer complaints).

  • KuJoe Member, Host Rep
    edited November 2011

    We use RAID1 on all of our blade servers because they only support 2 drives. That being said, we are moving to rackmount servers and RAID10 because, while we are very happy with our current setup, we feel that 50-70MB/s write speeds are not acceptable these days.

    We will continue using our blades for the time being for some of our clients though. I know people see 70MB/s write speeds and open a ticket complaining about them, but our goal is stability, not performance. With our current track record I feel we're doing pretty well (even our "full" servers are getting 50MB/s, which is not bad at all IMO; now if we decided to oversell we'd see some problems).

    Thanked by 2: Mon5t3r, linuxthefish
  • kiloserve Member
    edited November 2011

    @Kairus said: Is RAID 10 really worth it

    I am sure there are quite a few LEB providers who use RAID1.

    Use what you can afford; there's no point in over-exerting yourself to get to RAID10 and not have enough money left to pay the bills.

    As long as you don't overload the node too much, you'll get average or a little below average disk results which may be good enough if your price is low enough.

    There seem to be users who want 100MB/s+ speeds, those who want 50MB/s, and those who are OK with 30MB/s. RAID1 will typically fall in the 30-50MB/s range.

    And then you have those disk I/O benchmarks posted on LEB that kind of make you feel bad when they're stacked up against others; that will also make your offerings less attractive and may hurt your business if the numbers are too low.

    Truthfully, 50 MB/s is good enough for most LEB uses but the higher numbers are just prettier so people like to gauge by the numbers rather than practical usage.

    RAID10 is certainly the better choice but only if you can work it into your budget. Some clients don't care about disk I/O as long as it works ok but at the same time, there are quite a few that just look at the numbers.

    Thanked by 1: Mon5t3r
  • @Kairus
    If your system gets quite loaded and the disk queues go through the roof, a RAID1 setup will work much harder and you will have more disk failures. Just something to keep in mind.

  • Mon5t3r Member
    edited November 2011

    I agree @kiloserve :D

    edit: and @KuJoe :P

    Until now I've been feeling very bad about offering our NL node here on LEB/LET, but I really didn't want to end up with only 4TB of disk space alongside 16GB of RAM if I chose RAID10 (4 x 2TB). But anything below 20MB/s is not good either, so we are trying to keep performance up by not overselling the resources.

  • jh Member

    RAID10 is definitely worth it for larger servers, hardware permitting.

  • That's an interesting question.

    Would RAID 10 really give twice the I/O?
    Anyway, how many (not too heavy) customers can fit on a RAID 1? And on a RAID 10?

    Providers, from your experience/testing, what did you conclude?

    Thx :)

  • It's not about how many you can fit on, it's about what's being used and when. If you had a node full of users that rarely even touch the box you could, for instance, fit on 100, whereas if they were all heavy users it would be 50. Those are not actual figures though, just an example of how it varies depending on the users.

    As a general rule we don't go by numbers; we just check the performance at peak times daily, to make sure things are running smoothly.

    So overall, it's a question of resource usage and not how many containers you can put on such and such hardware. Monitor your servers closely, and my advice is: if at peak times you're touching max, stop putting containers on even if resources are underutilized through the day. The peak times are what you should base things on.
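
    For a rough feel of @bobinfo's "twice the I/O" question, here is a sketch with assumed figures (roughly 100 random IOPS per 7200rpm spindle and a made-up per-VPS budget); real capacity still depends entirely on how the users behave:

    # Why 4 mirrored disks give roughly twice the write budget of 2.
    # SPINDLE_IOPS and PER_VPS_IOPS are assumed placeholder figures.
    SPINDLE_IOPS = 100   # ~random IOPS of one 7200rpm SATA drive (assumed)
    PER_VPS_IOPS = 5     # assumed budget for a "light" VPS

    def write_iops(disks):
        # Every write lands on both halves of a mirror pair, so the usable
        # write IOPS is roughly half the raw spindle total.
        return disks * SPINDLE_IOPS / 2

    for name, disks in (("RAID1  (2 disks)", 2), ("RAID10 (4 disks)", 4)):
        budget = write_iops(disks)
        print(f"{name}: ~{budget:.0f} write IOPS, ~{budget / PER_VPS_IOPS:.0f} light VPSes")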

  • Ash_Hawkridge Member
    edited November 2011

    The disks play their part as well. I don't mind admitting one of our smaller UK nodes uses RAID1. But with 2x 15K SAS2 disks, DD results are in the 180-200MB/s zone, which is no different to the results I see on 4-disk RAID10 configs.

    You just have to put a bit more money in for the 15K SAS2 disks :P

  • bobinfo Member
    edited November 2011

    Sure, it depends on user activity, but I guess there are some average numbers?

    Couldn't anybody give average numbers so that we can have an idea?

    Anyway, do you have an automated system to check I/O, @VMPort, or do you 'dd if..' on each of your nodes every day? :)

  • There's also (I think) a statistically higher chance of catastrophic failure with RAID 1 compared to RAID 10.

  • KuJoe Member, Host Rep

    @VMPort has a good point. We use RAID1 on all of our blades, but we're also using SAS drives. We're going to be moving a handful of the same drives to our new servers and running them in RAID10, so we'll report back on the performance increase for the exact same drives in new hardware (more and better CPUs, same RAM, and better hardware RAID).

  • We use RAID10 on all of our nodes and we see quite a noticeable performance gain. Until recently the cost of hard disks has been very low, so unless you are renting the servers from someone else I don't think there has been a great deal of difference in cost between RAID1 and RAID10, as most hardware RAID controllers we have looked at (mainly Adaptec) support RAID1 and RAID10 even on the basic models.

    However, with the recent floods in Thailand and the reduced hard disk manufacturing, implementing a RAID10 setup is now very expensive compared to a RAID1 setup. We have seen prices quadruple, so the same disks we were buying a month ago are now just over £200, expected to rise to about £350 by the time manufacturing restarts.

  • Sure. For those who need drives, it's better to hurry up; it's still possible to find some interesting prices on eBay...

    Well, it will depend on whether it restarts in 2 months or in 6... :-)

  • @sleddog

    I can't see any logic behind that, since RAID10 is just a combination of RAID1 mirroring and RAID0 striping. Contrary to some people's beliefs, any level of RAID can become corrupt.

    The way I think of it is, if you buy 4 new drives at the same time and stick them all in a RAID config, when they die of old age they will usually all start packing in around the same time.

    @KuJoe
    Are you using 15K as well? It'd be interesting to see some comparison; give me a moment and I will do a quick DD test to see what the machine I was talking about is running at the moment.

    @bobinfo
    Nah, no automation unfortunately. I just have a VPS on each node that we have, some of them in use for stuff such as hosting the VMPort forum and some of them just sat idle. I log in to each maybe once a week (or less, if we have had a lot of orders at once) and perform the usual checks: Cachefly speed test/DD/IOPS.

  • "The way i think of it is if you buy 4 drives new at the same time and stick them all in a RAID config and they die out due to old age, they will usually all start packing in at the same time."

    That's why, even if you have 4 drives in your RAID array, if one fails you have to hurry up and replace it :)

    For I/O, ok, thanks.

    I'd assume there should be a way to automate that and have it show up in some kind of MRTG graph. At the same time it wouldn't make a lot of sense to dd the machine every day or a few times a day; it would just be painful. Why use the machine's I/O for something unneeded?

    Isn't there any good way to monitor I/O of a node without having to go through the dd test?

  • Sure it makes sense. I'm just doing what any other client/LEB user would be doing; why automate such a thing when I can experience exactly what my clients are experiencing?

    I could easily monitor I/O within SolusVM, I just choose not to.
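
    For anyone who does want node-level numbers without running dd, one low-impact option is to sample /proc/diskstats and take the delta; a minimal sketch (the device name and interval are just examples):

    # Low-impact I/O sampler: read /proc/diskstats twice and report the
    # read/write throughput of one device over the interval.
    import time

    DEV = "sda"    # example device name
    INTERVAL = 5   # seconds between samples
    SECTOR = 512   # /proc/diskstats counts 512-byte sectors

    def sectors(dev):
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == dev:
                    # field 5 = sectors read, field 9 = sectors written
                    return int(fields[5]), int(fields[9])
        raise SystemExit(f"device {dev!r} not found")

    r1, w1 = sectors(DEV)
    time.sleep(INTERVAL)
    r2, w2 = sectors(DEV)

    print(f"read:  {(r2 - r1) * SECTOR / INTERVAL / 1e6:.1f} MB/s")
    print(f"write: {(w2 - w1) * SECTOR / INTERVAL / 1e6:.1f} MB/s")

    Run from cron, this measures what the node is actually doing rather than generating synthetic load the way a dd test does.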

  • The proof is in the pudding. 2x 15K SAS2 disks in RAID1, 35 VPS servers on there, and here are the results...

    root@vmport:~# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.98297 s, 270 MB/s

  • bobinfo Member
    edited November 2011

    It makes sense to do it manually from time to time, but would it make sense to do that every 5 minutes (typical MRTG) or every few hours automatically? I don't think so; it would add unnecessary load IMHO.

    At the same time, that load wouldn't last long and it would be a good pointer for managing the nodes well...

    Impressive results on your node though :-)

  • @VMPort said: I can't see any logic behind that, since RAID10 is just a combination of RAID1 mirroring and RAID0 striping.

    Say we have a two-disk RAID1 and a 4-disk RAID10...

    A drive fails. No problem for either array.

    A second drive fails (before the first failed one is replaced).

    For the RAID1 this is catastrophic.

    For the RAID10, there's only a one-in-three chance that it's catastrophic (the second failure has to be the first failed drive's mirror partner).

    I could be wrong on this :)
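
    A quick enumeration bears that out; the a1/a2 and b1/b2 labels below are just placeholders for the two mirror pairs:

    # Enumerate every ordered pair of drive failures and count how often
    # the second failure takes out a whole mirror pair (i.e. the array).
    from itertools import permutations

    def fatal_fraction(mirror_pairs):
        disks = [d for pair in mirror_pairs for d in pair]
        fatal = total = 0
        for first, second in permutations(disks, 2):
            total += 1
            # The array is lost only if both halves of one mirror pair have died.
            if any(set(pair) <= {first, second} for pair in mirror_pairs):
                fatal += 1
        return fatal / total

    print("RAID1  (one pair): ", fatal_fraction([("a1", "a2")]))                # 1.0
    print("RAID10 (two pairs):", fatal_fraction([("a1", "a2"), ("b1", "b2")]))  # 0.333...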

  • netomx Moderator, Veteran

    Kazila:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 4.22807 seconds, 254 MB/s



    Cripperz:

    5 minutes and it hasn't even finished....
    But the price difference is huge.... so, no worries

  • Oh no, I only do it once a week per node; just a quick test is all :)

    Thanks, that result actually surprised me; it's dealing with it even better than I thought.

  • netomx Moderator, Veteran

    Update: I stopped the script:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    12302+0 records in
    12302+0 records out
    806223872 bytes (806 MB) copied, 370.102 seconds, 2.2 MB/s

  • @sleddog

    You're right, ignore me. Too many hours, not enough coffee.

  • @netomx said: Kazila

    Xen PV, right?

    @netomx: that's the cheap yearly plan on RAID 1?

    I wonder how many customers he put on his servers :D

  • netomx Moderator, Veteran

    @bobinfo Yes, and about the yearly plan, I don't know if it is on RAID 1, but with that speed.... I just use it for backups.

  • @netomx Ok, thanks :)

    @VMPort, well I guess that once a week is enough if the nodes are not overcrowded anyway, even if a user is pretty heavy.

    Do you put all the heavy users on the same node, or some here and some there?

  • We put `em wherever there is room :P

    Thanked by 1: JustinBoshoff