31mb/sec in the DD 'test': acceptable to you?

Comments

  • @SimpleNode said: can't fit any more into a 1U server.

    1U 8bay 2.5"
    image

  • Maounique Host Rep, Veteran
    edited November 2012

    Doesn't beat 24 in 2U :)

  • serverbear Member
    edited November 2012

    Seeing a lot of IO tests in the 30 MB/s range in the last day (so many hosts seem to find this number acceptable); you've started a trend, Damian. Low IO is the new hipster craze.

    [image: meme]
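
    For reference, the dd 'test' these numbers come from is usually some variation of the following (a sketch of the common invocation seen in these threads; exact block size and count vary from post to post):

    # Sequential-write test: writes ~1GB of zeros; conv=fdatasync makes dd wait
    # for the data to actually reach the disk before reporting a speed.
    dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync
    rm -f testfile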

  • @Maounique said: Doesn't beat 24 in 2U :)

    Either you linked to the wrong case or I'm misreading it, but that isn't 24 bays in 2U; that's 4 nodes with 6 bays each. That case is available as a single-server case for something like $1500, and 2.5" enterprise drives run 40-60% higher, so no startup is going to use that option. Hell, I won't use that option, but I have gone to something higher density: a 12-bay 2U, and I velcro 2 SSDs inside the case ;)

  • Yeah, those are rather exotic server chassis.

    2.5" drives, hmmm. Folks are using them ehh? Have to with those units.

    Don't mind me, I think these server designs are a total mess and just wrong. Been bitching about them for years.

    Half- or quarter-length cases. Why don't they make the case full length, with hot-swap bays out front and internal storage bays inside? You could fit way more drives in the same rack space.

    Not many places in the colo biz are back-to-back racking these, meaning people are paying for 1U of space and using only half of the depth :(

    Figured VPS folks have to be racking 2U and larger cases to get any real drive space. Or renting servers that are tower configs.

  • @pubcrawler said: Not many places in the colo biz are back-to-back racking these, meaning people are paying for 1U of space and using only half of the depth :(

    Power/heat

  • @miTgiB, you are my hero. I thought I was the only one velcroing and improvising drives into units :)

    I shipped a 1U with a bunch of 1.8" SSDs in it a while back and got read the riot act by the traditionally minded folks. It was do that or send two servers: one with spinning drives and one with SSDs.

    Think the board I was working with had 6 SATA channels on it but only 2 drive bays :) Funny what some companies ship.

  • @miTgiB is right on about the back-to-back rack heat issues. Depends on how they run their cooling and aisles. These servers were designed to be racked back to back :)

  • @pubcrawler said: @miTgiB, you are my hero. I thought I was the only one velcroing and improvising drives into units :)

    The only way to save costs :)

  • Maounique Host Rep, Veteran

    @miTgiB said: Either you linked to the wrong case or I'm misreading it, but that isn't 24 bays in 2U; that's 4 nodes with 6 bays each.

    It was the drive density per 1U that was being talked about. It is 12 bays per 1U in that case; the fact it has 4 nodes inside is a bonus :P
    Uncle doesn't like the velcro approach, he is exactly my opposite: I "tuned" a 2-bay 1U into hosting 1 extra 3.5" drive belly-up inside :)
    Those cost a lot of money, indeed. For a start-up, second-hand (SH) equipment is better IMO.
    M

  • @pubcrawler said: I thought I was the only one velcroing and improvising drives into units :)

    My current build: dual E5-2620, SC-826 with 12x 1TB SAS2, then 2 SSDs velcroed on the inner wall as you can see; add in an LSI 9266-4i, an Intel SAS expander, and 128GB of RAM for extra cache.
    [image: photo of the build]
    SuperMicro offers that case in a model with rear hot-swap bays, but 99 cents of velcro is fine for me.

  • pubcrawler Banned
    edited November 2012

    @miTgiB, yep, looks like my handiwork :)

    They really need to do something with drive cables. Such a PITA to shimmy them into these cases. You have quite the heap there to wire up. Looks like 16 controller channels total.

    Aside from cost, you could pack quite a few SSDs in there :) I see plenty of squirrel stash places in that case :)

  • @pubcrawler said: Aside from cost, you could pack quite a few SSDs in there :) I see plenty of squirrel stash places in that case :)

    Two is plenty for providing read-only cache, and this build came to about $5500, so 2 more SSDs wouldn't add much. I like how Norco uses SFF-8087 on their cases, but their build quality is horrid.

  • @serverbear said: Low IO is the new hipster craze.

    I am a trendsetter!

  • Acceptable to me? Sure.

  • Maounique Host Rep, Veteran
    edited November 2012

    Acceptable to me too, but it must be a really good deal.
    30 MB/s is my lower limit for a paid product.

  • @Damian Is that OpenVZ and do you get any fluctuations in your I/O speed?

  • Terribad meme, ServerBear, sorry.

    That said, 30MB/s is just fine for me; I won't stand for that 900ms ioping, though.

  • Don't think I've been on a VPS that had lower than 90MB/sec; most of the current personal ones I have are up over 200. Though I don't need that throughput most of the time, 30 would suggest something is going on with the drive's resources.

  • We are using 12x HDDs in a RAID10 setup + 2x SSDs for the OS.
    I am testing 24x in a RAID10 setup here, nice :)

    Use iozone for better benchmarks than ioping or dd.

  • @fileMEDIA,

    What flags do you recommend for iozone at run time to "emulate" the tests we see done with ioping and dd?

    iozone is a new tool to me. :) Thanks for the recommendation!

  • You can use automatic mode: iozone -a

    Good documentation with all commands: http://www.iozone.org/docs/IOzone_msword_98.pdf
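
    A more targeted run (a sketch only; flags taken from the iozone documentation linked above, not quoted from this thread) that roughly lines up with the usual dd/ioping-style tests might be:

    # 1GB test file with 4k records; -i 0/1/2 = write/rewrite, read/reread, random read/write.
    # -e includes fsync in the timing and -I requests O_DIRECT so the page cache
    # doesn't inflate the numbers.
    iozone -e -I -s 1g -r 4k -i 0 -i 1 -i 2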

  • @fileMEDIA said: Use iozone for better benchmarks than ioping or dd.

    Do you have any examples of output?

  • I recall seeing a thread before where @prometeus suggested iozone too. Any usage tips @prometeus?

  • 30MB/s is fine but latency is not. Here is a sample from a VPS on a fully loaded node:

    [root@ioping ioping-0.6]# ./ioping -c 25 .
    4096 bytes from . (simfs /vz/private/511): request=1 time=18.3 ms
    4096 bytes from . (simfs /vz/private/511): request=2 time=0.6 ms
    4096 bytes from . (simfs /vz/private/511): request=3 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=4 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=5 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=6 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=7 time=0.3 ms
    4096 bytes from . (simfs /vz/private/511): request=8 time=0.3 ms
    4096 bytes from . (simfs /vz/private/511): request=9 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=10 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=11 time=0.6 ms
    4096 bytes from . (simfs /vz/private/511): request=12 time=2.8 ms
    4096 bytes from . (simfs /vz/private/511): request=13 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=14 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=15 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=16 time=0.3 ms
    4096 bytes from . (simfs /vz/private/511): request=17 time=147.2 ms
    4096 bytes from . (simfs /vz/private/511): request=18 time=4.2 ms
    4096 bytes from . (simfs /vz/private/511): request=19 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=20 time=0.3 ms
    4096 bytes from . (simfs /vz/private/511): request=21 time=0.3 ms
    4096 bytes from . (simfs /vz/private/511): request=22 time=5.4 ms
    4096 bytes from . (simfs /vz/private/511): request=23 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=24 time=0.2 ms
    4096 bytes from . (simfs /vz/private/511): request=25 time=0.2 ms

    --- . (simfs /vz/private/511) ioping statistics ---
    25 requests completed in 24185.9 ms, 136 iops, 0.5 mb/s
    min/avg/max/mdev = 0.2/7.3/147.2/28.8 ms
    [root@ioping ioping-0.6]#

  • @Corey said: I'm not sure how they are mediocre for losing disk IO when their disk fails is what I'm getting at. Should every startup have 8-12 disk RAID 10 arrays at the start? Are they stupid and mediocre for not doing this?
    @Corey said: What I'm getting at is that you don't know their prior experience with server administration when they first start their business. People generally learn from their mistakes. Startups will make mistakes.

    Here is another place where we disagree. You call it a mistake, I call it a CHOICE.

    Any new host getting into the business should be doing their research. Just LEB and LET alone have plenty of good information/arguments/discussions on what type/size of array a host should be using if they want to avoid their node being slow as molasses if a disk fails or if they want to put lots of VPSs on it.

    You keep talking about this massive/unlimited/#winning budget that is required to do things right, when the reality is it doesn't cost that much more. There are plenty of great new hosts that do everything right and start out with the right hardware, and even those that have figured out ways to do it very inexpensively. Thus, I don't have any sympathy for a new host that is going to try to charge the same prices as everyone else but put their customers on hardware that is going to leave them screwed for days if a disk dies as well as just not performing very well.

  • @JoeMerit said: Here is another place where we disagree. You call it a mistake, I call it a CHOICE.

    Any new host getting into the business should be doing their research. Just LEB and LET alone have plenty of good information/arguments/discussions on what type/size of array a host should be using if they want to avoid their node being slow as molasses if a disk fails or if they want to put lots of VPSs on it.

    When we started up we didn't even know about WHT/LowEndBox. So you are also assuming that everyone has access to these resources when they first start up. Research cannot give you real-world results for your particular case, so I disagree that research is going to give you all of the knowledge you need.

    You keep talking about this massive/unlimited/#winning budget that is required to do things right, when the reality is it doesn't cost that much more. There are plenty of great new hosts that do everything right and start out with the right hardware, and even those that have figured out ways to do it very inexpensively. Thus, I don't have any sympathy for a new host that is going to try to charge the same prices as everyone else but put their customers on hardware that is going to leave them screwed for days if a disk dies as well as just not performing very well.

    You state that some hosts have figured out how to do this inexpensively... what would be the reason for trying to find out how to do something inexpensively at first if you had a #winning budget? When people start up they don't have the resources or knowledge seasoned providers have, and there is no way they can gain this knowledge without experience - I don't care what you say.

    Hell - we even asked our first colocation provider for assistance in configuring our first nodes, and they led us down the wrong path knowing 100% what we were doing. We are still trying to phase out the nodes we started out with, and we are a very small company. It is not easy for a startup to redo everything once they get set on something. It's not easy to move providers either, if the provider is the issue.

  • Don't think, folks, that any provider should be expected to be anything special.

    Plenty of boastful providers who have oversold boxes with massive issues.

    Resources always matter, but these are shared resources. I never expect full performance of a server, because I am not paying for that.

    I care about IOWAIT more than anything and how that can/will/has dragged down the VPS performance. Most of the time, slow disk shouldn't matter much in my use.

    But it's an almost universal rule that when my VPS nodes are lagging, disk IO is the culprit.

    Folks should do what they can to eliminate the IO bottleneck. Pushing core stuff where feasible to SSDs is one method and SSD caching is another.

    The days of spinning drives are numbered. They're good for bulk storage, but for a few bays, SSDs are the rule now, unless you need the storage density.
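
    A quick way to confirm that suspicion (assumed tooling, not something mentioned in this thread) is to watch iowait and per-device latency while a node is lagging:

    # From the sysstat package: %util and await climbing together points at a disk bottleneck.
    iostat -x 1 5
    # The "wa" column is CPU time stuck waiting on IO.
    vmstat 1 5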
