When is it okay to complain about disk I/O? - Page 2
Comments

  • kiloserve Member
    edited October 2011

    justinb said: 7 requests completed in 6108.5 ms, 8986 iops, 35.1 mb/s min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms

    Excellent!

    The thing to remember is that ioping is a read test. Write tests are also important.

    However, with your reads that scorchingly fast, I'm sure your writes will also be quite nice.

  • min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms
    This shows that everything was read from memory (cached). No disk seeking was involved at all. No rotating disk can seek in 0.1 ms.
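    The cache effect is easy to demonstrate with dd alone. A sketch, assuming GNU dd on Linux (`iflag=direct` needs a filesystem that supports O_DIRECT, so it can fail on tmpfs):

```shell
# Create a 64 MiB test file, flushed to disk before dd reports.
dd if=/dev/zero of=ctest bs=64k count=1k conv=fdatasync 2>/dev/null

# Cached read: the file just written is still in the page cache,
# so this typically reports memory speed, not disk speed.
dd if=ctest of=/dev/null bs=64k

# Direct read: O_DIRECT bypasses the page cache and exercises the device.
dd if=ctest of=/dev/null bs=64k iflag=direct

rm -f ctest
```

    The cached read routinely comes back an order of magnitude faster than the direct one, which is exactly why a 0.1 ms min/avg/max tells you nothing about the disk underneath.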

  • @Yomero
    That's poor :P

  • justinb Member
    edited October 2011

    kiloserve said: The thing to remember is that ioping is a read test. Write tests are also important. However, with your reads that scorchingly high, I'm sure your writes will also be quite nice.

    I get this thing where "touch x" takes nearly 10 seconds to respond, but "cat x" is instant... is there a write test for ioping?

  • I haven't done any ioping tests before. If my VPSes don't feel slow, I don't care. Here are my results though:

    This box is sat idle most of the time, and is idle right now.

    QualityVPS OpenVZ
    10 requests completed in 9005.3 ms, 3137 iops 12.3 mb/s
    min/avg/max/mdev = 0.1/0.3/0.7/0.2 ms

    This next VPS is under low to moderate cpu load, and low disk load.

    Veeble OpenVZ
    10 requests completed in 9037.7 ms, 321 iops 1.3 mb/s
    min/avg/max/mdev = 0.3/3.1/20.9/6.2 ms

    The host of the next two is running at 100% CPU load right now, encoding video. Moderate disk load, but on another drive iirc.

    VM at home running under virtualbox
    10 requests completed in 9012.6 ms, 4165 iops 16.3 mb/s
    min/avg/max/mdev = 0.2/0.2/0.4/0.1 ms

    another VM on the same host as above, virtualbox also
    10 requests completed in 9020.1 ms, 3014 iops 11.8 mb/s
    min/avg/max/mdev = 0.2/0.3/0.8/0.2 ms

    This VPS from Thrust is sat idle right now.

    ThrustVPS OpenVZ
    10 requests completed in 9008.6 ms, 6238 iops 24.4 mb/s
    min/avg/max/mdev = 0.1/0.2/0.2/0.0 ms

    I don't intend to use this box for anything much, since onedollarvps/hostsign are clearly scammers. Ignoring support tickets, promising people account credits for downtime that never gets applied...

    onedollarvps OpenVZ
    10 requests completed in 9192.9 ms, 54 iops 0.2 mb/s
    min/avg/max/mdev = 0.4/18.5/45.1/14.1 ms

    I bought the following box because it was so cheap. I'm considering using it to replace the Veeble box. Still not sure.

    VML IT OpenVZ
    10 requests completed in 9031.7 ms, 2266 iops 8.9 mb/s
    min/avg/max/mdev = 0.1/0.4/3.6/1.0 ms

    This box is sat idle, it's just used for IRC bncs etc. Little to no disk IO goes on so the bad drive performance doesn't bother me. Uptime is all that matters. It is the worst performing box I have though.

    QualityVPS XEN
    10 requests completed in 9525.5 ms, 20 iops 0.1 mb/s
    min/avg/max/mdev = 0.2/50.2/181.3/56.2 ms

  • Ohhh what a big collection you have :P

  • I'm sure there are people with way more than me. ;)

    2 of those are hosted at home, so not VPS in the traditional sense.

  • Infinity Member, Host Rep

    Gary said: I'm sure there are people with way more than me. ;)

    Very true. I know people with 30+ VPS's.

  • Infinity said: Very true. I know people with 30+ VPS's.

    They need vps anonymous man lol.

  • @AuroraZ
    Lmao, good one :P

  • kiloserve Member
    edited October 2011

    justinb said: I get this thing where "touch x" takes nearly 10 seconds to respond, but "cat x" is instant... is there a write test for ioping?

    That is odd indeed.

    ioping is a read-only test. As the name suggests, it's basically a read ping on files: much like ping, it gives you the response time it takes to access that file.

    It is fairly accurate, but can be fooled by smart disk caching: since it is a read-only test, if the file has already been read and cached, the reported numbers will be artificially good.
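    (An aside for anyone reading later: newer releases of ioping did grow a `-W` flag for write pings and a `-D` flag for direct I/O that sidesteps the page cache; check `ioping -h` to see whether your packaged version has them. A sketch, assuming a recent ioping:)

```shell
# Read latency, bypassing the page cache via O_DIRECT.
ioping -D -c 10 .

# Write latency. Safe when pointed at a directory, where ioping
# uses its own temporary file; -W against a raw block device
# overwrites data, so never aim it at /dev/sdX you care about.
ioping -D -W -c 10 .
```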

    The standard write test is the dd one which basically writes a sequential file of a size you determine.

    Example:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.04994 seconds, 352 MB/s

    bs = block size. Many use a 64k block size as that's the typical RAID stripe size. You can raise the count to 32k if you want a 2GB file, or lower it for a smaller file.
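    The arithmetic behind those numbers: bs=64k × count=16k is 65536 × 16384 bytes = 1073741824 bytes, the 1 GiB shown in the dd output above. A smaller variant for boxes where writing a full gigabyte would be antisocial:

```shell
# 64 KiB blocks * 1024 blocks = 67108864 bytes = 64 MiB.
# conv=fdatasync makes dd flush to disk before printing its summary,
# so the MB/s figure reflects the disk rather than the page cache.
dd if=/dev/zero of=ddtest bs=64k count=1k conv=fdatasync
rm -f ddtest
```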

    The ending "xxx MB/s" is the number you use to gauge your performance.

    Some have criticized the dd test because it measures sequential writes as opposed to random writes. However, in my experience you'll find random writes to be pretty much in the same performance ballpark as the sequential writes.

    For example, if you get great sequential writes, your random writes will also be great. If you get average sequential writes, you will get average random writes. And if you get poor sequential writes, your random writes will also be poor.

    If you want a more in-depth examination of your disk I/O, you can take the time to compile Bonnie++. But the truth is Bonnie++ will pretty much tell you the same as dd and ioping, just taking a lot longer to get more exact numbers along with random tests.
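    For the curious, the sequential-vs-random gap can be probed without compiling anything, using dd alone. This is a crude sketch (fio is the proper tool for random I/O workloads), rewriting synced 4 KiB blocks at scattered offsets inside a preallocated file:

```shell
# Preallocate a 16 MiB file (4096 blocks of 4 KiB).
dd if=/dev/zero of=randtest bs=4k count=4096 2>/dev/null

# Rewrite 32 blocks at scattered offsets, syncing each write.
# oflag=dsync forces every 4 KiB write to disk, which exposes seek
# latency on rotating media; conv=notrunc keeps the file size intact.
for i in $(seq 1 32); do
  off=$(( (i * 997) % 4096 ))   # simple deterministic scatter, plain-sh safe
  dd if=/dev/zero of=randtest bs=4k count=1 seek=$off conv=notrunc oflag=dsync 2>/dev/null
done

rm -f randtest
```

    On a healthy array the per-write times here land in the same ballpark as the sequential dd result, which is the correlation being described above; on a thrashing host node they blow up first.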

    Thanked by 2: Ash_Hawkridge, jamson
  • kiloserve just served up some good advice right there.

  • sleddog Member
    edited October 2011

    kiloserve said: For example, If you get great sequential writes, your random writes will also be great. If you get average sequential writes, you will get average random writes. If you get poor sequential writes, your random writes will also be poor.

    But sequential disk access should not be the sole measure of the "quality" of a hosting provider. Yet sometimes -- oftentimes -- it is treated as exactly that:

    Test #1 at Provider A shows 200 MB/sec.

    Test #1 at Provider B shows 50 MB/sec.

    Therefore Provider A is better.

    False.

  • danielfeng Member
    edited October 2011

    sleddog said: Test #1 at Provider A shows 200 MB/sec.

    Test #1 at Provider B shows 50 MB/sec.

    Therefore Provider A is better.

    False.

    Test #1 at Provider A (initially when it's not overselling): 200MB/s.

    Test #2 at Provider A (later when it's overselling): 50MB/s.

    Therefore sequential disk access DOES reflect the quality of THAT hosting provider, in this case.

    True.

  • kiloserve Member
    edited October 2011

    @VMPort: You are too kind :)

    sleddog said: But sequential disk access should not be the sole measure of the "quality" of a hosting provider. Sometimes -- oftentimes -- it is:

    Test #1 at Provider A shows 200 MB/sec.

    Test #1 at Provider B shows 50 MB/sec.

    Therefore Provider A is better.

    False.

    I agree it is not a whole measure of quality of a host. I'm only speaking on disk access.

    With that much variance of 200MB/s vs 50MB/s, the 200MB/s will definitely have much better speed.

    The grey area is when it's close.

    If you have 70MB/s vs 50MB/s you can't definitively say which one is faster.

    But between something like 50MB/s vs 200MB/s (a 150 MB/s difference) you don't need additional info on disk speed; the 200MB/s box will have faster disk speed without even wasting time running ioping or Bonnie++.

  • danielfeng said: Test #1 at Provider A (initially when it's not overselling): 200MB/s.

    Test #2 at Provider A (later when it's overselling): 50MB/s.

    Therefore sequential disk access DOES reflect the quality of THAT hosting provider, in this case.

    True.

    Agreed !

    When a host shows wild variance like that it's a warning. Next week it might be 50 KB/s.

  • kiloserve said: I agree it is not a whole measure of quality of a host. I'm only speaking on disk access.

    I think you're speaking of sequential disk write speed (correct me if I'm wrong). Not general disk access speed. Most servers read far more than they write, and it isn't correct (IMHO) to judge solely on sequential write speed.

    But you work in the industry, and are probably far more experienced than I am, so I won't argue any more with your conclusions :)

  • kiloserve Member
    edited October 2011

    sleddog said: I think you're speaking of sequential disk write speed (correct me if I'm wrong). Not general disk access speed. Most servers read far more than they write, and it isn't correct (IMHO) to judge solely on sequential write speed.

    Yes, I'm speaking of sequential writes. However, random writes and sequential writes correlate quite well.

    For example:

    Server A --> Sequential Write Speed = 200MB/s
    Server B --> Sequential Write Speed = 50MB/s
    

    Without even looking at a Random write benchmark I can say:

    Server A is faster than Server B every time, even on Random writes.

    There's just too much disparity there. 200MB/s is extremely fast and difficult to achieve on a loaded VPS server. How many times do we actually see a 200MB/s+ on a production VPS server? It's very rare because 200MB/s is so difficult to achieve and maintain.

    To get that 200MB/s sequential write speed on a VPS server, you must have some very nice hardware; it's not suddenly going to fall over and lose to hardware that can only do 50MB/s sequential. The 50MB/s sequential box will have a correlating average random write score that puts it in the same category as other servers putting up ~50 to 70 MB/s sequentials.

    sleddog said: But you work in the industry, and are probably far more experienced that I, so I won't argue any more with your conclusions :)

    Oh don't be modest, you're one of the experts here and I've learned plenty from you here and from snooping around some of your coding. :)

    Besides, what fun is a forum if we can't argue?

  • kiloserve said: Besides, what fun is a forum if we can't argue?

    I disagree strongly. A forum can be lots of fun without arguing! :)

    Thanks Kiloserve. I appreciate that you've taken the time to explain your perspective.

  • danielfeng said: Test #1 at Provider A (initially when it's not overselling): 200MB/s.

    Test #2 at Provider A (later when it's overselling): 50MB/s.

    Therefore sequential disk access DOES reflect the quality of THAT hosting provider, in this case.

    True.

    Unfortunately this becomes true in real life.

    I'm talking about the eNetSouth 2GB OpenVZ @ San Jose offer. On ovz7 the speed was normally 150+MB/s and I was really satisfied, however recently (within one week I would say) it dramatically reduced to ~50MB/s (worst was ~20MB/s).

    I'm not saying they ARE overselling but LOOKS LIKE that's the case.

  • Francisco Top Host, Host Rep, Veteran

    @danielfeng said: Unfortunately this becomes true in real life.

    I'm talking about the eNetSouth 2GB OpenVZ @ San Jose offer. On ovz7 the speed was normally 150+MB/s and I was really satisfied, however recently (within one week I would say) it dramatically reduced to ~50MB/s (worst was ~20MB/s).

    I'm not saying they ARE overselling but LOOKS LIKE that's the case.

    Given they're likely to keep enough float to carry the SoftLayer bills through, I'm not really surprised.

    Francisco

  • @danielfeng You do realize that what you're saying is that you need to be on a RAID 10 box no matter what. Of course the hardware is more powerful, but that also means there are more customers on that box, which means there is also more of a chance for someone to abuse the disk I/O. That is not always bad, if you have a great provider that actually can find abusers and DOES something about them. There are a lot of great providers here and on WHT that do this.

    We have plans to run RAID 10 boxes very soon, but as of now we have RAID 1 boxes (and a single RAID 5 box). Our RAID 1 boxes with WD 2.5" Enterprise VelociRaptors and LSI hardware RAID controller cards start out at 90MB/s. We do realize that we could build RAID 10 boxes with 32GB of RAM instead of RAID 1 with 8GB and host more customers far more comfortably, for a better profit in the long run.

    That being said, 50MB/s is PLENTY for any application you will be running on that machine. First of all, if you need the whole 50MB/s to yourself you would be considered to be abusing resources, and you do not need to be on a VPS shared with hundreds of other customers on a single RAID 10 box anyway.

    Every command you execute on a box with 50MB/s disk I/O will execute extremely fast. We aren't playing World of Warcraft or Crysis on these VPSes, my friend; you will not notice a difference between a 50MB/s box and a 100MB/s box, even with Minecraft. If you do, it's because you have a TON of mods installed on Minecraft and you are abusing resources. Of course you may see statistical differences in disk read and write latency, but this will not translate into noticeable differences while you are using the application.

    All of that put aside, I just checked our LA node and it has degraded to 53.2MB/s from ~90MB/s when it had 0 customers. This 8GB-of-RAM box is completely full; if we were to open orders on this box it would likely start swapping. It isn't 'oversold' at all.

    We have had 0 complaints from this box and everyone on it is very happy with the services we are providing. I just can't see why any single person would require as much I/O as you insist.

  • Well if you need a whole 150MB/s to yourself, should never consider getting low end boxes. If you do need that sort of write speed for your web or application server, consider getting a dedicated server instead. Then thats a dedicated resource you can utilize on. End of the day, you should know what sort of infra you need to run your web ventures. Expectation in nature is always being raise up to new levels but never or very rare occassion where expectation being set to reasonable standards isnt it ?

  • @cripperz I think you may be right, as far as I can tell from your bad English ;) , but we are dealing with low end boxes here, and you would think most everyone would have more reasonable standards.

  • kiloserve Member
    edited November 2011

    cripperz said: Well if you need a whole 150MB/s to yourself, should never consider getting low end boxes.

    There are quite a few LEB providers here who do 100 MB/s+ speeds. To me, 100MB/s+ is very fast.

    Here's my enetsouth KVM box...I've had it about a month now and speeds are consistently above 100 MB/s:

    [root@enetbox ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 10.6382 seconds, 101 MB/s
  • kiloserve said: LEB providers here who do 100 MB/s+ speeds.

    Sure there are a few who do, but the point cripperz was making was if you need it all.

  • kiloserve Member
    edited November 2011

    @miTgiB said: Sure there are a few who do, but the point cripperz was making was if you need it all.

    Your VPS also hits over 150MB/s and you also provide all the other amenities like support and such.

    I don't like this generalized discrimination against LEB providers.

    I use LEB providers here and I can tell that several LEB providers here take pride in the boxes they serve up and rightly so.

    LEB <> poor performance, slow support, etc, etc.

  • @kiloserve I agree totally that LEB is a valid way to make a decent living if you take pride in your work. I see LEB (or unmanaged service) as a way I can run as a one-man show and still gain some size. At offering full support and all the hand-holding that goes along with it I'd be terrible; I'm not a good teacher. But talking shop with other like-minded geeks, I can do that all day. Some of the ideas I see my customers try to implement are really cool, so I lend a hand where I can to help their projects along.

    I didn't take cripperz's statement as anything other than: if you need 150MB/s all the time, a dedi is a better place to look. If I have a customer burning 150MB/s in I/O 24/7, I'm going to have more who are totally pissed at the poor performance they are getting. That is the part I think you are confused about.

  • kiloserve Member
    edited November 2011

    @miTgiB said: I didn't take cripperz statement as anything other than if you need 150MB/s all the time

    Ah, I guess the interpretation is different between what you read and I read.
    "Well if you need a whole 150MB/s to yourself"

    I thought he meant 150 MB/s "burst/whenever it is needed", which I think is acceptable for a LEB.

    You thought he meant 150 MB/s, "ALL the time aka 24/7 constant use or something like that", which is not acceptable for a LEB or any VPS really.

    I don't think @Danielfeng runs his disk 24/7 at 100MB/s+, he just wants it fast when he needs it.

  • @miTgiB said: Sure there are a few who do, but the point cripperz was making was if you need it all.

    The point actually is:

    ~150MB/s, or let's say 100MB/s, in a test may guarantee 50MB/s average at peak time. You are not running write tests all the time, are you? But 50MB/s in a test may only guarantee 10MB/s or even 5MB/s average at peak time. I know some poor guys who only got ~100KB/s at peak time.

    So here's the point: as long as there are providers who are able to provide ~150MB/s I/O for an LEB while some are only able to do ~50MB/s, people will vote with their feet, no matter whether they are LEB users or not.
