When is it okay to complain about disk I/O?

Kairus Member
edited October 2011 in General

I'm curious: when do you guys think it's okay to complain about slow disk I/O? Especially on a low end box.

For instance, I have a 190MB, $15/year box from AlienVPS, and it performs okay. I don't really host much on it: a website that's unused now, and a Mumble VoIP server for around 15 people. I'm a performance junkie, so every so often, especially with VPSes, considering how many companies oversell, I like to check my servers to make sure everything's at an acceptable level.

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 106.66 s, 10.1 MB/s
ioping -c 10 /
4096 bytes from / (simfs /vz/private/3992): request=1 time=0.2 ms
4096 bytes from / (simfs /vz/private/3992): request=2 time=657.9 ms
4096 bytes from / (simfs /vz/private/3992): request=3 time=14.9 ms
4096 bytes from / (simfs /vz/private/3992): request=4 time=25.8 ms
4096 bytes from / (simfs /vz/private/3992): request=5 time=0.3 ms
4096 bytes from / (simfs /vz/private/3992): request=6 time=5.0 ms
4096 bytes from / (simfs /vz/private/3992): request=7 time=29.4 ms
4096 bytes from / (simfs /vz/private/3992): request=8 time=13.3 ms
4096 bytes from / (simfs /vz/private/3992): request=9 time=16.3 ms
4096 bytes from / (simfs /vz/private/3992): request=10 time=15.6 ms

--- / (simfs /vz/private/3992) ioping statistics ---
10 requests completed in 9780.5 ms, 13 iops, 0.1 mb/s
min/avg/max/mdev = 0.2/77.9/657.9/193.6 ms

So, would you open a ticket regarding this? It's only a $15/year server, should I just expect this level of performance?

What about on a 2GB, $7/mo server (can't resist, I love Minecraft)? Would you open a ticket about this:

ioping -c 10 /
4096 bytes from / (simfs /dev/simfs): request=1 time=129.1 ms
4096 bytes from / (simfs /dev/simfs): request=2 time=493.2 ms
4096 bytes from / (simfs /dev/simfs): request=3 time=172.4 ms
4096 bytes from / (simfs /dev/simfs): request=4 time=118.4 ms
4096 bytes from / (simfs /dev/simfs): request=5 time=334.7 ms
4096 bytes from / (simfs /dev/simfs): request=6 time=21.7 ms
4096 bytes from / (simfs /dev/simfs): request=7 time=191.9 ms
4096 bytes from / (simfs /dev/simfs): request=8 time=82.6 ms
4096 bytes from / (simfs /dev/simfs): request=9 time=288.4 ms
4096 bytes from / (simfs /dev/simfs): request=10 time=105.4 ms

--- / ioping statistics ---
10 requests completed in 10949.7 ms
min/avg/max/mdev = 21.7/193.8/493.2/133.6 ms
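
For reference, here's a rough sketch of how these checks could be scripted so they can be repeated at different times of day and compared later (assuming bash and ioping are available; the log and test-file paths below are just placeholders):

#!/bin/bash
# Append timestamped dd and ioping results to a log for later comparison.
LOG=/root/io-check.log
{
  echo "=== $(date) ==="
  # Sequential write: 1 GiB in 64k blocks, flushed to disk before dd exits.
  dd if=/dev/zero of=/root/ddtest bs=64k count=16k conv=fdatasync 2>&1 | tail -n 1
  rm -f /root/ddtest
  # Read latency: 10 requests against the root filesystem; keep the summary lines.
  ioping -c 10 / | tail -n 3
} >> "$LOG" 2>&1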

Comments

  • Turn the screws on these providers for overselling the I/O so drastically

  • My $30/year is much better:

    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 8.22325 seconds, 131 MB/s
    
    4096 bytes from . (ext3 /dev/root): request=1 time=0.3 ms
    4096 bytes from . (ext3 /dev/root): request=2 time=0.8 ms
    4096 bytes from . (ext3 /dev/root): request=3 time=1.0 ms
    4096 bytes from . (ext3 /dev/root): request=4 time=0.9 ms
    4096 bytes from . (ext3 /dev/root): request=5 time=1.4 ms
    4096 bytes from . (ext3 /dev/root): request=6 time=0.8 ms
    4096 bytes from . (ext3 /dev/root): request=7 time=0.7 ms
    4096 bytes from . (ext3 /dev/root): request=8 time=0.3 ms
    4096 bytes from . (ext3 /dev/root): request=9 time=2.3 ms
    4096 bytes from . (ext3 /dev/root): request=10 time=0.7 ms
    
    --- . (ext3 /dev/root) ioping statistics ---
    10 requests completed in 9027.4 ms, 1072 iops, 4.2 mb/s
    min/avg/max/mdev = 0.3/0.9/2.3/0.5 ms
  • Not really; for $15 it's good, you shouldn't complain at all.

  • rds100 Member
    edited October 2011

    I think for the price you are paying, this performance is to be expected. I am not saying it is good, just that it is expected. That doesn't mean you can't whine about it, but you should whine mostly to yourself anyway.


  • For me (and LEA would agree with me on this, if he is reading), at least 25MB/sec should be there, even if you are not going to need I/O extensively.

  • rds100 said: I think for the price you are paying this performance is to be expected.

    How can you say that when a number of $15/yr providers make that 10MB/sec look like it is standing still? I call double standards.

  • Infinity Member, Provider
    dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync
    65536+0 records in
    65536+0 records out
    1073741824 bytes (1.1 GB) copied, 14.7817 seconds, 72.6 MB/s
    
    ioping /
    4096 bytes from / (simfs /dev/simfs): request=1 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=2 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=3 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=4 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=5 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=6 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=7 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=8 time=0.2 ms
    
    --- / (simfs /dev/simfs) ioping statistics ---
    8 requests completed in 7034.3 ms, 5155 iops, 20.1 mb/s
    min/avg/max/mdev = 0.2/0.2/0.2/0.0 ms
    

    From a $0 per month OVZ VPS from Hostigation thanks to dmmcintyre3 :) Beat that.


  • @miTgiB I had about the same disk I/O with that $15/year provider you have in mind for several months in a row (I have a $15/year on node28). They managed to fix it though, after the move to the new data center. So yes, this disk I/O is to be expected. Only a very few providers know how to provide good service at this price level. It is not for everyone.


  • I would be concerned if I/O is slower than 130MB/s, and will complain if below 100MB/s.

  • Anything above 50 MB/s is okay.

  • If I get at least 50MB/s I think it's OK; the main problem is that many providers don't accept the result from dd as a valid benchmark.

    For example, on my QualityServers VPS (UK node) I'm lucky if I get past 14MB/s; most of the time it's in the 9MB/s range.

  • drmike Member
    edited October 2011

    We've had providers who have said that ~10MB/sec is fine....

  • Infinity Member, Provider
    edited October 2011

    My QualityServers VPS got a maximum of 500kB/s the last time(s) I checked...


  • @Infinity but how does that make you feel?

  • Infinity Member, Provider
    edited October 2011

    drmike said: @Infinity but how does that make you feel?

    Confused to say the least.

    EDIT: I've done a few more tests, and I'll do more at different times of day, but so far it seems they have bumped it up to around 12MB/s. I just need to run ioping now.

    Damn, I suspect something is wrong with glibc:

     /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory


  • Google shows a couple of threads where 32-bit and 64-bit software has been mixed.

    You're not doing that, are you?
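
    A quick way to check (a sketch; the binary path is just a placeholder for whatever is throwing the error):

    # Compare the binary's architecture with the kernel's.
    file /path/to/the/failing/binary
    uname -m
    # A 32-bit ELF on an x86_64 system without the 32-bit loader produces that
    # exact "bad ELF interpreter" message; installing the 32-bit libc (or using
    # a 64-bit build) is the usual fix.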

  • I complain if the IO is under 50 MB/s.

  • fan Member

    maxexcloo said: I complain if the IO is under 50 MB/s.

    If I were you, I'd be complaining every day. Actually, for a low end box, anything > 20MB/s (some providers use RAID 1 instead of RAID 10 for a small, less dense node) is just fine, but that is already the lowest acceptable value.

  • maxexcloo said: I complain if the IO is under 50 MB/s.

    I had a VPS on a node when the host was testing Xen (basically an empty node), and the dd result was around 50 MB/s.

  • 10MB/s is not really good; try it at different times. If 10MB/s is the minimum you're getting (and not the maximum) then it's not a problem, IMO.

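    One low-effort way to do that is to log a result every few hours and compare later; a minimal sketch (assuming cron and GNU dd; the paths and schedule are arbitrary):

    # crontab entry: every 6 hours, append a timestamped dd result to a log
    0 */6 * * * { date; dd if=/dev/zero of=/root/ddtest bs=64k count=16k conv=fdatasync; rm -f /root/ddtest; } >> /root/dd.log 2>&1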

  • Asim said: For me, and LEA would agree with me on this...

    Speak for yourself :)

    I am actually happy with a consistent 20MB/sec, but as has been mentioned, that dd test is flawed. It's better to have consistent I/O latency than 200MB/sec one day and 5MB/sec the day after.


  • Anything above 40MB/sec is stable.

    @danielfeng
    Seriously? I think you need to get out of the budget VPS range then.. Single SATA drives rarely even get above 100MB/sec

  • danielfeng Member
    edited October 2011

    VMport said: Anything above 40MB/sec is stable.

    @danielfeng Seriously? I think you need to get out of the budget VPS range then.. Single SATA drives rarely even get above 100MB/sec

    I think budget means trade-offs on stability, security and maintenance, but not necessarily on disk/network performance. Why does a budget VPS have to use a single SATA drive? RAID10 can also be used on a budget VPS.

    Look at my tests on 3 VPSes: 2 OpenVZ and 1 KVM:

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.65535 s, 140 MB/s

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 7.20065 s, 149 MB/s

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 6.98297 s, 154 MB/s

    They are all budget VPS.

    PS: ~40MB/s usually means (sort of) overselling. Agree?

  • danielfeng said: PS: ~40MB/s usually means (sort of) overselling. Agree?

    It depends on the disk setup. My development box gets around 35MB/s all the time, and it's real hardware. High latency on disk reads is a bigger issue than low write speeds.

  • dd write tests are not really an accurate way to test I/O as a whole; ioping is a much better test of average performance. And yes, RAID10 is used on budget VPSes, and so it should be... but that does not mean the tests should show above 100MB/sec.

    After all, there is not really that much you could run that would require a write speed that high anyway.

    However, if speeds are showing anything under 40MB/s, it's pretty safe to assume the node is heavily used.

  • yomero Member
    edited October 2011

    So, is this crap, @VMport?

    4096 bytes from . (simfs /dev/simfs): request=1 time=0.5 ms
    4096 bytes from . (simfs /dev/simfs): request=2 time=151.0 ms
    4096 bytes from . (simfs /dev/simfs): request=3 time=12.0 ms
    4096 bytes from . (simfs /dev/simfs): request=4 time=12.9 ms
    4096 bytes from . (simfs /dev/simfs): request=5 time=22.2 ms
    4096 bytes from . (simfs /dev/simfs): request=6 time=25.9 ms
    4096 bytes from . (simfs /dev/simfs): request=7 time=33.7 ms
    4096 bytes from . (simfs /dev/simfs): request=8 time=375.8 ms
    4096 bytes from . (simfs /dev/simfs): request=9 time=227.9 ms
    4096 bytes from . (simfs /dev/simfs): request=10 time=383.3 ms
    
    --- . (simfs /dev/simfs) ioping statistics ---
    10 requests completed in 10255.7 ms, 8 iops, 0.0 mb/s
    min/avg/max/mdev = 0.5/124.5/383.3/145.1 ms
    

    From a known 2GB provider...

    I am still deciding whether to use it... or not.

  • @yomero Very bad; it should be below 0.5ms most of the time.

  • Anyone know if this is good or bad?

    7 requests completed in 6108.5 ms, 8986 iops, 35.1 mb/s
    min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms


  • Kairus Member
    edited October 2011

    justinb said: Anyone know if this is good or bad?

    7 requests completed in 6108.5 ms, 8986 iops, 35.1 mb/s min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms

    8986 iops is amazing; min/max latency is 0.1 ms, you can't get better than that. Is that on an SSD?

  • You are just presumptuous ¬_¬

  • kiloserve Member
    edited October 2011

    justinb said: 7 requests completed in 6108.5 ms, 8986 iops, 35.1 mb/s min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms

    Excellent!

    The thing to remember is that ioping is a read test. Write tests are also important.

    However, with your reads that scorchingly high, I'm sure your writes will also be quite nice.

  • min/avg/max/mdev = 0.1/0.1/0.1/0.0 ms
    This shows that everything was read from memory (cached). No disk seeking was involved at all. No rotating disk can seek in 0.1 ms.
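
    If you want to take the cache out of the picture, one rough check (a sketch assuming GNU dd; O_DIRECT may not be supported on every container filesystem, and the file name is arbitrary) is to read a freshly written file back with direct I/O:

    # Write a 256 MiB test file, then read it back bypassing the page cache.
    dd if=/dev/zero of=testfile bs=1M count=256 conv=fdatasync
    dd if=testfile of=/dev/null bs=1M iflag=direct
    rm -f testfile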


  • justinb Member
    edited October 2011

    kiloserve said: The thing to remember is that ioping is a read test. Write tests are also important. However, with your reads that scorchingly high, I'm sure your writes will also be quite nice.

    I get this thing where "touch x" takes nearly 10 seconds to respond, but "cat x" is instant... is there a write test for ioping?
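
    In the meantime, one crude way to approximate small-write latency with plain dd (a sketch assuming GNU coreutils; the counts and file name are arbitrary):

    # 100 x 4 KiB writes, each synced to disk before the next (oflag=dsync).
    # Divide the elapsed time by 100 for a rough per-write latency.
    dd if=/dev/zero of=writetest bs=4k count=100 oflag=dsync
    rm -f writetest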


  • I haven't done any ioping tests before. If my VPSes don't feel slow, I don't care. Here are my results though:

    This box is sat idle most of the time, and is idle right now.

    QualityVPS OpenVZ
    10 requests completed in 9005.3 ms, 3137 iops 12.3 mb/s
    min/avg/max/mdev = 0.1/0.3/0.7/0.2 ms

    This next VPS is under low to moderate cpu load, and low disk load.

    Veeble OpenVZ
    10 requests completed in 9037.7 ms, 321 iops 1.3 mb/s
    min/avg/max/mdev = 0.3/3.1/20.9/6.2 ms

    The host of the next two is running at 100% CPU load right now, encoding video. Moderate disk load, but on another drive iirc.

    VM at home running under virtualbox
    10 requests completed in 9012.6 ms, 4165 iops 16.3 mb/s
    min/avg/max/mdev = 0.2/0.2/0.4/0.1 ms

    another VM on the same host as above, virtualbox also
    10 requests completed in 9020.1 ms, 3014 iops 11.8 mb/s
    min/avg/max/mdev = 0.2/0.3/0.8/0.2 ms

    This VPS from Thrust is sat idle right now.

    ThrustVPS OpenVZ
    10 requests completed in 9008.6 ms, 6238 iops 24.4 mb/s
    min/avg/max/mdev = 0.1/0.2/0.2/0.0 ms

    I don't intend to use this box for anything much, since onedollarvps/hostsign are clearly scammers: ignoring support tickets, promising people account credits for downtime that never get applied...

    onedollarvps OpenVZ
    10 requests completed in 9192.9 ms, 54 iops 0.2 mb/s
    min/avg/max/mdev = 0.4/18.5/45.1/14.1 ms

    I bought the following box because it was so cheap. I'm considering using it to replace the Veeble box. Still not sure.

    VML IT OpenVZ
    10 requests completed in 9031.7 ms, 2266 iops 8.9 mb/s
    min/avg/max/mdev = 0.1/0.4/3.6/1.0 ms

    This box is sat idle, it's just used for IRC bncs etc. Little to no disk IO goes on so the bad drive performance doesn't bother me. Uptime is all that matters. It is the worst performing box I have though.

    QualityVPS XEN
    10 requests completed in 9525.5 ms, 20 iops 0.1 mb/s
    min/avg/max/mdev = 0.2/50.2/181.3/56.2 ms

  • Ohhh what a big collection you have :P

  • I'm sure there are people with way more than me. ;)

    2 of those are hosted at home, so not VPS in the traditional sense.

  • Infinity Member, Provider

    Gary said: I'm sure there are people with way more than me. ;)

    Very true. I know people with 30+ VPS's.


  • Infinity said: Very true. I know people with 30+ VPS's.

    They need VPS Anonymous, man, lol.


  • kiloserve Member
    edited October 2011

    justinb said: I get this thing where "touch x" takes nearly 10 seconds to respond, but "cat x" is instant... is there a write test for ioping?

    That is odd indeed.

    ioping is a read-only test. As the name suggests, it's basically a read ping on files and, much like ping, it will give you the response time it takes to access that file.

    It is fairly accurate, but can be fooled by smart disk caching: since it is a read-only test, if the files have already been read and cached the results will look better than the disk really is.

    The standard write test is the dd one which basically writes a sequential file of a size you determine.

    Example:

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.04994 seconds, 352 MB/s

    bs = block size. Many use the 64k block size as that's the typical RAID stripe size. You can change the count to 32k if you want a 2GB file or lower it to get a smaller file.

    The ending "xxx MB/s" is the number you use to gauge your performance.

    Some have criticized the dd test because it measures sequential writes as opposed to random writes. However, in my experience, you'll find random writes to be pretty much in the same performance ballpark as sequential writes.

    For example, if you get great sequential writes, your random writes will also be great. If you get average sequential writes, you will get average random writes. If you get poor sequential writes, your random writes will also be poor.

    If you want a more in-depth examination of your disk I/O, you can take the time to compile Bonnie++. But the truth is Bonnie++ will tell you pretty much the same thing as dd and ioping, just taking a lot longer to get more exact numbers, along with random tests.
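
    If you'd rather not compile anything, fio (a separate tool, where it's available) can also do a quick random-write run; a minimal sketch with arbitrary sizes and runtime (it writes its test file in the current directory, and direct I/O may not work on every container filesystem):

    # 4k random writes, direct I/O, 30 seconds against a 256 MiB file.
    fio --name=randwrite --rw=randwrite --bs=4k --size=256m \
        --direct=1 --ioengine=psync --runtime=30 --time_based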

  • kiloserve just served up some good advice right there.

  • sleddog Member
    edited October 2011

    kiloserve said: For example, If you get great sequential writes, your random writes will also be great. If you get average sequential writes, you will get average random writes. If you get poor sequential writes, your random writes will also be poor.

    But sequential disk access should not be the sole measure of the "quality" of a hosting provider. Sometimes -- oftentimes -- it is treated that way:

    Test #1 at Provider A shows 200 MB/sec.

    Test #1 at Provider B shows 50 MB/sec.

    Therefore Provider A is better.

    False.

  • danielfeng Member
    edited October 2011

    sleddog said: Test #1 at Provider A shows 200 MB/sec.

    Test #1 at Provider B shows 50 MB/sec.

    Therefore Provider A is better.

    False.

    Test #1 at Provider A (initially when it's not overselling): 200MB/s.

    Test #2 at Provider A (later when it's overselling): 50MB/s.

    Therefore sequential disk access DOES reflect the quality of THAT hosting provider, in this case.

    True.

  • kiloserve Member
    edited October 2011

    @VMPort: You are too kind :)

    sleddog said: But sequential disk access should not be the sole measure of the "quality" of a hosting provider. Sometimes -- oftentimes -- it is:

    Test #1 at Provider A shows 200 MB/sec.

    Test #1 at Provider B shows 50 MB/sec.

    Therefore Provider A is better.

    False.

    I agree it is not the whole measure of the quality of a host. I'm only speaking about disk access.

    With that much variance of 200MB/s vs 50MB/s, the 200MB/s will definitely have much better speed.

    The grey area is when it's close.

    If you have 70MB/s vs 50MB/s you can't definitively say which one is faster.

    But between something like 50MB/s vs 200MB/s (a 150MB/s difference) you don't need additional info on disk speed; the 200MB/s box will have faster disk speed without even wasting time running ioping or Bonnie++.

  • danielfeng said: Test #1 at Provider A (initially when it's not overselling): 200MB/s.

    Test #2 at Provider A (later when it's overselling): 50MB/s.

    Therefore sequential disk access DOES reflect the quality of THAT hosting provider, in this case.

    True.

    Agreed!

    When a host shows wild variance like that, it's a warning. Next week it might be 50 KB/s.

  • kiloserve said: I agree it is not a whole measure of quality of a host. I'm only speaking on disk access.

    I think you're speaking of sequential disk write speed (correct me if I'm wrong), not general disk access speed. Most servers read far more than they write, and it isn't correct (IMHO) to judge solely on sequential write speed.

    But you work in the industry, and are probably far more experienced than I am, so I won't argue any more with your conclusions :)

  • kiloserve Member
    edited October 2011

    sleddog said: I think you're speaking of sequential disk write speed (correct me if I'm wrong). Not general disk access speed. Most servers read far more than they write, and it isn't correct (IMHO) to judge solely on sequential write speed.

    Yes, I'm speaking of sequential writes. However, random writes and sequential writes correlate quite well.

    For example.

    Server A --> Sequential Write Speed = 200MB/s
    Server B --> Sequential Write Speed = 50MB/s
    

    Without even looking at a Random write benchmark I can say:

    Server A is faster than Server B every time, even on Random writes.

    There's just too much disparity there. 200MB/s is extremely fast and difficult to achieve on a loaded VPS server. How often do we actually see 200MB/s+ on a production VPS server? It's very rare, because 200MB/s is so difficult to achieve and maintain.

    To get that 200MB/s sequential write speed on a VPS server you must have some very nice hardware; it's not going to fall flat and lose to hardware that can only do 50MB/s sequential. The 50MB/s machine will have a corresponding random write score that puts it in the same category as other servers doing ~50 to 70MB/s sequential.

    sleddog said: But you work in the industry, and are probably far more experienced that I, so I won't argue any more with your conclusions :)

    Oh, don't be modest; you're one of the experts here and I've learned plenty from you and from snooping around some of your coding. :)

    Besides, what fun is a forum if we can't argue?

  • kiloserve said: Besides, what fun is a forum if we can't argue?

    I disagree strongly. A forum can be lots of fun without arguing! :)

    Thanks Kiloserve. I appreciate that you've taken the time to explain your perspective.

  • danielfeng said: Test #1 at Provider A (initially when it's not overselling): 200MB/s.

    Test #2 at Provider A (later when it's overselling): 50MB/s.

    Therefore sequential disk access DOES reflect the quality of THAT hosting provider, in this case.

    True.

    Unfortunately this becomes true in real life.

    I'm talking about the eNetSouth 2GB OpenVZ @ San Jose offer. On ovz7 the speed was normally 150+MB/s and I was really satisfied; however, recently (within the last week, I would say) it dropped dramatically to ~50MB/s (the worst was ~20MB/s).

    I'm not saying they ARE overselling, but it LOOKS LIKE that's the case.
