31mb/sec in the DD 'test': acceptable to you?

Damian Member
edited November 2012 in General
root@  [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 30.8495 seconds, 34.8 MB/s
root@  [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 32.2204 seconds, 33.3 MB/s
root@  [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 33.7744 seconds, 31.8 MB/s
root@  [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 37.6231 seconds, 28.5 MB/s
root@  [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 39.1599 seconds, 27.4 MB/s
root@  [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; unlink test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 31.3246 seconds, 34.3 MB/s
root@  [~]#
./ioping -c 25 /
4096 bytes from / (simfs /dev/simfs): request=1 time=0.1 ms
4096 bytes from / (simfs /dev/simfs): request=2 time=4.9 ms
4096 bytes from / (simfs /dev/simfs): request=3 time=23.0 ms
4096 bytes from / (simfs /dev/simfs): request=4 time=33.1 ms
4096 bytes from / (simfs /dev/simfs): request=5 time=24.3 ms
4096 bytes from / (simfs /dev/simfs): request=6 time=14.1 ms
4096 bytes from / (simfs /dev/simfs): request=7 time=25.8 ms
4096 bytes from / (simfs /dev/simfs): request=8 time=15.5 ms
4096 bytes from / (simfs /dev/simfs): request=9 time=12.6 ms
4096 bytes from / (simfs /dev/simfs): request=10 time=806.1 ms
4096 bytes from / (simfs /dev/simfs): request=11 time=414.0 ms
4096 bytes from / (simfs /dev/simfs): request=12 time=434.4 ms
4096 bytes from / (simfs /dev/simfs): request=13 time=14.9 ms
4096 bytes from / (simfs /dev/simfs): request=14 time=21.0 ms
4096 bytes from / (simfs /dev/simfs): request=15 time=141.9 ms
4096 bytes from / (simfs /dev/simfs): request=16 time=29.0 ms
4096 bytes from / (simfs /dev/simfs): request=17 time=814.7 ms
4096 bytes from / (simfs /dev/simfs): request=18 time=30.3 ms
4096 bytes from / (simfs /dev/simfs): request=19 time=7.4 ms
4096 bytes from / (simfs /dev/simfs): request=20 time=0.4 ms
4096 bytes from / (simfs /dev/simfs): request=21 time=0.2 ms
4096 bytes from / (simfs /dev/simfs): request=22 time=122.2 ms
4096 bytes from / (simfs /dev/simfs): request=23 time=153.6 ms
4096 bytes from / (simfs /dev/simfs): request=24 time=133.0 ms
4096 bytes from / (simfs /dev/simfs): request=25 time=18.8 ms

--- / (simfs /dev/simfs) ioping statistics ---
25 requests completed in 27320.1 ms, 8 iops, 0.0 mb/s
min/avg/max/mdev = 0.1/131.8/814.7/229.9 ms
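
A quick sanity check on the figures above (just arithmetic on the output already shown, nothing new measured):

# dd wrote 16384 blocks of 64 KiB:
echo $((64 * 1024 * 16384))     # 1073741824 bytes, i.e. 1 GiB
# First run: 1073741824 bytes / 30.8495 s = ~34.8 MB/s, matching dd's own figure.
# ioping's "8 iops" is consistent with the reciprocal of the 131.8 ms average
# request time (1 / 0.1318 = ~7.6); the roughly one-second pauses between
# requests account for most of the 27.3 s of wall time.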

The VPS itself generally feels sluggish. I opened a support ticket with the provider, asking if it were possible to be moved to a different node. The response told me that 30 MB/sec is good I/O.

What are your thoughts?


Comments

  • AlexBarakov Patron Provider, Veteran

    Over 80MB/s is acceptable for me.

  • I guess 30 MB/s is fine for running a website or so, but an 800 ms ioping definitely isn't.

  • I think the provider should be named ;)

  • emilv Member
    edited November 2012

    Nope. I'd ask for a move or get a refund.

  • @Damian said: The response told me that 30 MB/sec is good I/O.

    While that may be true, your response can be equally condescending: a colorful cancellation request.

  • rm_ IPv6 Advocate, Veteran
    edited November 2012

    I would consider tolerating a 30 MB/sec dd only if ioping flatlined at 0.1 ms at all times. In that case the overall performance would probably be more than acceptable.

    But your ioping is horrible and shows the node is definitely WAY OVERSOLD.

  • jar Patron Provider, Top Host, Veteran

    It's good enough if you aren't sharing it.

  • I would cancel if it kept up like that for more than a couple of hours. Even the darlings of LEB/LET will occasionally run into problems.

  • Yeah, 30 MB/s is fine... it's the ioping that's terrible. But in reality I would probably demand at least 50 MB/s.

  • Damian Member
    edited November 2012

    @NickO said: I think the provider should be named ;)

    I'm somewhat apprehensive about that, since this provider doesn't seem to sell this location anymore. I was hoping the fact that I'd been with them for over a year would help convince them to move me to a different node; instead, it appears they would prefer a cancellation.

    I was mostly asking to determine whether I'm expecting too much (since I wouldn't tolerate my own nodes being this laggy) or whether I need to move on.

  • KuJoe Member, Host Rep
    edited November 2012

    I have USB drives that do 20-30 MB/s. That being said...
    Anything consistently below 100 MB/s we investigate.
    Anything consistently below 50 MB/s we replace (ask our blades how that worked out for them).

  • Looks like ioping is what I need to be concerned with, not dd testing...

    4096 bytes from / (simfs /dev/simfs): request=1 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=2 time=203.6 ms
    4096 bytes from / (simfs /dev/simfs): request=3 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=4 time=3.8 ms
    4096 bytes from / (simfs /dev/simfs): request=5 time=4.9 ms
    4096 bytes from / (simfs /dev/simfs): request=6 time=0.1 ms
    4096 bytes from / (simfs /dev/simfs): request=7 time=25.8 ms
    4096 bytes from / (simfs /dev/simfs): request=8 time=30.7 ms
    4096 bytes from / (simfs /dev/simfs): request=9 time=30.4 ms
    4096 bytes from / (simfs /dev/simfs): request=10 time=88.7 ms
    4096 bytes from / (simfs /dev/simfs): request=11 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=12 time=10.6 ms
    4096 bytes from / (simfs /dev/simfs): request=13 time=471.1 ms
    4096 bytes from / (simfs /dev/simfs): request=14 time=13.3 ms
    4096 bytes from / (simfs /dev/simfs): request=15 time=14.3 ms
    4096 bytes from / (simfs /dev/simfs): request=16 time=46.0 ms
    4096 bytes from / (simfs /dev/simfs): request=17 time=0.2 ms
    4096 bytes from / (simfs /dev/simfs): request=18 time=664.4 ms
    4096 bytes from / (simfs /dev/simfs): request=19 time=72.8 ms
    4096 bytes from / (simfs /dev/simfs): request=20 time=24.4 ms
    4096 bytes from / (simfs /dev/simfs): request=21 time=11.3 ms
    4096 bytes from / (simfs /dev/simfs): request=22 time=843.2 ms
    4096 bytes from / (simfs /dev/simfs): request=23 time=23.5 ms
    4096 bytes from / (simfs /dev/simfs): request=24 time=15.1 ms
    4096 bytes from / (simfs /dev/simfs): request=25 time=478.2 ms
    
    --- / (simfs /dev/simfs) ioping statistics ---
    25 requests completed in 27102.5 ms, 8 iops, 0.0 mb/s
    min/avg/max/mdev = 0.1/123.1/843.2/226.8 ms
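
    If it helps to pin down when these spikes happen, here is a rough logging-loop sketch (not from this thread; the one-minute interval and log path are arbitrary choices):

    # Append one quiet 10-request ioping summary per minute, timestamped, so
    # latency spikes can be matched up against the slow periods later.
    while true; do
        echo "$(date -u +%FT%TZ) $(./ioping -q -c 10 / | tail -n 1)" >> /root/ioping.log
        sleep 60
    done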
    
  • Corey Member
    edited November 2012

    @Damian said: Looks like ioping is what I need to be concerned with, not dd testing...

    Either someone is hitting those disks REALLY hard every now and then... or those disks are dying.

    To me, it looks like those disks are dying. Save your data while you can, and stop running disk tests, because you are speeding up the inevitable :)

  • Back up your data and give the host one more chance to correct the problem.

  • @KuJoe said: I have USB drives that do 20-30 MB/s. That being said...

    Anything consistently below 100 MB/s we investigate.
    Anything consistently below 50 MB/s we replace (ask our blades how that worked out for them).

    That's a good rule of thumb for RAID10 arrays, but some providers are still running RAID1 arrays with mechanical drives (us included, though we are phasing those out). Even with Raptors in RAID1 you barely get 100 MB/s on a fresh node. Throw about 40 customers on there and it goes down to 50 MB/s pretty quickly (but still responsive and quick). A rough back-of-the-envelope version of this is sketched below.
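
    Assuming around 120 MB/s of sequential throughput per mechanical drive (an arbitrary round figure) and idle arrays:

    # RAID1 (2 drives, mirrored): every write hits both drives, so sequential
    # write throughput tops out around a single drive, roughly 120 MB/s here.
    # RAID10 (4 drives, striped mirrors): data is striped across two mirrored
    # pairs, so roughly two drives' worth of sequential throughput:
    echo $((2 * 120))   # ~240 MB/s on an idle array
    # ...which is why 100 MB/s / 50 MB/s thresholds make more sense on RAID10.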

  • @Jack said: They have now upgraded all the new locations to RAID10 and, by the looks of it, the ones that you are on are still RAID1.

    As I suspected, but this is still low.

  • That's definitely much better than the 7 MB/s I've had with JollyWorksHosting. Seriously, the worst hosting ever.

  • risharde Patron Provider, Veteran

    @Damian I've been experiencing 3x MB/s on a specific node from a specific provider for the past few days, and EVERYTHING, and I MEAN EVERYTHING, is slow... one main Drupal site with Boost cache and MySQL, horribly slow. I didn't run an ioping; I'll do one and confirm. I definitely look for something of AT LEAST 60 MB/s, preferably 100 MB/s.

  • Corey Member
    edited November 2012

    I think we can all take from this: if you are having these sorts of problems with your provider, it may not necessarily be oversold, but you need to contact them about it. If the problems persist for days after you contact them, then you need to find a new provider.

    When I was with BurstNET, the node going completely down was a common occurrence, and contacting support didn't stop the issue. (This was before I got into the hosting business, and it's one of the reasons I got into the hosting business.)

  • @Corey said: If the problems persist for days after you contact them, then you need to find a new provider.

    Days? That is being too kind.

  • MartinD Member
    edited November 2012

    One thing to note, though: these 'dd' tests that everyone loves aren't really telling you much and don't give a real representation of what's going on.

    Also, it tends to be people running 'dd' tests all the time that screw up performance on nodes!
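
    For what it's worth, a timed random-read run gets much closer to the "sluggish" feeling than a sequential dd write does. A rough sketch, assuming fio is available on the box (the job name, file size, and runtime below are arbitrary choices, not anything from this thread):

    # 4 KiB random reads against a 256 MB test file for 30 seconds;
    # --direct=1 bypasses the page cache so the disks actually get hit.
    fio --name=randread-test --rw=randread --bs=4k --size=256m \
        --direct=1 --ioengine=libaio --runtime=30 --time_based --group_reporting
    rm -f randread-test*   # remove the test file fio leaves behind

    The completion-latency figures fio reports measure roughly the same thing as ioping's per-request times above.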

  • risharde Patron Provider, Veteran

    ./ioping -c 25 /
    4096 bytes from / ( ): request=1 time=0.2 ms
    4096 bytes from / ( ): request=2 time=61.0 ms
    4096 bytes from / ( ): request=3 time=30.9 ms
    4096 bytes from / ( ): request=4 time=105.5 ms
    4096 bytes from / ( ): request=5 time=46.6 ms
    4096 bytes from / ( ): request=6 time=57.1 ms
    4096 bytes from / ( ): request=7 time=213.8 ms
    4096 bytes from / ( ): request=8 time=114.9 ms
    4096 bytes from / ( ): request=9 time=78.9 ms
    4096 bytes from / ( ): request=10 time=506.3 ms
    4096 bytes from / ( ): request=11 time=162.7 ms
    4096 bytes from / ( ): request=12 time=218.0 ms
    4096 bytes from / ( ): request=13 time=605.0 ms
    4096 bytes from / ( ): request=14 time=540.9 ms
    4096 bytes from / ( ): request=15 time=77.3 ms
    4096 bytes from / ( ): request=16 time=911.0 ms
    4096 bytes from / ( ): request=17 time=124.3 ms
    4096 bytes from / ( ): request=18 time=94.9 ms
    4096 bytes from / ( ): request=19 time=123.2 ms
    4096 bytes from / ( ): request=20 time=25.3 ms
    4096 bytes from / ( ): request=21 time=248.0 ms
    4096 bytes from / ( ): request=22 time=571.0 ms
    4096 bytes from / ( ): request=23 time=900.7 ms
    4096 bytes from / ( ): request=24 time=1338.4 ms
    4096 bytes from / ( ): request=25 time=1069.7 ms

    --- / ( ) ioping statistics ---
    25 requests completed in 32254.1 ms, 3 iops, 0.0 mb/s
    min/avg/max/mdev = 0.2/329.0/1338.4/367.7 ms

  • Corey Member
    edited November 2012

    @JoeMerit said: Days? That is being too kind.

    I've been a provider for too long and have learned what goes on behind the scenes :) (but I just bought a BuyVM 128 yesterday, so I'll be a customer once again). If that is actually a disk issue, they will have to (maybe) order a new disk and ship it to the DC to have it replaced by DC techs who take their time.

    If not, and they have one on hand, the DC techs still take their time, and it might be a whole day before they go in there and replace that drive.

  • @risharde: you win...

  • risharde Patron Provider, Veteran
    edited November 2012

    @Corey Yes, true, the provider is aware of the situation but said they are waiting for a new server to come in by the end of the week. They offered to transfer me over to a new node, so I'm waiting patiently for this to happen - they said they're waiting on some IPs. I think I'm just too lazy at this point to even think about transferring my files to another provider; considering the I/O is so low, it would take another day of waiting. I hope this is fixed soon; I'm kind of running out of patience, as I mentioned - it's about 3 days now. If you visit my main site, you should be able to see how slow the response is when getting to other pages, etc.

    @MartinD Yes, true, I've heard a lot of people saying that the dd tests don't do an accurate job and just slow the node down further. How does this compare to ioping? Or is there something else that can really tell how bad or good the I/O is?

    @JoeMerit True, well, I'm just thinking they've got a lot on their plate and I got it dirt cheap, so I guess it's what I paid for. If it was a site I hosted for a customer, I would have switched in a snap, but since it's my own personal site and people won't die without it, I have been more lenient and patient than usual.

    @Damian LOL Yeah... imagine how I feel... doing a wget just to get ioping took more than 60 seconds lol ;)

  • Nick_A Member, Top Host, Host Rep

    If you don't absolutely need a VPS in that location, or if there's another host in that location, I'd pack my bags and get out of town immediately. That IOPing is dreadful.

  • @risharde said: @Corey Yes, true, the provider is aware of the situation but said they are waiting for a new server to come in by the end of the week. They offered to transfer me over to a new node, so I'm waiting patiently for this to happen - they said they're waiting on some IPs. [...] If you visit my main site, you should be able to see how slow the response is when getting to other pages, etc.

    Dunno why they would replace the whole server - BUT that is a classic example of why it would take 'days'.

    Your website seems pretty fast to me right now.

  • I was getting 30 MB/s on chicagovps.net... So yeah. :|

  • risharde Patron Provider, Veteran
    edited November 2012

    @Corey I agree. Well, they said it was a disk failing, so after the disk change they'd need to run some sort of check that drops the performance on the drives for 24-48 hours... It has improved a bit, but that's because I'm using Boost cache for Drupal at the moment, so it might be serving some cached pages, which still uses I/O but probably puts less strain on the MySQL queries. It's strange because the VPS is a little more responsive but goes back to being sluggish after a while, so I really am just guessing what's going on based on what they are telling me. In any event, comparing this to my previous experience, the site without Boost cache wouldn't break a sweat on loadimpact, so I'm not sure what's really the matter here lol

  • Corey Member
    edited November 2012

    @risharde said: @Corey I agree. Well, they said it was a disk failing, so after the disk change they'd need to run some sort of check that drops the performance on the drives for 24-48 hours... [...]

    Yeah, that sounds reasonable. Sounds like you have a great provider, unlike @Damian's, where they tell him 'that I/O is fine, there is nothing wrong'.
