
Acceptable performance for a LEB (disk i/o)

prae5 Member
edited February 2013 in General

In general, what would you consider an acceptable level of performance for a LEB? I'm hitting a performance issue with one of my boxes and I want to know if I'm expecting too much.

I've got a box I'm paying $7 for, with the following specs:
2TB Bandwidth
1024MB Guaranteed RAM
2048MB Burstable RAM
100GB Disk Space

I've been hitting a few issues recently, and they seem to be caused by poor disk I/O.

root@xxxxxxx:~/benchmark# ./ioping.sh
4096 bytes from . (simfs /vz/private/1642): request=1 time=479.5 ms
4096 bytes from . (simfs /vz/private/1642): request=2 time=84.4 ms
4096 bytes from . (simfs /vz/private/1642): request=3 time=15.3 ms
4096 bytes from . (simfs /vz/private/1642): request=4 time=12.7 ms
4096 bytes from . (simfs /vz/private/1642): request=5 time=13.8 ms
4096 bytes from . (simfs /vz/private/1642): request=6 time=19.1 ms
4096 bytes from . (simfs /vz/private/1642): request=7 time=20.9 ms
4096 bytes from . (simfs /vz/private/1642): request=8 time=0.1 ms
4096 bytes from . (simfs /vz/private/1642): request=9 time=33.3 ms
4096 bytes from . (simfs /vz/private/1642): request=10 time=0.1 ms

--- . (simfs /vz/private/1642) ioping statistics ---
10 requests completed in 9680.0 ms, 15 iops, 0.1 mb/s
min/avg/max/mdev = 0.1/67.9/479.5/139.1 ms

In my monitoring over the past few days I don't think I've seen it hit 1 MB/s, and I'm typically seeing fewer than 50 IOPS. dd is giving me about 15-18 MB/s.

The issue is that, because of the slow throughput, anything I do drives up CPU load, since everything ends up in iowait. If I run top in one session and just run an ioping in another, load increases by 0.5.
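The iowait claim is easy to back up with standard tools; a quick sketch (vmstat ships with procps on most distros, while iostat needs the sysstat package):

```shell
# Sample CPU stats once a second, five times. The "wa" column is
# the percentage of CPU time stuck waiting on disk I/O.
vmstat 1 5

# Per-device view (sysstat package): "%util" near 100 with a low
# request rate points to a saturated or very slow disk.
iostat -x 1 5
```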

This isn't me chasing performance stats for the sake of it; I'm trying to fix real-world issues, e.g. adding or deleting a WordPress post takes upwards of 30-40s.

So, am I expecting too much, or should I be pushing the host a little harder?

Comments

  • shovenose Member, Host Rep
    edited February 2013

    Which host? That ioping is not so good.

  • prae5 Member
    edited February 2013

    The host doesn't matter - no point in naming and shaming yet.

    In my opinion it's very bad; however, I want to see if I'm expecting too much before I really start pushing the provider. (That said, those results are some of the better ones.)

  • shovenose Member, Host Rep

    Well, if saving a WordPress page is taking that long, I'd be asking the host to fix it. Have you opened a ticket, or even just sent them a link to this thread? It might make more sense to at least ask their opinion. Chances are, if they're an LEB provider, they'll be able to respond here.

  • Have you tried opening a ticket and showing the provider these logs?

  • 1GB of RAM with 2GB more of swap? I hope not. That ratio is flawed and a sign I'd stay away from.

    ioping was just 10 tests... Let it run for 20 minutes and post the totals/averages you get.

    As it is, over 10 seconds the node could just be busy for some reason. With 20 minutes of the same, you can be sure the disks are slow and/or oversold.

    30-40 seconds for a WordPress action points to big problems. I'm unsure why your pages aren't timing out sooner. That's horrendous.
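The 20-minute run suggested above could look like this (flags per the ioping man page; 1200 one-second requests is roughly 20 minutes):

```shell
# 1200 requests at one-second intervals against the current
# directory; the summary line at the end gives the sustained
# iops/latency figures worth pasting into a ticket.
ioping -c 1200 -i 1 .
```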

  • nstorm Member
    edited February 2013

    These are very bad stats. It really doesn't matter whether it's a LEB or not; performance this bad shouldn't be happening.
    You should definitely open a ticket with them and ask what's going on. That's not pushing; it never hurts to ask if you feel like it's not working well.
    If you want more ideas of how well LEBs perform, you can find a lot of ioping stats around here.
    Here is an example from my $3.21 LEB:
    22:31:29 up 131 days
    10 requests completed in 9004.4 ms, 2952 iops, 11.5 mb/s
    min/avg/max/mdev = 0.2/0.3/1.2/0.3 ms

  • I've had a ticket open with them for about a day and a half and have seen no improvement in that time.

    As it stands, their last response was along the lines of: they can't see any abuse on the node and they'll keep monitoring it.

    For me that's not really good enough; I feel a bit fobbed off. I have asked whether the node is significantly oversold or the RAID array is degraded, but haven't got a response.

    Before I pushed them any more, I wondered if I was expecting too much, hence the original question.

  • shovenose Member, Host Rep

    Simply ask them to move you to another node? They should be able to do that, and then you can test again and see whether it's something that happens across all their services, or whether something is up with that node.

  • ioping should be like .01 to 1ms on a healthy node.

    You will of course see random blips. Those are fine.

    If there are lots of big, long-lasting blips, then it's either slow disks, disk failure, or no RAID.

  • @prae5 you definitely are not expecting too much.
    This is way beyond acceptable performance. As I've already mentioned, it's not about low-end; many respectable providers here offer services at the same or even lower rates and still perform great.

  • rds100 Member
    edited February 2013

    @pubcrawler said: ioping should be like .01 to 1ms on a healthy node.

    In reality, an ioping of 0.1 to 1 ms means the read never came from disk; it came from cache (either SSD cache or RAM cache).
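One way to tell the two apart (assuming a reasonably recent ioping build; the -D flag may not be supported on every OpenVZ/simfs container) is to bypass the page cache with direct I/O:

```shell
# -D opens the target with O_DIRECT, so requests skip the page
# cache and hit the disk (or the controller/SSD cache below it).
ioping -D -c 10 .
```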

  • 15 IOPS? Yikes, that is terrible. Glad it isn't my server; I would run and hide.

  • @prae5 This isn't me, is it? I have a few tickets about performance on a node but haven't had a chance to get to them yet.

    If it is, I'll sort them when I'm home.

  • @Jacob said: @prae5 This isn't me is it? I have a few tickets about performance on a node but haven't had chance to get to them yet.

    Nope not you.

    I'm going to push the host again. As I said, I don't want to name and shame yet; I just wanted to make sure I was being fair to them.

    I have plenty of other LEBs that perform many times better; I just wanted to confirm this was unacceptable before pushing them harder.

  • OK, 1000 pings gives marginally better results, but still not even close to acceptable, IMHO.

    --- . (simfs /vz/private/1642) ioping statistics ---
    1000 requests completed in 1008779.4 ms, 103 iops, 0.4 mb/s

    Likewise, dd during this period was less than stellar:

    3625582592 bytes (3.6 GB) copied, 627.821 s, 5.8 MB/s

    The node is running RAID 10 with 4 x 1 TB hard drives.
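For reference, a dd figure like the one above usually comes from a sequential write test along these lines (the file name and size here are illustrative, not the OP's exact command):

```shell
# Classic LEB dd test: write ~256 MB of zeros (bump count to 16k
# for the usual 1 GB run). conv=fdatasync forces a flush to disk,
# so the reported MB/s reflects disk speed, not page-cache speed.
dd if=/dev/zero of=ddtest.tmp bs=64k count=4k conv=fdatasync
rm -f ddtest.tmp
```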

  • In reality ioping 0.1 to 1ms means that the read never came from disk, instead it came from cache (either SSD cache or RAM cache).

    True, somewhat.

    Disks should be mostly idle, with caching and queued activity. Living on a node with constant disk thrashing is horrific; that's where those sustained 72-3600 ms iopings come from.

    Someone from the provider side should speak up about IOPS and default disk I/O limits in OpenVZ/Solus. I see a lot of providers with these lackluster ioping times sustained; one has to wonder whether it's provider-set limits on the container.

  • 1 tb ide? lol! :P

jar Patron Provider, Top Host, Veteran

    It gets on my nerves when hosts don't monitor node performance. I realize I'm digging myself a hole with all the monitors and alerts I actively keep tabs on, but isn't that what I signed up for? The VPS may be unmanaged, but the NODE should be fully managed!

  • @jarland said: The VPS may be unmanaged, the NODE should be fully managed!

    Indeed. I've asked the host for an update, and we'll see what happens. I uploaded a file at about 230KB/s and it resulted in:

    --- . (simfs /vz/private/1642) ioping statistics ---
    10 requests completed in 10680.0 ms, 6 iops, 0.0 mb/s
    min/avg/max/mdev = 12.1/167.9/360.1/112.8 ms

    and a server load over 0.5.

    I think it may be time to move before this project is fully deployed.

  • I agree providers should monitor their nodes better. I had a similar incident on one of my VPSes: I contacted support and told them someone was abusing the HDD, and they kicked the user, but interestingly the next day there was high disk load again. I had to send a few more tickets to finally get it to stop. The biggest problem with high I/O is that a simple database query takes ages to complete and bottlenecks everything.

    How can providers limit I/O usage on VPSes to maintain a fair share?
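On OpenVZ (which the simfs paths in this thread suggest), vzctl exposes per-container knobs for exactly this. A sketch, assuming a recent vzctl and kernel; CTID 101 is a made-up container ID:

```shell
# Relative I/O priority under the CFQ scheduler (0 = lowest, 7 = highest).
vzctl set 101 --ioprio 4 --save

# Hard caps, available on newer OpenVZ kernels: per-container
# disk bandwidth and IOPS limits.
vzctl set 101 --iolimit 10M --iopslimit 300 --save
```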

  • That's pitiful, @prae5.

    A 230KB/s upload = 0.5 server load??!?

    The ioping on that is horrendous.

    What node specs is this company advertising?

prae5 Member
    edited February 2013

    E3-1270v2, 32 GB Ram and 4 X 1 TB RAID 10 Hard Drives, 100mbps

    In fairness, since my last reply to the ticket they seem to be a little more proactive. They are contacting their DC to have the hardware checked.

    It's a better response, but it worries me that they aren't monitoring this themselves.

Nick_A Member, Top Host, Host Rep

    @prae5 said: E3-1270v2, 32 GB Ram and 4 X 1 TB RAID 10 Hard Drives, 100mbps

    Software RAID? Rented or colo'd?

  • I am, umm, really apprehensive about dealing with providers that know little to nothing of their own hardware and defer to the datacenter about such obvious suckage.

    Fly-by-night hosts suck.

  • @Nick_A said: Software RAID? Rented or colo'd?

    Pass, pass and pass ;-)
