Shouldn't the virtualization platform restrict load/cpu abuse?

There seems to be a constant struggle between users and host providers. Providers publish vague load requirements, and honest users strive to keep their systems under those vague limits.

But it seems that the virtualization platform could (and should) be able to automatically scale back a user who is consuming too much CPU (and such). While I have never run a host before, a brief look at the OpenVZ options seems to turn up a number of parameters for limiting a user's CPU usage. Do these options not really work? Am I oversimplifying it?
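From that brief look, the knobs seem to be something like the following in the container's config file. The parameter names are real OpenVZ options, but the values here are purely illustrative:

```
# /etc/vz/conf/<CTID>.conf -- illustrative values, not recommendations
CPUUNITS="1000"   # relative weight vs. other containers when the node is contended
CPULIMIT="100"    # hard ceiling as a % of one core (100 = at most one full core)
CPUS="2"          # how many cores the container sees
```

Apparently the same can be applied live with something like `vzctl set 101 --cpulimit 100 --cpuunits 1000 --save`.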

Let me give an example. In Java, on the JVM, we can launch a bunch of threads. If we launch 10 threads, all 10 threads will get (for the most part) an equal time slice. We don't need to worry about it; it just happens. An individual thread, unless it does something funky like changing priorities, can't grab extra CPU time, as that is not within its control. The JVM (really, the OS scheduler underneath it) provides a relatively fair share.
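As a minimal sketch of what I mean (the thread count and the 2-second window are arbitrary), each thread below just counts as fast as it can, and the per-thread totals should come out roughly equal:

```java
import java.util.concurrent.atomic.AtomicLongArray;

public class FairShareDemo {
    public static void main(String[] args) throws InterruptedException {
        final int n = 10;
        final AtomicLongArray counts = new AtomicLongArray(n);
        final long deadline = System.currentTimeMillis() + 2000; // 2-second window
        Thread[] threads = new Thread[n];

        for (int i = 0; i < n; i++) {
            final int id = i;
            threads[i] = new Thread(() -> {
                // Busy-loop until the deadline; the scheduler decides who runs when.
                while (System.currentTimeMillis() < deadline) {
                    counts.incrementAndGet(id);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        // Each thread's count is a rough proxy for the CPU time it received.
        for (int i = 0; i < n; i++) {
            System.out.printf("thread %d: %d iterations%n", i, counts.get(i));
        }
    }
}
```

No thread can hog the CPU just by wanting to; the scheduler slices time among them.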

While a VPS node is a more complex ecosystem than a single JVM, I would think the same would hold. Even if a container wants more CPU (etc.), the virtualization platform should limit it. No? Or is this outside the ability of our current platforms?

In another example, I saw a forum post where a top LET provider was reprimanding an abusive user for using 6 cores when he was only allocated 4. What? How is this even possible?

Comments

  • AdamM said: There seems to be a constant struggle between users and host providers.

    Not really. What you'll see is some people not reading the terms of service and expecting too much for the prices they pay.

  • jvnadr Member
    edited January 2016

    It's not that simple. Let's assume you put 32 VMs on a node with 4 cores. That's not an oversold node; for most LET providers it's a normal one. If you set a hard upper limit by dividing the cores among the VMs, each VM gets at most 4/32 of a core, i.e. 12.5% of a single core.
    In that scenario, whenever a VM needs some extra CPU power for a short time, it will be slowed down, or its tasks won't run at all.
    So the right approach is to allow a virtual machine to use much more power for a short period and let it return to normal usage afterwards. It is almost impossible for all the virtual machines to spike in load at exactly the same time.
    If a user realizes that his VPS constantly uses a lot of power, breaking the fair-share policy, he should use a tool like cpulimit (e.g. something like `cpulimit -l 50 -p <pid>` to cap a process at 50% of one core) to limit himself.

    AdamM said: In another example, I saw a forum post where a top LET provider was reprimanding an abusive user for using 6 cores when he was only allocated 4. What? How is this even possible?

    I think that is not possible. If you assign a specific number of cores to a virtual machine, the VM will always see all of those cores, just with a very limited share of each. This is easy to verify if you simply run top or htop inside the VM: it shows how many (virtual) cores the machine has.

  • raindog308 Administrator, Veteran

    AdamM said: While I have never run a host before, a brief look at the OpenVZ options seems to turn up a number of parameters for limiting a user's CPU usage. Do these options not really work? Am I oversimplifying it?

    I think CPU is more straightforward than I/O.

    I have not admin'd KVM or OvZ much, but I don't think they offer the same kind of I/O balancing/QoS controls that VMware and bigger ($$$) products do. But even then, these sorts of policies only go so far if your box is grossly oversold. I/O is usually the biggest bottleneck because (a) most people don't use all their CPU but everyone uses lots of I/O, (b) there are different types of I/O, and (c) there's simply less I/O capacity to go around.

    OTOH, some of the big managed providers (e.g., WiredTree) have very few limits in place - e.g., my WT node is allowed fair-share on 32 cores... but so is everyone else. That's only a problem if the node is really oversold, and they don't oversell. (They also have dedicated-core plans, etc.)

  • @raindog308

    So, what I am getting from these answers is that it is not primarily a CPU problem but an I/O problem that leads to high "loads" on a VPS. When you say I/O, are you primarily talking about reading from the disk (DBs, static files, etc.) or also the network I/O involved in receiving an HTTP request and sending the response over the network?

    If the I/O bottleneck is primarily the disk, then I guess the solution to high loads is aggressive caching at the DB and file layer... correct?
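    By caching I mean something like a small in-memory LRU cache in front of the database/file reads. A rough, hypothetical sketch (the class name and sizes are made up):

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    /** Tiny LRU cache: keeps hot DB/file reads in RAM instead of hitting disk. */
    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public LruCache(int maxEntries) {
            super(16, 0.75f, true); // accessOrder = true -> least-recently-used eviction
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries; // evict the oldest entry once we're full
        }
    }
    ```

    Wrapped around a query like `cache.computeIfAbsent(userId, id -> loadUserFromDb(id))` (where `loadUserFromDb` is a placeholder), repeat reads would never touch the disk. This plain version is not thread-safe; you'd wrap it with `Collections.synchronizedMap` for concurrent use.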

  • @AdamM said:
    @raindog308

    So, what I am getting from these answers is that it is not primarily a CPU problem but an I/O problem that leads to high "loads" on a VPS. When you say I/O, are you primarily talking about reading from the disk (DBs, static files, etc.) or also the network I/O involved in receiving an HTTP request and sending the response over the network?

    If the I/O bottleneck is primarily the disk, then I guess the solution to high loads is aggressive caching at the DB and file layer... correct?

    Indeed, in my testing disk I/O hits its limit well before the CPU limit is even approached. This happens not only on VPSes but on dedicated servers too. Up to now, one thing that has helped is converting databases from InnoDB to MyISAM.
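    For what it's worth, the statement itself is just `ALTER TABLE ... ENGINE=MyISAM`; here's a hypothetical sketch of running it from Java over plain JDBC (the connection URL, credentials, and table name are placeholders, and you'd need the MySQL driver on the classpath):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class EngineConvert {
        public static void main(String[] args) throws Exception {
            // All connection details below are placeholders.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/mydb", "user", "password");
                 Statement stmt = conn.createStatement()) {
                // Rebuilds the table under the new storage engine; it locks the
                // table while running, so do it in a maintenance window.
                stmt.executeUpdate("ALTER TABLE orders ENGINE=MyISAM");
            }
        }
    }
    ```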

  • jvnadr Member
    edited January 2016

    AdamM said: So, what I am getting from these answers is that it is not primarily a CPU problem but an I/O problem that leads to high "loads" on a VPS. When you say I/O, are you primarily talking about reading from the disk (DBs, static files, etc.) or also the network I/O involved in receiving an HTTP request and sending the response over the network?

    In OpenVZ, all resources are shared. Even in KVM, a lot of resources are shared (not disk or memory space, but the capacity of the node). So there is no simple answer to a question like what can lead to high loads on a VPS.
    For example, if you create a KVM VM with low memory (e.g. 128MB) and the machine constantly swaps to the hard disk, then multiply that across several virtual machines on a node, and it can strain the HDD's performance.
    Usually the HDD reaches its limits more quickly than the CPU on a node. That's why VPS providers sell their products with a fair-share policy: you can make normal use of your machine, but if you try to push tasks to the limit on a low-cost VPS, you will be kicked out.

  • AdamM said: While I have never run a host before, a brief look at the OpenVZ options seems to turn up a number of parameters for limiting a user's CPU usage.

    Join the party and plan your summerhost run. You'll get all kinds of abuse.

  • You can have scripts and such on the back end to keep an eye on it, but thousands of SMTP processes firing up, completing, and dying will drag a node down really quickly. I've seen it. Then you start checking your tools for outgoing SMTP or other annoyances (rather than digging through containers before assumptions start flying).

    Bringing back public executions in town square would stop the spam problem

  • raindog308 Administrator, Veteran

    Keep in mind we're talking about general-purpose hardware with some virtualization-easing features added to the CPU. The motherboard, RAM, disk, etc. are not built for virtualization.

    There are boxes (e.g., IBM pSeries) where the hardware is purpose-built for virtualization so you get separation and resource protection down to the electrical level in some cases.

    You won't see that kind of resource protection on general-purpose commodity servers, though obviously for many many use cases, what you can get is good enough.

  • ratherbak3d Member
    edited January 2016

    If we set hard restrictions, users will then complain that the VM is not as fast as others they have used (which have fair share). So either way, the provider can't win here.

    This is exactly why we provide a guaranteed amount but don't restrict users to it. Then, if you catch our attention for whatever reason, we can ask you to lower your usage to those levels.
