Creating VPSes with various virtual CPUs or CPU passthrough?

I am setting up a Proxmox cluster for selling VPSes in the future.
I'm working on various things; it all needs to come together, and it's slowly getting there.

But then... creating VPSes can be done with various CPU types, all offering different CPU flags, from almost nothing (kvm64) up to everything the host's processor supports (host), and everything in between.
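For context, in Proxmox this choice is a single line in the VM config. A sketch, where VM ID 100 is a placeholder and the exact named models available depend on your QEMU version:

```
# /etc/pve/qemu-server/100.conf
cpu: kvm64      # bare minimum flag set, maximum portability
# cpu: Nehalem  # an older named model with a reasonable middle-ground flag set
# cpu: host     # pass through everything the host CPU offers
```

The same setting can be changed from the CLI with `qm set 100 --cpu host`, if I have the syntax right.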

I understand the idea behind it, from compatibility for moving VPSes between hosts to maximizing speed for the customer (assuming more flags translates to higher processing speed if the VPS user optimizes everything correctly).

But how does this work in practice for a VPS hoster?

Is there a 'best choice' for the virtual CPU? Or pass through all host flags to the VPS?
Do any customers really optimize (compile all their software with the available CPU flags in mind)?
Or would it just be for bragging rights?
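Whether a customer can optimize at all depends on what the guest actually sees. A quick check from inside any Linux VPS (a sketch; the flag list will vary per CPU model):

```shell
# Which flags did the vCPU actually get?
flags=$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2-)
# Count the SIMD/crypto-relevant ones a compiler could target
echo "$flags" | tr ' ' '\n' | grep -cE '^(sse|avx|aes)'
```

A customer who cares would then build with something like `gcc -O2 -march=native`, which picks up exactly those flags.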

But also, what if you need to move a customer's VPS to another host, and the flags of the new host do not line up with what the VPS got on the previous host?

What does your experience say about this?

Comments

  • HxxxHxxx Member

    trade secret

  • FalzoFalzo Member

    moving a VPS usually isn't a problem, even if the flags differ. AFAIK you can't do a live migration in a cluster though.

    I'd always pass through AES, for instance, as it helps reduce the load, if the guests can make proper use of it. For VMX (nested full virtualization) it's debatable whether it's needed, given the resource overhead nested virtualization can cause. The rest of the flags are probably not that interesting anyway.
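If I remember the Proxmox config syntax right, you can also add individual flags like AES on top of a conservative base model rather than going full passthrough. A sketch, not tested, with a placeholder VM ID:

```
# /etc/pve/qemu-server/100.conf
cpu: kvm64,flags=+aes;+pcid
```

Proxmox only accepts a limited whitelist of custom flags here, so check the docs for which ones are allowed on your version.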

    Thanked by 1Jorge
  • jackbjackb Member, Host Rep
    edited August 2019

    @Falzo said:
    moving a VPS usually isn't a problem, even if the flags differ. AFAIK you can't do a live migration in a cluster though.

    Correct, live migration is unsupported when using strict match or host-model if the two hosts have different CPUs (strict) or CPU families with missing flags (host-model).

    Usually host-model is the best to set, imo. That way you have the option of live-migrating between two similar (but not identical) hosts if you need it, but the client also gets significantly more flags than the generic QEMU virtual CPU.
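For anyone mapping these terms: 'strict match' and 'host-model' are libvirt/QEMU concepts rather than Proxmox ones. In libvirt domain XML the modes being contrasted look roughly like this (a sketch):

```
<!-- exact/strict: the guest gets exactly the named model, nothing more -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Nehalem</model>
</cpu>

<!-- host-model: closest named model plus the host's extra flags -->
<cpu mode='host-model'/>

<!-- host-passthrough: expose the host CPU verbatim -->
<cpu mode='host-passthrough'/>
```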

    Thanked by 2Falzo Jorge
  • LeviLevi Member

    Basically:

    • If you want restrictions and know what you are doing: use CPU flags.
    • If you want no restrictions / easy migration to entirely different hardware: use passthrough.
  • JorgeJorge Member

    Falzo said: moving a VPS usually isn't a problem, even if the flags differ. AFAIK you can't do a live migration in a cluster though.

    Since we set up our cluster recently from scratch, we used the latest Proxmox 6, and there live migrations are supposed to be supported to other nodes inside the same cluster.
    I can't confirm it, for I have not managed to actually do a live migration yet; something fails with an error, but the documentation promises it should be possible.

    jackb said: host-model is the best to set imo

    I like offering my clients full options, especially since we'll have relatively few clients per server. We're planning on assigning dedicated resources to every client, so they can max out whatever they want without hindering anyone. Although bandwidth would have to be a shared resource.

    But that CPU flags would not be an issue for migrating a VPS to a different server (not a node in the same cluster, for all servers in the cluster are built on similar hardware) surprises me. That's where I was expecting problems. I guess I'll have to do some testing on that, and see what comes up.

  • jackbjackb Member, Host Rep
    edited August 2019

    @Jorge said:
    But that CPU flags would not be an issue for migrating a VPS to a different server (not a node in the same cluster, for all servers in the cluster are built on similar hardware) surprises me. That's where I was expecting problems. I guess I'll have to do some testing on that, and see what comes up.

    I am fairly sure there will be a problem for live migration between hosts of different CPU families (when using host-model) or with fewer flags available, even in the same family (strict). Offline migration should be no problem.

    Thanked by 1Jorge
  • I always do host CPU passthrough, so the client can see they actually have the CPU they paid for.

    Otherwise you'll just see qemu.

    Thanked by 2Jorge cybertech
  • jsgjsg Member, Resident Benchmarker

    @Jorge

    FWIW: I'm also confronted with that issue but from a different perspective (developer) and in my field (security and crypto) flags can make a very major difference in runtime speed.

    The best compromise I found so far is to compile for Intel Nehalem, which is old enough to be supported by almost all systems in actual use but at the same time young enough to have a reasonable set of flags (e.g. the SSE version, which is important).

    Speaking from the customer side, I strongly prefer to have the real CPU passed through, but I understand that hosters need migration capabilities and hence might prefer a limited vCPU.
    I'm probably not the average customer, but I do look at the vCPU I get, and there too I expect Nehalem as a minimum. The only exception is very cheap (<=15€/yr) VPSs.

    Thanked by 2Jorge uptime
  • JorgeJorge Member

    And as a side note: I can now confirm that on Proxmox 6.0 a VPS can be live (!) migrated from one node to another inside a cluster. The error I mentioned before was due to the firewall blocking the filesystem transfer (for local storage on the node) via NBD over TCP port 60000 to the receiving node.
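For reference, a cluster firewall rule along these lines should let that traffic through. This is a sketch from memory of the Proxmox firewall config format, with a placeholder subnet, so double-check the syntax and the exact port range against the docs:

```
# /etc/pve/firewall/cluster.fw
[RULES]
IN ACCEPT -source 10.10.10.0/24 -p tcp -dport 60000:60050  # migration traffic between nodes
```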
