Creating VPSes with various virtual CPUs, or CPU passthrough?
I am setting up a Proxmox cluster for selling VPSes in the future.
Working on various things, it all needs to come together, and it's slowly getting there.
But then... creating VPSes can be done with various CPU types, all offering different CPU flags, from almost nothing (kvm64) up to everything the host's processor supports (host), and everything in between.
I understand the idea behind it, from compatibility for moving VPSes between hosts to maximizing speed for the customer (assuming more flags translate into higher processing speed if the VPS user optimizes everything correctly).
But how does this work in practice for a VPS hoster?
Is there a 'best choice' for the virtual CPU? Or should you pass through all host flags to the VPS?
Do any customers really optimize (compile all their software with the available CPU flags in mind)?
Or would it just be for bragging rights?
But also, what if you need to move a customer's VPS to another host, and the flags of the new host do not line up with what the VPS got on the previous host?
What does your experience say about this?
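For reference, the range of options described above maps to a per-VM setting in Proxmox. A sketch (the VM ID 100 is a placeholder, and which named models are available depends on your QEMU version):

```shell
# Most compatible: generic virtual CPU with few flags, easiest to migrate
qm set 100 --cpu kvm64

# Middle ground: a named CPU model (assuming it exists in your QEMU build)
qm set 100 --cpu Nehalem

# Maximum flags: pass through the host CPU (fastest, hardest to migrate)
qm set 100 --cpu host
```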
Comments
trade secret
Moving a VPS usually isn't a problem, even if the flags differ. AFAIK you can't do a live migration in a cluster though.
I'd always pass through AES, for instance, as it helps reduce the load, if the guests can make proper use of it. For VMX (nested full virtualization) it's debatable whether it's needed, given the resource overhead nested virt can cause. The rest of the flags are probably not that interesting anyway.
Correct, live migration is unsupported when using a strict match or host model if the two hosts have different CPUs (strict) or CPU families with missing flags (host-model).
Usually host-model is the best choice to set, imo. That way you keep the option of live migrating between two similar (but not identical) hosts if you need it, and the client also gets significantly more flags than with the generic QEMU virtual CPU.
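If I recall the Proxmox 6 syntax correctly, you can also add individual flags on top of a conservative model; a sketch of the AES case mentioned above (VM ID 100 is a placeholder):

```shell
# kvm64 base model plus the AES flag: guests get hardware-accelerated
# crypto while the vCPU otherwise stays migration-friendly
qm set 100 --cpu cputype=kvm64,flags=+aes
```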
Since we set up our cluster recently from scratch, we used the latest Proxmox 6, and there live migrations to other nodes inside the same cluster are supposed to be supported.
I can't confirm that yet, for I have not managed to actually do a live migration; something fails with an error, but the documentation promises it should be possible.
I like offering my clients full options, especially since we'll have relatively few clients per server. We're planning on assigning dedicated resources to every client, so they can max out whatever they want without hindering anyone. Although bandwidth would have to be a shared resource.
But it surprises me that CPU flags would not be an issue when migrating a VPS to a different server (not a node in the same cluster, since all servers in the cluster are built on similar hardware). That's where I was expecting problems. I guess I'll have to do some testing on that and see what comes up.
I am fairly sure there will be a problem with live migration between hosts of different CPU families (when using host-model) or with fewer flags available, even in the same family (strict). Offline migration should be no problem.
I always do host CPU passthrough, so the client can see that they actually have the CPU they paid for.
Otherwise they'll just see a generic QEMU CPU.
@Jorge
FWIW: I'm also confronted with that issue, but from a different perspective (developer), and in my field (security and crypto) flags can make a major difference in runtime speed.
The best compromise I have found so far is to compile for Intel Nehalem, which is old enough to cover almost all systems in actual use, but at the same time recent enough to have a reasonable set of flags (e.g. the SSE version, which is important).
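A sketch of what targeting that baseline looks like with GCC (the source and output names are placeholders):

```shell
# -march=nehalem enables SSE through SSE4.2 and POPCNT, without
# requiring anything newer than first-generation Core i7 hardware
gcc -O2 -march=nehalem -o crypto_bench crypto_bench.c
```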
Speaking from the customer side I strongly prefer to have the real CPU passed through but I understand that hosters need migration capabilities and hence might prefer to choose a limited vCPU.
I'm probably not the average customer, but I do look at the vCPU I get, and there too I expect Nehalem as a minimum. The only exceptions are very cheap (<=15€/yr) VPSes.
And as a side note: I can now confirm that on Proxmox 6.0 a VPS can be live (!) migrated from one node to another inside a cluster. The error I mentioned before was due to the firewall blocking the filesystem transfer (local storage on the node) via NBD over TCP port 60000 to the receiving node.
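For anyone hitting the same error, a sketch of a cluster firewall rule that would allow that traffic, assuming the Proxmox firewall is in use and migrations stay in the documented 60000-60050 range (the source subnet is a placeholder for your cluster network; check the rule syntax against your version's docs):

```shell
# Fragment of /etc/pve/firewall/cluster.fw
# [RULES]
# IN ACCEPT -source 10.0.0.0/24 -p tcp -dport 60000:60050  # live/storage migration (NBD)
```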