Comments
I don't think you will get a true value. Why should a provider say: "OK, tbh, 200 is our limit"?
Wouldn't be good for their image.
I think, though I am not sure, that Xen HVM has a max of 512 while Xen PV is 256? Correct me if I am wrong. Anyone?
One important thing is the node HW spec, especially the RAM I think, e.g. a 32GB RAM node vs a 128GB RAM node.
And I think the answer to this question is a business secret?
@yywudi I thought CPU was the final limit, not the RAM. One Xeon across 200 clients is overkill, I think.
Miami: the max is 64
Phoenix: 100
Netherlands: 100
UK: 100
Germany: 32
Miami nodes are dual hex-core, 128GB RAM, 8-disk RAID 10
Phoenix, Netherlands, UK: quad core, 32GB RAM, 4-disk RAID 10
Germany: quad core, 32GB RAM, RAID 5
There you go
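Just to put those caps in context (this is my own quick math, not plan sizes quoted by the provider), here is the average RAM each VPS would get if a node were filled to its stated limit:

```python
# Rough per-VPS RAM averages at the stated caps for the node specs above.
# Back-of-the-envelope only; actual plan sizes and overselling will differ.
nodes = {
    "Miami":       {"ram_gb": 128, "max_vps": 64},
    "Phoenix":     {"ram_gb": 32,  "max_vps": 100},
    "Netherlands": {"ram_gb": 32,  "max_vps": 100},
    "UK":          {"ram_gb": 32,  "max_vps": 100},
    "Germany":     {"ram_gb": 32,  "max_vps": 32},
}

for name, spec in nodes.items():
    avg_mb = spec["ram_gb"] * 1024 / spec["max_vps"]
    print(f"{name}: ~{avg_mb:.0f} MB RAM per VPS on average at the cap")
```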
That's 68 MHz dedicated to every user on an E3, which is a shit ton for an idle VPS, since bash generally uses... 1 MHz?
Not to mention other people can use the unused CPU time. There is no hard limit on CPU; it only gets a bit slow when the scheduler has to cut shares.
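For anyone wondering where the 68 MHz comes from, here's the arithmetic, assuming a quad-core E3 at roughly 3.4 GHz and 200 VPS on the node (both assumptions on my part, not quoted specs):

```python
# Where the "68 MHz per user" figure comes from.
cores = 4
clock_mhz = 3400   # assumed per-core clock of a typical E3
vps_count = 200    # assumed number of VPS on the node

mhz_per_vps = cores * clock_mhz / vps_count
print(f"{mhz_per_vps:.0f} MHz dedicated per VPS")  # -> 68 MHz
```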
@AnthonySmith
How many on LES?
haha
Quad core (no HT), 4 or 8GB RAM, single disk. This is OpenVZ, though; my previous numbers are on Xen.
Example NL LES Node:
So I think this node would take another 1 - 200 containers, no problem.
This all depends on the hardware, as others have said; however, on our typical setups we have an arbitrary limit of 60 VPS per node.
The above limit is only a reference, though; really the number should be based on historical metrics for that node and the users on it. If we find that at 20 VPS a node is reaching the limits of its performance, given the usage profile of the clients on there, we will cap that node.
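Just to illustrate the idea (a minimal sketch only; the metric names and thresholds here are made up for the example, not anything the provider actually uses), a metrics-based cap could look something like:

```python
# Sketch of capping a node from usage history rather than a fixed VPS count.
# Thresholds are illustrative assumptions, not real operational values.
def should_cap_node(load_avg_per_core, iowait_pct, ram_used_pct):
    """Return True if the node should stop taking new VPS orders."""
    return (
        load_avg_per_core > 0.7   # sustained CPU pressure
        or iowait_pct > 15        # disks are the bottleneck
        or ram_used_pct > 85      # little headroom left for bursts
    )

# A node with only 20 VPS can still hit the cap if its clients are heavy users.
print(should_cap_node(load_avg_per_core=0.9, iowait_pct=5, ram_used_pct=60))  # True
```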
@AnthonySmith
Thanks for the information. Glad to see that you provide a transparent service.
I think the real limit is I/O performance. We put a maximum of 45 VMs per node (KVM); with more VMs you get a slowdown in I/O performance (8x HDDs in a RAID 10). You need SSDs if you want more VMs on a node that maxes out at 8x HDDs.
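Some rough numbers to back that up (textbook rules of thumb, not measurements from that node): assuming ~150 random IOPS per HDD spindle and the usual RAID 10 write penalty of 2:

```python
# Back-of-the-envelope IOPS per VM for 8 HDDs in RAID 10 shared by 45 VMs.
# 150 IOPS per spindle and the write penalty of 2 are generic assumptions.
spindles = 8
iops_per_disk = 150
vms = 45

read_iops = spindles * iops_per_disk        # all spindles can serve reads
write_iops = spindles * iops_per_disk / 2   # each write lands on two mirrors
print(f"~{read_iops / vms:.0f} read IOPS and ~{write_iops / vms:.0f} write IOPS per VM")
```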
Depending on the HW spec, our minimum is 44 VPS and our maximum is 55 VPS per node.