How many VPSs per box?

Hello everybody,

I want opinions regarding how many VPSs I can host on a server.
The infrastructure is described below:

Server)
- 2 physical Xeon processors (8 HT cores each, totaling 32 cores)
- 128 GB RAM (2 HDs in RAID-1 just for the hypervisor install)
- 2 network cards, 1 Gb/s each
- Redundant power supplies

Hypervisor: KVM

The VPSes will be stored on an external Ceph pool.

I'm thinking about 120 VPSs with these specs:

VPS)
- 1 GB RAM
- 1 vCPU
- 50 GB Storage

Comments are appreciated.
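
As a rough back-of-the-envelope check, here is how 120 such VPSs would share that box (sketched in Python; treating both 1 Gb/s NICs as carrying VM plus storage traffic is an assumption, not something stated above):

    # Rough capacity check for 120 VPSs on the server described above.
    vps_count = 120
    host_ram_gb = 128
    host_threads = 2 * 8 * 2   # 2 sockets x 8 cores x 2 HT threads = 32 threads (16 physical cores)
    nic_gbps = 2 * 1           # assumes both 1 Gb/s NICs are usable for VM + storage traffic

    print(f"RAM:     {vps_count * 1} GB allocated of {host_ram_gb} GB")
    print(f"CPU:     {vps_count} vCPUs on {host_threads} threads "
          f"({vps_count / host_threads:.1f} vCPUs per thread)")
    print(f"Storage: {vps_count * 50 / 1000:.1f} TB provisioned on the Ceph pool")
    print(f"Network: ~{nic_gbps * 1000 / vps_count:.0f} Mbit/s per VPS if all are busy at once")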

Comments

  • @DanielBRSP said:
    2 network cards, 1 Gb/s each

    I think you may find this to be your bottleneck in this case. What kind of IO are you looking to achieve?

  • Maybe a few hundred. Overselling can go to great lengths. I remember once I had nearly 70 VPSs on an SC1425.

  • Virtovo said: I think you may find this to be your bottleneck in this case. What kind of IO are you looking to achieve?

    It's difficult to plan for the demand, but I initially thought of at least a constant 5 IOPS per VPS.
    You think that 2 NICs are not enough?

    TarZZ92 said: Maybe a few hundred. Overselling can go to great lengths. I remember once I had nearly 70 VPSs on an SC1425.

    That's cool. What were the server specs? RAM, CPUs, etc.? Did you use local storage?

  • raindog308 Administrator, Veteran

    DanielBRSP said: 2 physical Xeon processors (8 HT cores each, totaling 32 cores)

    No, totaling 16 cores.

    HT may give some performance boost (+30% best case) but 1 HT core != 1 physical core by any stretch of the imagination.

  • @DanielBRSP said:
    a constant 5 IOPS per VPS.

    5 IOPS = Approximately 20 MB/s. That's not nice.

  • DanielBRSP said: That's cool. What were the server specs? RAM, CPUs, etc.? Did you use local storage?

    Dual core, 8GB, 2TB HDD

  • raindog308 said: 1 HT core != 1 physical core

    You are absolutely right, but do you think this is not a good ratio?

    forthcloud said: 5 IOPS = Approximately 20 MB/s. That's not nice.

    Can you clarify your math?
    What would you expect from a VPS in terms of IOPS? Please consider that I'm not offering a VPS for storage purposes.

    Thanks for all the comments.

  • perennate Member, Host Rep
    edited June 2014

    It depends on what you plan on running on the virtual machines. If you just plan on creating idle VMs, you can have hundreds of them; just make sure to add a script so that they don't use up the memory, or use memory ballooning (a minimal sketch follows below). Of course, that'd be kinda pointless.

    Edit: and I agree with @Virtovo; if one of the network cards is for the VM network, then you only have 1 Gbit of I/O speed total, and that's in the best case.
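
    For illustration, a minimal sketch of live memory ballooning through libvirt-python; the guest name "vps-001" is hypothetical, and the guest needs a virtio-balloon device for the resize to take effect:

        # Minimal memory-ballooning sketch using libvirt-python (guest name is hypothetical).
        import libvirt

        conn = libvirt.open("qemu:///system")      # local KVM hypervisor
        dom = conn.lookupByName("vps-001")         # hypothetical guest with a virtio-balloon device

        # Lower the live balloon target to 512 MiB (the argument is in KiB); the reclaimed
        # memory is returned to the host without rebooting the guest.
        dom.setMemoryFlags(512 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

        print(dom.memoryStats())                   # balloon/RSS figures, if the guest reports them
        conn.close()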

  • perennate said: It depends on what you plan on running on the virtual machines.

    I'm planning to sell them, so I can't exactly predict the I/O usage. My plan is to establish a minimum IOPS per VPS, keep monitoring and tracking the usage, and eventually upgrade the network/Ceph infrastructure.

    perennate said: then you only have 1 Gbit of I/O speed total, and that's in the best case.

    In that case, should I bond 2 or 4 NICs in order to gain 2 Gb/s or 4 Gb/s of throughput? Or will a 10 Gb/s NIC do?

  • 10Gbps is the way to go for a storage network in the modern world.
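
    For a sense of scale, the per-VPS ceiling at different storage-link speeds, assuming all 120 VPSs hit the disk at once and roughly 80% of line rate is usable (both assumptions, not figures from the thread):

        # Per-VPS storage bandwidth ceiling for 120 concurrently busy VPSs.
        vps_count = 120
        for link_gbps in (1, 2, 4, 10):
            usable_mb_s = link_gbps * 1000 / 8 * 0.8   # Gbit/s -> MB/s, minus ~20% assumed overhead
            print(f"{link_gbps:>2} Gbit/s: ~{usable_mb_s:.0f} MB/s total, "
                  f"~{usable_mb_s / vps_count:.1f} MB/s per busy VPS")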

  • perennate Member, Host Rep

    DanielBRSP said: In that case, should I bond 2 or 4 NICs in order to gain 2 Gb/s or 4 Gb/s of throughput? Or will a 10 Gb/s NIC do?

    Before you upgrade the port, make sure that your Ceph cluster can handle that much.

  • raindog308 Administrator, Veteran

    DanielBRSP said: The VPSes will be stored on an external Ceph pool.

    I'm thinking about 120 VPSs with these specs:

    VPS) - 1 GB RAM - 1 vCPU - 50 GB Storage

    What is

    • the Ceph config (disk specs and RAID)
    • your VPS server's connection to it?

    I'm curious if you mean 2x1Gbps NICs = both network I/O and storage I/O on the same NICs.

  • raindog308 said: the Ceph config (disk specs and RAID)

    As far as I can remember, we have three HP ProLiant DL360p Gen8 servers, each with 8 x 600GB 15K SAS HDDs in RAID10 and 2 x 10Gbps NICs (total storage per node = 2.4TB).

    raindog308 said: your VPS server's connection to it?

    The server has 2 x 1 Gbps NICs.

    raindog308 said: I'm curious if you mean 2x1Gbps NICs = both network I/O and storage I/O on the same NICs.

    At first I meant one 1Gbps NIC for storage, but now I'm considering bonding 2 x 1Gbps NICs or even using a 10Gbps NIC.

    Please correct me if I'm wrong. The Ceph pool is more than capable of sustaining just 1 server like the one I've described; the plan is to test the VPS-per-server ratio and then add more servers.
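
    A quick sanity check of that pool against the 120 x 50 GB plan (the Ceph replication factor is an assumption here; it isn't stated in the thread):

        # Usable Ceph capacity vs. provisioned space, for assumed replication factors.
        nodes = 3
        usable_per_node_tb = 2.4            # 8 x 600GB 15K SAS in RAID10, per the post above
        provisioned_tb = 120 * 50 / 1000    # 120 VPSs x 50 GB = 6.0 TB

        for replicas in (1, 2, 3):          # assumed replication factors
            usable_tb = nodes * usable_per_node_tb / replicas
            note = "needs thin provisioning" if usable_tb < provisioned_tb else "fits"
            print(f"size={replicas}: {usable_tb:.1f} TB usable vs "
                  f"{provisioned_tb:.1f} TB provisioned ({note})")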

  • raindog308 Administrator, Veteran

    DanielBRSP said: The Ceph pool is more than capable of sustaining just 1 server like the one I've described; the plan is to test the VPS-per-server ratio and then add more servers.

    I'm wondering what a Ceph cluster in this case really buys you. Let's say you get to 3 Ceph nodes + 3 VPS nodes. Is that really more profitable than just 6 VPS nodes with internal storage? Would simply making those Ceph nodes into VPS nodes make more sense?

    I understand the attraction of shared storage, pooling free space, centralized admin, etc...just wondering if it really makes financial sense.

  • fileMEDIA Member
    edited June 2014

    raindog308 said: I'm wondering what a Ceph cluster in this case really buys you. Let's say you get to 3 Ceph nodes + 3 VPS nodes. Is that really more profitable than just 6 VPS nodes with internal storage? Would simply making those Ceph nodes into VPS nodes make more sense?

    You can use the Ceph nodes for VPS hosting too; Proxmox, for example, supports this out of the box. You only need more RAM for handling the I/O. But the 2 x 1 Gbit/s ports are no fun in this case; those ports must be at least 10 Gbit/s.

    We are using InfiniBand QDR (40 Gbit/s) for the storage part because it is cheaper than Ethernet in this performance range. The Ceph pool is a cluster with at least 3x SE326M1 + MSA70, and you will get around 3 GB/s over InfiniBand.

  • @DanielBRSP said:
    - 2 physical Xeon processors (8 HT cores each, totaling 32 cores)

    16 cores with 32 threads dude.

  • AnthonySmith Member, Patron Provider
    edited June 2014

    On KVM, if you have the 2 SSDs for your OS and a large swap, then 120-150 ish, but I would sooner make 50% less profit and sell 32 much bigger plans at a killer price to achieve stability and reduce churn due to performance.

  • @AnthonySmith said:
    On KVM, if you have the 2 SSDs for your OS and a large swap, then 120-150 ish, but I would sooner make 50% less profit and sell 32 much bigger plans at a killer price to achieve stability and reduce churn due to performance.

    I'd agree with this. I wouldn't sell anywhere near 150 on one node. Try to make a package that you'd be sure to profit from.

    If you oversell your server, you'll have problems later on.

  • @DanielBRSP said:
    Can you clarify your math? What would you expect from a VPS in terms of IOPS? Please consider that I'm not offering a VPS for storage purposes.

    What if somebody runs a database on the VPS?

  • WebProject Host Rep, Veteran

    2 HDs in RAID-1

    It will be a pain in the neck if you use normal hard drives, as users will suffer very slow speeds.

  • raindog308 said: I'm wondering what a Ceph cluster in this case really buys you

    You are right, financially speaking, it's not the best option, but what about live migration, centralized snapshots, the possibility to expand faster, and all the other things that you've mentioned?

    fileMEDIA said: We are using InfiniBand QDR (40 Gbit/s) for the storage part because it is cheaper than Ethernet in this performance range. The Ceph pool is a cluster with at least 3x SE326M1 + MSA70, and you will get around 3 GB/s over InfiniBand.

    Unfortunately, I have no knowledge of InfiniBand, so we would need to use 10 Gbps Ethernet NICs. Your Ceph cluster is quite impressive; what is your total storage capacity? How many VPSs or VMs are you serving with it?

    AnthonySmith said: On KVM, if you have the 2 SSDs for your OS and a large swap, then 120-150 ish, but I would sooner make 50% less profit and sell 32 much bigger plans at a killer price to achieve stability and reduce churn due to performance.

    That's a business decision. I expect to be able to provide different types of VPS (faster I/O, for example) and charge more for them. But your SSD statement implies that I should use internal storage, right? I don't see how I could achieve the SSD speed with Ceph unless I...

    forthcloud said: What if somebody runs a database on the VPS?

    Exactly, the problem is the "what if". That's why I'm trying to set a minimum IOPS per VPS; otherwise the cost will be too high. If I establish a minimum of 10 IOPS per VPS, some will run at 5, some at 20...

  • WebProject said: It will be a pain in the neck if you use normal hard drives, as users will suffer very slow speeds.

    These are just for the hypervisor installation. VPSs will be stored on Ceph pools.

  • raindog308 Administrator, Veteran

    DanielBRSP said: You are right, financially speaking, it's not the best option, but what about live migration, centralized snapshots, the possibility to expand faster, and all the other things that you've mentioned?

    The question is whether that is a significant differentiator in the marketplace. Those things may make your life easier, but as a consumer, they wouldn't make me choose you. At some scale, these things become important for manageability, but whether they're that important starting out... I don't know. How fast will you fill enough nodes that these things become important?

    DanielBRSP said: I don't see how I could achieve the SSD speed with Ceph

    I think you're going to have slow disk speeds, competing in a world that is moving to SSD.

    But it's just my opinion. At least you're planning ahead and have a better idea of infrastructure than "dude, let's rent a dedi", so I respect that. Good luck!

  • Maounique Host Rep, Veteran

    DanielBRSP said: As far as I can remember, we have three HP ProLiant DL360p Gen8 servers, each with 8 x 600GB 15K SAS HDDs in RAID10 and 2 x 10Gbps NICs (total storage per node = 2.4TB).

    Your storage is insanely expensive on the computing side. I mean, Gen8 CPUs for storage? 10 Gbps can be handled by any Gen5; IMO, you basically need a big case for the disks with some RAID card and at most 4 cores attached. You may wish to double that to be on the safe side, but your current config will have the CPU at some 0.0x load most of the time, especially with a good RAID card.

  • DanielBRSP said: Unfortunately, I have no knowledge of InfiniBand, so we would need to use 10 Gbps Ethernet NICs. Your Ceph cluster is quite impressive; what is your total storage capacity? How many VPSs or VMs are you serving with it?

    IPoIB isn't really hard to configure. You only need the IPoIB drivers and then you get a normal NIC with an IP address. One cluster uses 48 x 512GB SSDs and the other 44 x 1TB HDDs + SSD journaling disks for the storage pool. A cluster hosts around 800 VMs.

  • raindog308 said: I think you're going to have slow disk speeds, competing in a world that is moving to SSD.

    That is something to consider; maybe, as I said before, an SSD VPS would be a different option in my business spectrum.

    raindog308 said: At least you're planning ahead and have a better idea of infrastructure than "dude, let's rent a dedi", so I respect that. Good luck!

    I appreciate all your comments. Although I consider myself a technical person, there are some things that I want to know better before launching a VPS provider. I still want to have a better understanding of the typical VPS user profile, their needs and expectations. When I get those answers, I'll move on ;)

    @Maounique Thanks for your comment. I'll reconsider the HW spec.

    fileMEDIA said: IPoIB isn't really hard to configure. You only need the IPoIB drivers and then you get a normal NIC with an IP address.

    But even using IPoIB, I will need to use HCAs and IB switches/cables, right?

    fileMEDIA said: 44 x 1TB HDDs + SSD journaling disks

    OK, two questions about that:

    • Is utilizing Ceph journaling with SSDs the same thing as a Ceph cache pool (writeback)?

    • Considering your 44 x 1TB HDDs + SSD journaling disks, what is your average IOPS per VPS?

  • DanielBRSP said: But even using IPoIB, I will need to use HCAs and IB switches/cables, right?

    Yes, sure, it's the same problem as for 10G Ethernet. But InfiniBand equipment is much cheaper to get.

    DanielBRSP said: Is utilizing Ceph journaling with SSDs the same thing as a Ceph cache pool (writeback)?

    Considering your 44 x 1TB HDDs + SSD journaling disks, what is your average IOPS per VPS?

    Journaling and a cache pool are not the same. Read the documentation.

    Do the math: 44 * 100 / 800 = 5.5 IOPS without caching.

    I think you should contact a consultant for these questions. You cannot deploy it properly when you must ask people here in this forum.
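
    That figure is the usual spindle estimate: roughly 100 random IOPS per HDD, divided across the VMs. Generalized as a sketch (the 100 IOPS per spindle value and the second example are assumptions; replication, SSD journaling and caching are ignored):

        # Spindle-based IOPS-per-VM estimate in the style of "44 * 100 / 800 = 5.5".
        def iops_per_vm(spindles, vms, iops_per_spindle=100):
            # ~100 random IOPS per HDD is a rough rule of thumb; SSD journals,
            # replication and caching are deliberately left out of this estimate.
            return spindles * iops_per_spindle / vms

        print(iops_per_vm(44, 800))    # ~5.5 IOPS per VM, matching the figure above
        print(iops_per_vm(24, 120))    # hypothetical: 3 nodes x 8 disks serving 120 VPSs -> 20.0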

  • DanielBRSP Member
    edited June 2014

    fileMEDIA said: Do the math: 44 * 100 / 800 = 5.5 IOPS without caching.

    I think you should contact a consultant for these questions. You cannot deploy it properly when you must ask people here in this forum.

    Well, I was hoping you could tell me your IOPS considering the SSD journaling.
