
Hundreds or thousands of other people on your server?

Damian Member
edited August 2012 in Providers

Notwithstanding CPU, disk I/O, and memory issues, how would you feel if you knew, or discovered, that your VPS container was on a server shared with several hundred, and up to a thousand, other VPS containers?

Like I said, I'm not really concerned about the resource issues themselves; I'm more interested in people's feelings on the subject. For the scope of the question, assume you would never have CPU/disk/memory issues.

I am no longer affiliated with IPXcore.

Comments

  • Taz Disabled
    edited August 2012

    If there is no issue, why should I worry? It would be the same as asking: how would you feel if your DC put your server in a room shared with a hundred more servers?

    Time is good and also bad. Life is short and that is sad. Dont worry be happy thats my style. No matter what happens i won't lose my smile!

  • are you trying to tell us something about your services?

  • Why would I care how many people are on the node if the performance is good?

    Disclosure: I work for Query Foundry LLC.
    I own DA International Group Ltd.
  • Heinz Member
    edited August 2012

    Without regular problems, I would feel better than I do with, let's say, an often-failing Sapphire node.

    You have your way. I have my way. As for the right way, the correct way, and the only way, it does not exist. ~ Friedrich Nietzsche

  • As I was relating in IRC just now, when I started following LEB/LET, people would get up in arms about a VPS node having more than 40 or 50 containers. That was back in the days of single-core Xeons, and that's not the case anymore.

    Now that we're getting to the point where servers easily have 24 threads or more, 15k SAS or SSD drives, and DDR3 is so cheap, at what point does it become 'too much', not from a resources standpoint, but from a too-many-eggs-in-one-basket standpoint?

    @JoeMerit said: are you trying to tell us something about your services?

    Somewhat. I was looking at servers today, noticed there were some with 96 to 192 GB of RAM in our price range, and was thinking about how 'high-density' that is. But with our marked lack of network support when bad things happen, I'd really rather not have 1000+ people beating down our virtual door for answers when a single node becomes inaccessible.

  • ipxadam Member
    edited August 2012

    @Damian said: I'd really rather not have 1000+ people beating down our virtual door for answers when a single node becomes inaccessible.

    ahaha good way of putting things

    IPXcore fast virtual servers located in United States.

  • u4ia Member

    So long as I get my fair share of resources when I need them, I don't care how many neighbors I have. My guess is that disk I/O is the biggest bottleneck; even with high-end 15k SAS drives or SSDs, how many can realistically share? Has anyone ever come up with a ballpark set of statistics?
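The ballpark asked for above can at least be sketched on the back of an envelope. Every figure here (per-drive IOPS, RAID write penalty, container count) is an illustrative assumption, not a measurement:

```python
# Back-of-envelope random-IOPS budget per container.
# All numbers below are illustrative assumptions, not benchmarks.
def iops_per_container(drives, iops_per_drive, raid_write_penalty, containers):
    """Rough random-IOPS share per container on a RAID array."""
    # Usable random IOPS for a mixed workload; the penalty factor
    # models the extra writes RAID levels incur (RAID10 ~ 2).
    array_iops = drives * iops_per_drive / raid_write_penalty
    return array_iops / containers

# Example: 8 x 15k SAS (~180 IOPS each), RAID10, 500 containers.
share = iops_per_container(drives=8, iops_per_drive=180,
                           raid_write_penalty=2, containers=500)
print(f"{share:.2f} IOPS per container")  # 1.44 IOPS per container
```

With numbers that small, the model only works because most low-end containers idle; a handful of busy neighbors eat the whole budget.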

    @NInjahawk Quite different, actually. With a room full of servers, you know for a fact that the only bottlenecks are the network/switch speed and capacity, and power.

    Compare that to a provider with a few servers in a DC: you have to worry about the density of the DC (which is usually fine, right? OVH can cram 36k servers into their new CA location fine, right?).

    What poses the question, as @Damian is trying to ask, is: how much is too much? I don't see a problem as long as the entire system is stable and allocated resources are usable. What is a worry is how much load and strain the server will be given; I can imagine hard drives not lasting their intended lifespan if you have 1000 OSes running on them, doing their crons and scheduled tasks, and whatever else the server is being used for.

    You could also put this somewhat in the perspective of how Amazon is doing: they're clearly cloud-based, but given their popularity they will have hundreds of thousands of people using the same services. You don't know how hard the previous user hammered that server, or how much load any server you get has been under.

    But decent providers with knowledge will be able to detect any "problems" with allocation and disk lifespan, and replace/reshuffle users accordingly.

    Also, @Damian -- Can't you get your own IP range now with 1000+ users? :P

  • Taz Disabled

    @eastonch Damian did say to assume it doesn't affect your VPS.


    On a VPS node with 192 GB of RAM there would be a variety of bottlenecks: starting with the HDDs and the network, and on to the CPU, which would most likely become a bottleneck well before the RAM is filled, even with 24 threads. I am quite sure that investing in the proper hardware to get a monster VPS node up and running would cost much, much more than buying a couple of lower-end servers that, in the end, will handle more customers with better performance.

  • Virtualization on a single hardware node doesn't scale infinitely.

    Rebooting a monster node is gonna take time as each vps/container gets started. And god forbid it's OVZ/Virtuozzo booting after an unclean shutdown, and quotas have to be rebuilt.

    Or it's time for a fsck.
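The sequential-start cost mentioned above is easy to put in rough numbers; the per-container start time here is an assumed figure for illustration:

```python
# Rough estimate of how long a node takes to bring all containers back up
# after a reboot, assuming they start sequentially. The 3-second
# per-container figure is an assumption, not a measurement.
def reboot_minutes(containers, seconds_each=3):
    """Minutes to start all containers one after another."""
    return containers * seconds_each / 60

print(reboot_minutes(100))   # 5.0
print(reboot_minutes(1000))  # 50.0
```

A tenfold density increase turns a coffee-break reboot into the better part of an hour, before any quota rebuild or fsck is counted.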

    I would personally feel more secure. It means the host has many customers, is a stable one (well, except for freebies), can scale costs down per customer, and is likely to last. Also, they wouldn't leave the node unsupervised: the more of us there are, the more likely a DDoS or something, so it needs constant supervision; abuse will be detected and limited fast, etc. Granted, this is hard to do, so the host must know its s**t. Usually you don't get that many customers, and you will fail faster if you don't know things, unless you're super lucky. Drawbacks: frequent abuse and small downtimes from network or I/O spikes; reboots will take a long time. Advantages: a mature host that's been around a long time; good supervision; there is safety in numbers :)

    Getting back to numbers and away from personal preference: I think a server with more than, say, 64 GB of RAM running small containers which are not idle should hit a wall at about 500 because of the I/O bottleneck. With extremely good hardware, 128 GB of RAM and 1k containers is probably possible if we're talking OVZ. In real life, most small-resource LEBs are either idle or used as VPNs, which do not use much I/O, so I won't be surprised to see more than 1k on a good server, perhaps even 2k. M

    Who's General Failure, and why is he reading my drive A: ?
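The "hit a wall at about 500" estimate above amounts to taking the minimum of a RAM-bound and an I/O-bound container count. A sketch of that reasoning, with assumed figures for array IOPS and per-VPS I/O demand:

```python
# Container density as the minimum of two resource ceilings.
# Array IOPS and per-VPS IOPS demand are assumed figures.
def max_containers(node_ram_mb, vps_ram_mb, array_iops, iops_per_vps):
    """Containers supportable before RAM or disk I/O runs out."""
    ram_bound = node_ram_mb // vps_ram_mb
    io_bound = array_iops // iops_per_vps
    return int(min(ram_bound, io_bound))

# 64 GB node, 128 MB containers, ~3000 usable array IOPS,
# ~5 IOPS per non-idle VPS:
print(max_containers(64 * 1024, 128, 3000, 5))  # 512
```

Here RAM is the binding constraint (512 vs 600); with smaller containers or slower disks the I/O bound takes over, which is the wall the comment describes.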

  • Taz Disabled

    @Maounique Dozens of SSDs in RAID10 sounds like a good HD config :P


  • Maounique Member
    edited August 2012

    Yes, and with containers of 2 GB of space you could cram in up to 2k, I think. People should stop worrying about this; as with anything else, there are good and bad aspects to any decision. If one way were clearly best, we wouldn't have such a huge choice of plans. M


    Put in as many as you like; when you notice the first bottleneck, be it RAM, CPU, or disk, stop adding containers. Simple as that.

    http://www.lowendguide.com/ - the guides to administer your lowend vps | Make money writing tutorials
    Free CPanel Shared Hosting Locations: Miami (US) | Rotterdam (NL)
  • Jar Member

    I welcome "different" solutions. I'm tired of this idea that everything has to be cookie cutter, has to fit everyone's perception of "the way it should be." The truth is, a good system administrator should be left to their own devices. If you secure it, it performs as expected or better, never falling below acceptable levels, and you can handle a crisis, that is what matters. No one needs to know how you do it, just that you do it.

    Popular opinion of your methods is, unfortunately, important. When providing unmanaged VPS, your clients are usually some type of provider themselves. Everyone in this industry that isn't a system administrator is an armchair administrator. They've got opinions, they don't even know why, but it affects their purchasing habits. Don't quote me on this, it's just my observation. I would like to see VPS startups continue breaking traditions and forging their own paths. I would absolutely love to hear that you've put 2,000 people on a machine and everyone has room to breathe. Would I trust it? I'm not certain I would, but I would be intrigued by it.

  • @Alex_LiquidHost said: I am quite sure that investing in the proper hardware to get a monster VPS node up and running would cost much, much more than actually buying a couple of lower-end servers that at the end will handle more customers with better performance.

    You would be quite wrong then, on some points... though you do make some very valid points at the same time. I have decided to change my node builds for larger OpenVZ-based VPSes (512 MB-2 GB), which now use E5-based nodes with 12-16 disk arrays, but the smaller VPSes have me concerned about this very topic. Sure, I can get more containers on a larger node like these, but at what point do bad things start happening just based on the number of containers? You can even shrink the node to something with 16 GB, a bit larger than off-the-shelf but a semi-generic VPS node: @damian sells a very popular 32 MB VPS, and without even selling 50% of his memory, that is 256 containers on a node, which is a /24 of IP space. What sort of weirdness can one expect then? That is my question. What clients think is not my worry; what the hardware does is.

    Hostigation High Resource Hosting - SolusVM OpenVZ/KVM VPS
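The 32 MB example above is simple arithmetic: containers per node at a given memory sell-through, plus the smallest IPv4 prefix that covers one address per container (assuming one IP each):

```python
import math

# Containers per node at a given memory sell-through, and the smallest
# CIDR prefix holding one IPv4 address per container. The 50% figure
# matches the example in the comment; all values are illustrative.
def containers_and_prefix(node_ram_mb, vps_ram_mb, sell_fraction=0.5):
    containers = int(node_ram_mb * sell_fraction // vps_ram_mb)
    # Smallest prefix length whose block covers `containers` addresses.
    prefix = 32 - math.ceil(math.log2(containers))
    return containers, prefix

print(containers_and_prefix(16 * 1024, 32))  # (256, 24) -- a /24
```

So even a modest 16 GB node, half-sold in 32 MB slices, consumes a full /24, which is where the IP-space worry comes from.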
  • As long as I'm happy and feel I get what I pay for, I couldn't care less :)

  • @jarland said: I welcome "different" solutions.

    Yes. Most will probably fail at some point, but lessons will be learned. We're not dealing only with numbers and math here; it is also people's behaviour. You can have 1k customers with idle machines, or 100 pushing the limits. If they keep the same pattern it is still fine, but what do you do when 100 of the 1k start pushing the limits? A good admin will be prepared for such an event from the moment of the sale. In practice it is touch and go, but in the long run every host attracts a certain type of customer that in time forms the majority. The others leave over various issues, be it price, support, boredom... In my view, diversity of offers and customers will make the host most resilient to market changes, though it will not be easy to cater to so many different views and needs. Listen to the market, and don't try to remake the customers after your own model. M


    Thanked by: Jar
  • Zen Member
    edited August 2012

    Put shortly, the risk of a major &%^!up increases with higher density. Not just with servers, either.

    I would rather be on a lower end node with 100 users than a higher end node with 200+ users. That's unless I was with a big corporation that can handle it. I would never trust a LEB provider with such numbers.

  • @Damian Obviously it's recently been shown that you can have small nodes and still have them taken offline due to bad decisions with the datacentre/hardware.

    All of these decisions should be made as part of a larger business plan and should not be an afterthought.

    HTTP Zoom - httpzoom.com
  • IMO, the biggest worry for me would be "what if someone abuses?" - especially for a smaller company where they (usually) can't afford to have someone monitoring 24/7.

    Looking for support, sysadmin, etc. work: PM
    Working on VPSM
  • jcaleb Moderator

    It's OK for me. If I choose a reputable company with a good history, it knows what it's doing.

    Twitter Bootstrap Themes for your software projects. I recommend Prometeus and Catalyst Host

  • As a customer my perspective is that if it's stable/reliable then it doesn't matter.

    As a supplier, my perspective is that I'd probably rather spread my load out a bit, so that a) I can move everyone off a single node safely in case it needs to go offline, and b) if there are dramatic problems, they affect a smaller proportion of my customers than if everyone were on one node.

    If 1 node goes down and you have 4 nodes with 100 users each then 25% of your customer base is affected. If 1 node goes down and you have 2 nodes with 200 users each then 50% of your customer base is affected. That is a pretty huge difference.

    Ransom IT | ɹǝpun uʍop sdʌ | vps down under | KVM in Sydney and Adelaide | OpenVZ in Adelaide
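The failure-domain arithmetic in the comment above, as a one-line function:

```python
# Share of the customer base hit when a single node fails,
# for an even spread of customers across nodes.
def affected_fraction(total_customers, per_node):
    """Fraction of customers on the one failed node."""
    return per_node / total_customers

print(affected_fraction(400, 100))  # 0.25 -> 25% (4 nodes of 100)
print(affected_fraction(400, 200))  # 0.5  -> 50% (2 nodes of 200)
```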
  • pcan Member

    Actually, a popular trend in enterprise solutions is to replace many small nodes with a few bigger ones. You can save on software licensing costs (for example, Microsoft solutions are licensed per processor socket), but the real driver is that big servers are way, way more reliable than smaller ones. For example, recent Intel 4-socket processors can keep going even after an unrecoverable RAM error: the VPS using that block of memory will be halted, the memory marked as bad, and the other VPSes on the same node will continue working as usual. You will typically benefit from dozens of disks with awesome I/O speed, and you can do node maintenance without shutting down the guest VPSes. In the enterprise it is not unusual to have hundreds of users depending on a single host server. The big server can also be cheaper in a multi-year investment scenario, and brings a benefit in reliability that is difficult to monetize, but very real. The problem is: this solution needs a big upfront capital investment that will repay itself only after years, something no LEB provider will typically accept.

    @pcan said: Actually, a popular trend in enterprise solutions is to replace many small nodes with a few bigger ones. [...]

    This! It also saves power and resources, and lowers the cost; after all, why are we doing virtualization? It is a normal trend, I think. M


  • My background is in large enterprise setups. When the team I was in started looking into using VMware for actual production servers back in 2007-2008, one of the largest DIY stores in the UK revealed that they had just consolidated all of their servers (if I remember correctly, 200+) onto just 4 physical servers.

    I've looked at a server + storage setup for offering VPS; the cost was about £50,000 (about $80,000) a year ago. It's doable, and can even be profitable (payback possible within a year), but you wouldn't be able to put the setup into an existing network. You're basically looking at an HPC cluster.

  • @bit: That's actually the pivot right there: in an enterprise setting, it's expected that enough money can be thrown at the problem to get acceptable performance. Due to low profit margins in the low-cost VPS market, densities need to be high; I was mostly wondering when people thought it was "too much" regardless of the hardware's capabilities.

  • These densities certainly concern me: potentially massive resource-contention fights. It depends on the management and policies to keep things running smoothly. Disk I/O in such a model would be my main point of concern. You're certainly getting into mass NIC bonding or 10GbE interfaces.

    Massive impact in case of hardware failure.

    I don't think this matter should be apparent to end users, though. A VPS is just a small sliced box within something.

  • @pubcrawler said: Massive resource contention fights potentially

    Yeah, I think that is a problem, and it can be seen mostly in OVZ and KVM for some reason. Xen does not seem to develop problems when big servers are sliced into small pieces, as long as the hardware can hold it. As density per node rises, OVZ starts abusing IRQs first, then CPU usage explodes in KVM. YMMV, of course; I only saw a few particular cases. M


  • Damian Member
    edited November 2012

    @Maounique said: OVZ starts abusing IRQs

    Just a bit of info on this: enabling IRQ balancing (and making sure it's working automatically, and forcing it to work when it's not) helps out immensely with improving system interactivity 'feel'.

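A rough way to eyeball whether IRQ balancing is actually working is to total the per-CPU columns of /proc/interrupts. This is a Linux-specific sketch; the function is kept pure so it can be fed a sample (the sample text below is fabricated for illustration, and on a live node you would pass `open('/proc/interrupts').read()`):

```python
# Sum the per-CPU interrupt counts from /proc/interrupts text.
# A heavily skewed result suggests IRQ balancing is not working.
def irq_totals_per_cpu(proc_interrupts_text):
    lines = proc_interrupts_text.strip().splitlines()
    n_cpus = len(lines[0].split())  # header row: "CPU0 CPU1 ..."
    totals = [0] * n_cpus
    for line in lines[1:]:
        fields = line.split()[1:1 + n_cpus]  # skip the "NN:" label
        for i, field in enumerate(fields):
            if field.isdigit():  # skip summary rows like ERR:/MIS:
                totals[i] += int(field)
    return totals

# Fabricated two-CPU sample showing a skewed distribution:
sample = """\
           CPU0       CPU1
  0:       1200        300   IO-APIC-edge      timer
 19:         50       5000   IO-APIC-fasteoi   eth0
"""
print(irq_totals_per_cpu(sample))  # [1250, 5300]
```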