Dedicated slices: going to become common?

I did some reading on dedicated slices and only found BuyVM offering packages. It looks good for folks who need memory more than cores, and for those applications it appears to be quite a bit cheaper than typical KVM VPSes.

So here's a general question, given that I don't follow this stuff closely: do you think other providers like DO, RamNode, OVH, etc. are going to follow suit and offer packages like this in the future? (Of course, if some already do, please mention/link.)

Comments

  • Is this a joke?

  • It's a BuyVM thing and a BuyVM term though. _Fran_kly the pricing is pretty hard to beat, unless Fran shares how he did it.

  • It's essentially the same as any VPS when it's a 1/4 or 1/2 core slice, but when it's above that, such as one fully dedicated core, you start getting a whole new kind of customer group that is used to getting kicked for resource overuse. Now the chances they stay with you are high.

    When someone wants to do resource-intensive stuff, everyone says "Buy a dedicated". Now there's a cheaper alternative.

  • AlexBarakov Patron Provider, Veteran

    busbr said: It's a BuyVM thing and a BuyVM term though. _Fran_kly the pricing is pretty hard to beat, unless Fran shares how he did it.

    It shouldn't be that hard to imagine how it's done. E3s come with 4 cores, 8 threads. As far as I remember, Fran said they offer a dedicated thread per customer (more or fewer threads on different plans). That makes a total of 8 users on the 1 dedicated core/thread plan, equaling $120 per month per E3 machine (not counting fees, etc.), which is doable when done correctly.
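
    A quick back-of-envelope version of that math (a minimal sketch; the $15/slice and overhead figures are assumptions, the $15 simply being $120 divided by 8 as implied above, not confirmed BuyVM pricing):

```python
# Rough per-node revenue estimate for dedicated thread "slices".
# Illustrative assumptions only: one E3 node = 4 cores / 8 threads,
# one dedicated thread per customer, $15/month per slice ($120 / 8 as
# implied in the post above; not confirmed BuyVM pricing).
THREADS_PER_NODE = 8
PRICE_PER_SLICE = 15.0    # USD/month, assumed
MONTHLY_OVERHEAD = 40.0   # colo, power, IPs, payment fees per node, assumed

gross = THREADS_PER_NODE * PRICE_PER_SLICE
net = gross - MONTHLY_OVERHEAD
print(f"Gross per E3 node: ${gross:.0f}/month, net after overhead: ${net:.0f}/month")
```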

    Thanked by: busbr
  • stefeman Member
    edited December 2016

    You should note that Fran buys and colocates the hardware, so he doesn't have to make the profit back monthly to renew the servers. I imagine they could be running up to six months negative paying for the servers and power/colo bills, and after that it's pure profit for each server.

    Time is the key here, I suppose. If you're reselling OVH and have to make back €200/m for the server plus profit, you can't compete on price with someone who colocates.
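
    To put rough numbers on that (a minimal sketch; the ~€200/m rental figure is from the post above, all other figures are assumptions):

```python
# Owned/colocated hardware vs. a rented box, assuming the same slice revenue.
# Only the ~200 EUR/month rental figure comes from the thread; the rest
# (hardware price, colo/power cost, revenue) are assumed for illustration.
revenue = 120.0          # USD/month of slice revenue per node, assumed
hardware_cost = 600.0    # one-time purchase of the colocated server, assumed
colo_monthly = 40.0      # power + rack space per node, assumed
rental_monthly = 210.0   # roughly 200 EUR/month converted to USD, assumed

colo_margin = revenue - colo_monthly
months_to_break_even = hardware_cost / colo_margin
print(f"Colo: ${colo_margin:.0f}/month margin, hardware paid off after "
      f"{months_to_break_even:.1f} months, mostly profit afterwards")
print(f"Rented: ${revenue - rental_monthly:.0f}/month margin (you can't compete)")
```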

  • SpartanHost Member, Host Rep

    @AlexBarakov said:

    It shouldn't be that hard to imagine how it's done. E3s come with 4 cores, 8 threads. As far as I remember, Fran said they offer a dedicated thread per customer (more or fewer threads on different plans). That makes a total of 8 users on the 1 dedicated core/thread plan, equaling $120 per month per E3 machine (not counting fees, etc.), which is doable when done correctly.

    I guess what people might think is that it isn't majorly profitable if you're only getting $120/m (excluding IPs and transaction fees) for a 32GB E3 + 4 x SSDs and a RAID card, but in high volume it probably works out pretty well.

  • @SpartanHost said:

    I guess what people might think is that it isn't majorly profitable if you're only getting $120/m (excluding IPs and transaction fees) for a 32GB E3 + 4 x SSDs and a RAID card, but in high volume it probably works out pretty well.

    I think Fran has said previously that it's 2 SSDs on the nodes and software RAID, if I recall correctly.

  • teamacc Member
    edited December 2016

    @SpartanHost said:

    I guess what people might think is that it isn't majorly profitable if you're only getting $120/m (excluding IPs and transaction fees) for a 32GB E3 + 4 x SSDs and a RAID card, but in high volume it probably works out pretty well.

    IIRC Fran said they're running RAID 1 on 2 SSDs, so that cuts down the cost a bit already. Could be soft RAID too, I suppose.

  • jar Patron Provider, Top Host, Veteran
    edited December 2016

    I don't think it's profitable enough that everyone should put out a "me too" offering, personally. Profitable it surely is, but profit per 1U of rack space as compared to other products is a metric I would challenge.

    I don't see this being a growing trend, but I see it justifying its existence for Fran.

    Thanked by: Francisco, doghouch
  • mswillsen said: do you think other providers like DO, RamNode, OVH, etc. are going to follow suit and offer packages like this in the future?

    That's basically what the OVH public cloud is, though it costs a lot more. Delimiter (who?) also has dedicated core VPS, though they are on slower machines, use HDD instead of SSD, etc.

    IMHO the smaller slices and plans of this sort are excellent deals, but the larger ones start getting into dedicated server territory pricewise, so you have to figure out what fills your requirements best.

    Thanked by: mswillsen
  • Amitz Member
    edited December 2016

    Wasn't Slicehost (now Rackspace) the originator of the word 'slice' for dedicated resources? I could swear it was! I had some 'slices' with them, which is why I remember...

    Thanked by: Clouvider, deadbeef
  • Francisco Top Host, Host Rep, Veteran
    edited December 2016

    It is.

    It works because we have a large warchest for these kinds of things. Break-even is still over 6 months, which is more than I'm normally comfortable with. With that being said though, we don't want to be dealing with OVZ anymore. I've been trying to get a plan together which would just start asking people to migrate, and work to get those nodes out of there.

    Running hardware RAID on SSDs is dumb unless you really need to abuse the write cache to make dd benchmarks look even better. Basic RAID 0/1/10 on mdadm is extremely low CPU usage, around a couple percent, even less on a high-MHz CPU.

    Personally, on our OVZ nodes we bought Adaptec 7805s and dropped 8 SSDs on each. We ended up having to disable the write cache on them just because during IO-heavy work (benchmarks at the minimum) we were at around 25% of the performance we should be getting.

    It means we'd have a real issue with the 128MB plans, but I'd likely do a one-off upgrade to those plans (bump them to 256MB, SSD base storage, etc.) and just drop them into a KVM. I haven't decided on this one though, so please don't make a ticket asking for this to be put in place :P

    From a development perspective, doing work on KVM is just a lot easier than dealing with OpenVZ's quirks. When we launched our anycast product, the OpenVZ side required probably 1000 lines of code for all of its quirks and everything else to make it work, whereas KVM took like 5-10 lines since it was mostly just reorganizing some of the IP locks.

    OpenVZ holds me back from a lot of things and overly complicates things.

    Francisco

  • I was wondering how Virmach does it:
    $2/mo with dedicated core: "full virtualization" 2+ GHz & 256 MB:
    https://virmach.com/cheap-kvm-linux-vps-windows-vps/
    (And that's before any coupons.)

  • Amitz Member
    edited January 2017

    go99 said: I was wondering how Virmach does it:

  • I've thought about that these days (also based on BuyVM's offers). In the end I think it's an unattractive spot. Reason: either one needs "some server capacity", say to run a couple of web sites, in which case any non-lousy VPS will do fine, or one really needs serious power; then a dedi is the right answer.

    Moreover, threads are not the attractive unit; that would be cores. Having x of a CPU's 8 (or however many) threads doesn't mean much. It does not mean one has a fixed share of the CPU. It can mean that, but usually it doesn't, and I dare say that getting a normal KVM or Xen VPS from a decent provider who doesn't brutally oversell and who weeds out abusers will be quite as good as a dedicated "slice".

    That said, I'm expecting BuyVM to be successful with their dedicated slice thing. But the reasons are in the marketing field and, uhm, the fact that > 95% of customers are quite clueless and "have your own fixed dedicated share" sounds very good.

    Thanked by: apidevlab
  • bsdguy said: either one needs "some server capacity", say to run a couple of web sites, in which case any non-lousy VPS will do fine, or one really needs serious power;

    There's a situation where you run a few web sites but occasionally have to do something intense like transcode videos for the site. Or in my case, I once ran a search engine: fairly low CPU load, except occasionally the index had to be rebuilt from scratch, which took about 12 hours of pure CPU. Being able to do that without worrying about flak from the host is nice.

  • @Francisco said:
    From a development perspective, doing work on KVM is just a lot easier than dealing with OpenVZ's quirks. [...] OpenVZ holds me back from a lot of things and overly complicates things.

    It sounds like you're going to full virtualization for everything; have you given any thought to replacing OVZ with LXC (or similar)?

  • Francisco Top Host, Host Rep, Veteran

    @rpcope said:
    It sounds like you're going to full virtualization for everything; have you given any thought to replacing OVZ with LXC (or similar)?

    Sure, except LXC is very lacking. There's no sub-quota support, there's no standard way of handling disk space, and there are plenty of concerns about breakouts and things like that. The thing with LXC is it was never designed to be multi-tenant, so while security is a big thing, keeping users within their own areas is priority #1 in OVZ.

    We get complaints about IPsec all the time, we get complaints because people want to do funny things with Docker that OVZ doesn't support, we have people wanting encrypted filesystems, and we have people running large-scale reverse proxies that hit conntrack issues on OVZ with no easy way around it short of bumping limits and adjusting timeouts. While OpenVZ is perfectly fine for many users, for us, we're just tired of it. OVZ accounts for 1/3rd+ of our tickets, with BuyShared making up the majority of the rest. Our KVM tickets consist almost entirely of people just needing help with OS installs.

    I just dislike the whole shared kernel idea. We've had our issues over the years where some unpatched OVZ bug crashes the whole node. We had an issue a long time ago back in the 2.6.18 days where OpenVPN could crash nodes if it was busy enough due to it handling packets incorrectly. It took a lot of work with one of their developers and a handful of splices at the time before we had it addressed.

    Francisco

  • WSS Member

    Francisco said: I just dislike the whole shared kernel idea. We've had our issues over the years where some unpatched OVZ bug crashes the whole node.

    I am so, so glad I didn't have to deal with this when I handled DC. The biggest goofballs I had were those who used killall on non-Linux hosts. :D

  • William Member
    edited January 2017

    SpartanHost said: I guess what people might think is that it isn't majorly profitable if you're only getting $120/m (excluding IPs and transaction fees) for a 32GB E3 + 4 x SSDs and a RAID card, but in high volume it probably works out pretty well.

    Hm? On a v1 or v2 and with CA power pricing, I make a profit at $75+ even on refurb hardware like this; the 8-16 IPs aren't even relevant then...

    Seriously, what do you people think HW costs these days?

    I buy HP DL360s (G6, dual ?56xx) and Quanta 2011 systems (dual E5 v1/v2) left and right on eBay for barely $250 each - and these BEAT a 1270 in the default config with JUST a bit more power usage (X56xx)...

    Dual E5 with CPUs at $100, plus some costs for RAM and mounts....

    http://www.natex.us/Quanta-QSSC-2ML-Dual-LGA2011-Sockets-16-DIMM-1u-R-p/spd-6.htm

    Francisco said: Running hardware RAID on SSDs is dumb unless you really need to abuse the write cache to make dd benchmarks look even better.

    No, running SATA crap in HW RAID is useless; I see you've had no NVMe hardware to test yet... PCIe RAID via backplane (2.5" drives/SFF-8639) is WAY different from anything SATA-based you have EVER seen, I'm talking about 20GB/s+ R/W (on a bad day, peak 25GB/s) in a R10 config of 8 (!) drives...

  • bsdguy Member
    edited January 2017

    @Francisco

    I agree. OpenVZ is OK in a closed and well-known (read: corporate) environment. For multi-tenant VPS hosting, particularly in a hostile environment, it's a security nightmare and a trouble generator. The Linux kernel is a monolith, so the trouble goes through all layers (e.g. drivers), any VPS can crash your node, etc.
    I'm cancelling my one and only OVZ VPS these days (not renewing it). And that was my 2nd attempt; not that I was expecting better, but I wanted to really try it.

    @William

    Pardon me, but what you say doesn't make much sense. RAID addresses a particular problem and makes sense (or not) quite independently of the drive technology. RAID 0 is in a way abused nowadays by speed freaks; it was originally meant to get to the data faster on the slow drives of the time. RAID 1 and RAID 5 were and are for resilience and availability. Caches were soon added to make the products more attractive by offering an extra benefit. Often they even make sense.
    Francisco put it somewhat harshly, but he isn't far off. Caching SSDs rarely makes sense.

    As for SATA vs. SAS, that's to quite some degree marketing. Moreover, RAID (preferably 6 with cheap drives) can nicely balance out the lower drive life expectancy. Just have a look at the giants; there's a reason they use the cheapest drives available (which might be quite different for the single SME server).

    Finally, let's keep the following in mind: time to paint is mostly defined by packet travel time. Even a lousy PHP web page with a lousy DB on lousy disks will be more than 10 times faster than the packet travel time. In the end you are fighting to cut 1 or 2 ms out of 50 ms.

  • Francisco Top Host, Host Rep, Veteran
    edited January 2017

    People want single-thread performance, so the E5s aren't a great fit. The 3.5GHz E5s are going to eat a lot more power than the E3s, and cost more too.

    As for the SSDs, great, go back 3 years and get me a RAID card and a bunch of NVMe drives. Again, my tests were with the 7805s and such, as I said, and they just don't perform great. Big waste of cash in our books.

    I could've spent a lot of cash for the e-peen factor, but it doesn't help the product line. I think what we're offering is priced great, performs great, and has tons of features.

    Edit - somehow my autocorrect put Jew instead of eat, sorry.

    Francisco

    Thanked by: GCat, [Deleted User]
  • William Member
    edited January 2017

    bsdguy said: As for SATA vs. SAS, that's to quite some degree marketing. Moreover, RAID (preferably 6 with cheap drives) can nicely balance out the lower drive life expectancy. Just have a look at the giants; there's a reason they use the cheapest drives available (which might be quite different for the single SME server).

    SATA and SAS are the same at the core anyway; NVMe is not based on either. The main thing is that SATA is the worst thing you can imagine for SSDs, and SAS comes not far after, even in HDSAS/12Gb (which protocol-wise doesn't seem to have many improvements), so much so that Intel and others went the cabled NVMe (so non-M.2) route to replace it...

    The "giants" you assume are Backblaze and similar? Might seem so to you, but external AWS, Azure and various gov agencies buy FAR more drives than the one-click hosters and backup providers together... and these buy the enterprise stuff, if not directly from HP/SM/Quanta... which do not sell you non-enterprise HW in most cases.

    The point is mainly that HW RAID on SSDs still has a use and will continue to have one post-SATA/SAS as well, if not more than now (and it's not being replaced by Ceph and similar either). We'll also see improvements in SSD caching (hopefully) and parity performance. Then there are liability concerns in finance and similar sectors that won't rely on open-source/kernel-based RAID, and so on...

    Francisco said: People want single-thread performance, so the E5s aren't a great fit.

    Customers take what's available if the price point is right; if you replace your own offer, the choice is... not really there?

    The 2690 v1s are not too far off at 2.9GHz; two of them bench at 21k, while an E3-1270 is somewhere around 9-10k depending on generation, with a 32GB RAM limit. TDP of a 2690 v1 is 130W, which means 260W total, vs. 80W for an E3 and thus 160W total for two.

    That adds 100W/1A in power cost, sure, but it also saves 1U and a second server, and avoids the expensive UDIMMs.

    The chassis, if e.g. buying used Quanta hardware for 2011 and SM for E3, costs about the same - the 2690s cost about $80-100 each, a 1270 v3 is $120-250 (excluding a weird seller in China with $79 ones), so not much diff here either.

    No RAID, and the Quanta also adds 10G connectivity (but only 1 port), which e.g. a DL320/base SM with an E3 lacks. Double the RAM (so 64GB instead of 32GB) to sell the same amount of CPU power.

    From what I mostly see, you get the 1A added cost (somewhere around $20 in the US?) but reduce setup complexity (1U less, which is probably also around $20?) while maintaining an upgrade path for RAM (32GB.... 32GB until Skylake...) and gaining more PCIe lanes for eventual additions (e.g. NVMe caching SSDs)...

    Now yeah, if you buy new hardware (which currently seems rather pointless; just buy two used ones), E3 makes more sense than E5, solely because no seller will likely sell you a past-generation setup.
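
    For what it's worth, that comparison as a rough sketch (bench points, TDP and hardware prices roughly follow the figures quoted above; the power rate, per-U colo cost and amortization period are assumptions):

```python
# Rough dual E5-2690 v1 node vs. two E3-1270 nodes, per month.
# Bench points, TDP and hardware prices follow the figures quoted above;
# the power rate, per-U colo price and amortization period are assumptions.

def monthly_cost(hardware_usd, watts, rack_units,
                 usd_per_watt_month=0.17, usd_per_u_month=20.0,
                 amortize_months=24):
    """Amortized hardware + power + rack space per month (assumed rates)."""
    return (hardware_usd / amortize_months
            + watts * usd_per_watt_month
            + rack_units * usd_per_u_month)

# Dual E5-2690 v1: ~$250 chassis + 2 x ~$100 CPUs, 260W TDP, 1U, ~21k bench
dual_e5 = monthly_cost(hardware_usd=450.0, watts=260, rack_units=1)
# Two E3-1270 nodes: 2 x (~$250 chassis + ~$150 CPU), 160W TDP, 2U, ~2 x 10k bench
two_e3 = monthly_cost(hardware_usd=800.0, watts=160, rack_units=2)

print(f"Dual E5 node: ${dual_e5:.0f}/month for ~21k bench points")
print(f"Two E3 nodes: ${two_e3:.0f}/month for ~19-20k bench points")
```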

    Francisco said: Edit - somehow my autocorrect put Jew instead of eat, sorry.

    ahhh that actually would have been more fun, and as "chew" not even incorrect :p

  • Off-topic: dedicated KVM on the east coast?

    Thanked by: Francisco, GCat, Mathias
  • raindog308 Administrator, Veteran

    Francisco said: It works because we have a large warchest

    [image]

  • Francisco Top Host, Host Rep, Veteran
    edited January 2017

    @smicroz said:
    Off-topic: dedicated KVM on the east coast?

    Finally getting somewhere with it. The racks got delayed damn well near a month and only just got provisioned a week or so ago. I'm trying to light a fire under their asses so I can get my badges and fly on over.

    Francisco

    Thanked by: eva2000, GCat
  • @William said:

    Dual E5 with CPUs at $100, plus some costs for RAM and mounts....

    http://www.natex.us/Quanta-QSSC-2ML-Dual-LGA2011-Sockets-16-DIMM-1u-R-p/spd-6.htm

    I don't see where you get CPUs and the chassis for $100 in that link. The cheapest option was adding $76, making it closer to $200. Do they go on sale for less than what they are now?

  • raindog308 Administrator, Veteran
    edited January 2017

    Naw, slices are a passing trend, like VPSes.

    Now droplets on the other hand...those are here to stay.

    EDIT: sorry, posted from wrong account.

  • @raindog308, your $7 will land in your PayPal account tonight, when I finish my Red Bull.

    -- not Jarland.

    Thanked by: raindog308