Leaseweb VPS SSD throttling @ 1000mbps vs unthrottled @ 100mbps

seizure Member
edited November 2017 in Help

just wondering if anyone can give me some insight:

i understand Leaseweb VPS throttles their SSD I/O (about 1/4 performance compared to most other providers: 80 vs 300)...

how much does this (if any) affect the use of their 1,000mbps port?

i read somewhere that making full use of LW's 1,000 mbps connection is not possible when the SSD is throttled...

if so, am i better off with another provider that does not throttle SSD speed but only comes with a 100mbps connection? (i'm comparing it to OVH's SSD VPS @ 100mbps port speed, and i will not be using it for anything disk-intensive)

Comments

  • Do you really need to use the whole Gbit port at 100%, writing or reading everything to disk?

    In my opinion, for webservers IOPS matter more than raw 'speed'.

    Anyway:

    100Mbps = 12.5MB/s, so your 80MB/s limit is still far off. (And RAM acts as a cache, etc.)
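    The arithmetic is just dividing the line rate by 8 (ignoring protocol overhead); a quick sketch:

    ```shell
    # Line rate in Mbps divided by 8 gives the ceiling in MB/s
    # (ignoring protocol overhead).
    for mbps in 100 500 1000; do
      awk -v m="$mbps" 'BEGIN { printf "%d Mbps = %.1f MB/s\n", m, m / 8 }'
    done
    ```

    So even a full gigabit line tops out at 125 MB/s, only modestly above the ~80 MB/s disk figure being discussed.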

  • seizure Member
    edited November 2017

    i'll be using the VPS as "disposable" mobile app gateways to our database/backend (multiple dedicated servers each with 1000mbps port speed).

    disposable = when the traffic is used up, they will go offline and i will provision more and connect the new ones online (load balancer).

    i'm still new to this, but have been told 100mbps is fast enough - OVH has lower traffic but good discounts available over the next few days.

    i just wonder if the LW throttling will be a limitation in my case (LW @ 1,000 mbps, not 100 mbps).

  • LW doesn't have VPS with 1000mbps; they are hard capped at 500mbps

  • seizure said: i just wonder if the LW throttling will be a limitation in my case (LW @ 1,000 mbps, not 100 mbps).

    I don't think it would be but knowing not much of your setup it's hard to tell. Try, bench and see for yourself is what I would recommend.
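    For benching, a fio job file along these lines (parameters are illustrative, not anything LW-specific) gives comparable 4k random read/write numbers across providers:

    ```
    # illustrative fio job: 4k random read, then random write, direct I/O, 60s each
    [global]
    ioengine=libaio
    direct=1
    bs=4k
    size=1g
    runtime=60
    time_based
    group_reporting

    [randread]
    rw=randread

    [randwrite]
    stonewall
    rw=randwrite
    ```

    Save it as e.g. bench.fio and run `fio bench.fio` on each box.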

  • seizure Member
    edited November 2017

    : o

    you're right.

    good to know! thanks!

  • You want as much as possible loaded into RAM anyway. With that said, if you're not averaging 80-120MB/s when testing I/O, I would consider alternative hosts, as that's a clear sign of massive overcrowding on a node. Bear in mind it'll always be hammered right after specials like Black Friday, when everyone is testing.

  • With 6TB, you might as well be on 100Mbit; you can still burn through it in under a week.

  • nik Member, Host Rep

    They don't even offer pure SSDs; it's SAS with SSD cache.

  • @nik Where were you yesterday?! Where's our fucking panel!?

  • ru_tld Member, Host Rep

    @nik said:
    They don't even offer pure SSDs; it's SAS with SSD cache.

    You're wrong. They are using NetApp all-flash storage arrays for their cloud platform.

    @seizure, you need to care about IOPS, not single-threaded read/write performance.

  • nik Member, Host Rep

    @WSS said:
    @nik Where were you yesterday?! Where's our fucking panel!?

    Overslept, maybe on purpose ;-) Panel is ready; waiting for the last servers and then some more testing. ETA is this year for sure! FINALLY.

    @ru_tld said:

    @nik said:
    They don't even offer pure SSDs; it's SAS with SSD cache.

    You're wrong. They are using NetApp all-flash storage arrays for their cloud platform.

    @seizure, you need to care about IOPS, not single-threaded read/write performance.

    On their website it's a SAS array with SSD cache, and that's also something their support confirmed. I'm not sure what counts as the "cloud platform", but this thread is about their VPS SSD line.

  • chx Member
    edited November 2017

    @sureiam said:
    You want as much as possible everything loaded to ram anyway.

    This guy gets it. The church of Jim "RAM is the new disk" Gray, whose prophet is Gary Bernhardt:

    Consulting service: you bring your big data problems to me, I say "your data set fits in RAM", you pay me $10,000 for saving you $500,000.

    RAM, RAM, RAM! I have been a devout member of this church for a long time now -- we started using 144GB servers in 2010. Recently, I was not happy when Heymman ran out of their 1TB RAM servers; those puppies were cheap as heck. Does anyone know where to get 0.5-1TB RAM servers on the cheap? (The Hetzner EPYC is 433 EUR for a measly half terabyte; Heymman only asked 500 USD for a full terabyte.)

  • seizure Member
    edited November 2017

    i don't get proportionately more traffic with more ram... so i'm inclined to go with less ram + more VPS.

    OVH 2GB RAM: 2TB traffic
    OVH 4GB RAM: 3TB traffic
    OVH 8GB RAM: 4TB traffic

    LW 2GB RAM: 6TB traffic
    LW 4GB RAM: 8TB traffic
    LW 8GB RAM: 10TB traffic

    i'm using these VPS as mobile app gateways (traffic)... but i will take advice from @chx and @sureiam and go for something with more RAM where possible. but getting a server with 500GB or 1TB RAM is not possible at this point (for us).

    @WSS i will burn through the traffic, but that's not the point (as explained above, i will take them offline and just connect a new VPS) - i would prefer something that gives multiple mobile devices the best performance.

    thus my intent was to get an answer to: is the "bottleneck" of throttled disk speed + 500mbps vs no throttling + 100mbps an issue to consider or not?

    @sureiam / @ru_tld FWIW, here are the results of fio:

    LW 4gb ram SSD VPS (Asia)

    read : io=3072.0MB, bw=9283.9KB/s, iops=2320 , runt=338841msec

    write: io=1024.0MB, bw=3094.7KB/s, iops=773 , runt=338841msec

    cpu : usr=0.56%, sys=2.69%, ctx=358261, majf=0, minf=4

    Run status group 0 (all jobs):

    READ: io=3072.0MB, aggrb=9283KB/s, minb=9283KB/s, maxb=9283KB/s, mint=338841msec, maxt=338841msec

    WRITE: io=1024.0MB, aggrb=3094KB/s, minb=3094KB/s, maxb=3094KB/s, mint=338841msec, maxt=338841msec

    Disk stats (read/write):

    vda: ios=784399/262055, merge=0/37, ticks=16218708/5759008, in_queue=21990692, util=100.00%

    OVH 2gb ram SSD VPS (Asia)

    read : io=3071.5MB, bw=14637KB/s, iops=3659 , runt=214882msec

    write: io=1024.6MB, bw=4882.6KB/s, iops=1220 , runt=214882msec

    cpu : usr=1.70%, sys=8.35%, ctx=501324, majf=0, minf=4

    Run status group 0 (all jobs):

    READ: io=3071.5MB, aggrb=14636KB/s, minb=14636KB/s, maxb=14636KB/s, mint=214882msec, maxt=214882msec

    WRITE: io=1024.6MB, aggrb=4882KB/s, minb=4882KB/s, maxb=4882KB/s, mint=214882msec, maxt=214882msec

    Disk stats (read/write):

    sda: ios=786057/262308, merge=246/154, ticks=12594024/1689760, in_queue=14283840, util=100.00%
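    for a quick side-by-side, the iops figures can be pulled out of fio summaries like the above with a bit of awk (sample lines inlined from the LW run):

    ```shell
    # sample fio summary lines (from the LW run) inlined for illustration
    fio_out='read : io=3072.0MB, bw=9283.9KB/s, iops=2320 , runt=338841msec
    write: io=1024.0MB, bw=3094.7KB/s, iops=773 , runt=338841msec'

    # split each line on "iops=" and take the first token after it
    echo "$fio_out" | awk -F'iops=' '/iops=/ {
      split($2, a, " ")
      label = ($1 ~ /read/) ? "read" : "write"
      print label " iops: " a[1]
    }'
    ```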

  • @ru_tld said

    Aren't you running any promotions for this consumer day?

  • sin Member
    edited November 2017

    @user54321 said:
    LW doesn't have VPS with 1000mbps; they are hard capped at 500mbps

    I have no problem hitting 1Gbps on my Leaseweb VPSes. If you have a VPS that is only getting 500mbps, hit up their support and they'll fix it. I had one that was only getting 230mbps, and they fixed it for me the next morning.

    I mean, unless they changed something on newly deployed ones in the past month - their website still shows 1gbps full duplex.

  • @nik said:

    On their website it's an SAS array with SSD cache, that's also something the support confirmed. I am not sure what counts as "cloud platform" but the thread is about their VPS SSD line.

    It's NetApp storage for everything :-) Even VPSes - what the VPS page says is wrong anyway.

    You should get more up-to-date information (that isn't years old) by contacting their support and asking to be passed to the Cloud department that actually knows how the platform works - what L1 support tells you (if this is recent) is just wrong.

    The "throttle" is most likely put in place to prevent I/O abusers.

    The good thing is, the speeds people get are good enough for most - and for those they're not good enough for, maybe another provider simply fits their needs better.

  • caracal Member
    edited November 2017

    I run an "S" sized VPS in LW Singapore and hit 1gbps whenever needed, even with their throttled I/O. While the I/O is throttled, I get the same performance consistently 24/7, plus supposedly high availability on their platform. The predictability and reliability are their strength here, rather than top-notch performance.

    Zerpy said: what L1 support tells you (if this is recent) is just wrong.

    FWIW, I have never been able to get straight answers about the infrastructure based on my ticketing at their Singapore's support.

  • @caracal said:
    I run an "S" sized VPS in LW Singapore and hit 1gbps whenever needed, even with their throttled I/O. While the I/O is throttled, I get the same performance consistently 24/7, plus supposedly high availability on their platform. The predictability and reliability are their strength here, rather than top-notch performance.

    It's consistent in terms of I/O because it's shared storage and because they throttle I/O - that greatly limits the risk of causing any I/O issues.

    Zerpy said: what L1 support tells you (if this is recent) is just wrong.

    FWIW, I have never been able to get straight answers about the infrastructure based on my ticketing at their Singapore's support.

    Might be - I have accounts in almost all entities, and I ask for the right teams when I need things fixed - meaning if I create a ticket about VPS or Cloud in Singapore, it will end up with the team responsible for the platform, because I explicitly ask for it - and they happen to be in the Amsterdam area.

  • caracal said: The predictability and reliability are their strength here, rather than top-notch performance.

    That's the point. Once you know your needs, @seizure, you can decide wisely where to host.

  • ru_tld Member, Host Rep

    @Jones said:
    @ru_tld said

    Aren't you running any promotions for this consumer day?

    You can get a 10% discount on VPS services with the promo code LowEndTalk

  • sureiam Member
    edited November 2017

    It's one thing to get a server with more RAM. It's another to optimize everything to use more RAM. MySQL and php-fpm, for example, must have their settings changed to use more RAM; otherwise their defaults go easy on it. MySQL in particular needs caching ramped up a lot, plus set to save its cache to disk and reload it after a service restart, and php-fpm needs static workers assigned.

    You don't need to put everything in RAM. Obviously the more the better, but tuning MySQL and PHP (if your use case uses PHP) will net you massive performance gains.
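    As a sketch of the kind of settings meant here (values are illustrative and must be sized to the actual machine, not copied verbatim):

    ```
    ; my.cnf fragment (illustrative): keep the working set in RAM and
    ; persist/restore the InnoDB buffer pool across restarts
    [mysqld]
    innodb_buffer_pool_size = 2G
    innodb_buffer_pool_dump_at_shutdown = ON
    innodb_buffer_pool_load_at_startup = ON

    ; php-fpm pool fragment (illustrative, separate file): static workers;
    ; size pm.max_children to available RAM divided by per-worker usage
    pm = static
    pm.max_children = 20
    ```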

  • @sin said:

    @user54321 said:
    LW doesn't have VPS with 1000mbps; they are hard capped at 500mbps

    I have no problem hitting 1Gbps on my Leaseweb VPSes. If you have a VPS that is only getting 500mbps, hit up their support and they'll fix it. I had one that was only getting 230mbps, and they fixed it for me the next morning.

    Incoming I get 950 Mbit/s, so no complaints there; outgoing is stable at 500 Mbit/s (tested with iperf to multiple servers). But I don't need more than 1 Mbit/s for what I do on that VPS, so I really don't care. If it's only my VPS, that's nice for everybody else.
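    A test along those lines can be scripted with iperf3 against a box you control running `iperf3 -s` (the hostname below is a placeholder):

    ```shell
    # placeholder server; replace with your own machine running `iperf3 -s`
    SERVER=iperf.example.net

    if command -v iperf3 >/dev/null 2>&1; then
      # 4 parallel streams for 30 seconds: upload first, then download (-R reverses)
      iperf3 -c "$SERVER" -P 4 -t 30
      iperf3 -c "$SERVER" -P 4 -t 30 -R
    else
      echo "iperf3 not installed; skipping"
    fi
    ```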

  • @sureiam said:
    MySQL in particular needs caching ramped up a lot, plus set to save its cache to disk and reload it after a service restart, and php-fpm needs static workers assigned.

    Really depends on which caching you're talking about - a very large query cache can actually result in worse performance because of how the query cache works. I've seen people use so large a query cache that when the system finally flushed queries from it, the whole system would lock up until the cache was cleared - clearing the cache takes a lock on the query cache itself, preventing inserts into it. If a query tries to insert while there's a lock, it will happily sit and wait - and that can easily queue up.

    So no - actually it's not about making MySQL use as much memory as possible; you should make it use the optimal amount of memory for your setup and your data.

    Analyze, test, analyze, test, repeat.
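    For what it's worth, the common advice on recent MySQL is to disable the query cache entirely (it was removed altogether in MySQL 8.0) and size the InnoDB buffer pool instead - illustrative values, to be tuned per workload:

    ```
    [mysqld]
    ; query cache off: avoids the global-lock contention described above
    query_cache_type = 0
    query_cache_size = 0

    ; size the buffer pool to the working set, not to all available RAM
    innodb_buffer_pool_size = 1G
    ```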

  • @Zerpy said:

    @sureiam said:
    MySQL in particular needs caching ramped up a lot, plus set to save its cache to disk and reload it after a service restart, and php-fpm needs static workers assigned.

    Really depends on which caching you're talking about - a very large query cache can actually result in worse performance because of how the query cache works. I've seen people use so large a query cache that when the system finally flushed queries from it, the whole system would lock up until the cache was cleared - clearing the cache takes a lock on the query cache itself, preventing inserts into it. If a query tries to insert while there's a lock, it will happily sit and wait - and that can easily queue up.

    So no - actually it's not about making MySQL use as much memory as possible; you should make it use the optimal amount of memory for your setup and your data.

    Analyze, test, analyze, test, repeat.

    Good point. Testing and analyzing are critical to any setup. This is no different.
