Fixing Network Speed in KVM

ironhide Member
edited April 2014 in Tutorials

If you're using a KVM VPS and not getting the network speed you should (e.g. 100 Mbps when you're supposed to get 1 Gbps), you may try the following.

Open /etc/sysctl.conf on your server and append the following lines:

net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_rmem=4096 87380 16777216                                          
net.ipv4.tcp_wmem=4096 65536 16777216

Now run the following command in a terminal:

sysctl -p
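If you apply this to more than one server, the append step can be scripted so re-running it never duplicates lines. A minimal sketch (the helper name and example file path are mine, not from the thread; point SYSCTL_FILE at /etc/sysctl.conf and run as root for real use):

```shell
#!/bin/sh
# Append each tuning key only if it is not already present.
# SYSCTL_FILE defaults to a local example file for safe testing;
# set SYSCTL_FILE=/etc/sysctl.conf (as root) to apply for real.
SYSCTL_FILE="${SYSCTL_FILE:-./sysctl.conf.example}"
touch "$SYSCTL_FILE"

add_sysctl() {
    key="${1%%=*}"
    # Skip keys that are already configured in the file.
    grep -q "^${key}[ =]" "$SYSCTL_FILE" || printf '%s\n' "$1" >> "$SYSCTL_FILE"
}

add_sysctl 'net.core.rmem_max=16777216'
add_sysctl 'net.core.wmem_max=16777216'
add_sysctl 'net.ipv4.tcp_rmem=4096 87380 16777216'
add_sysctl 'net.ipv4.tcp_wmem=4096 65536 16777216'
```

Afterwards, `sysctl -p "$SYSCTL_FILE"` loads the values (plain `sysctl -p` reads /etc/sysctl.conf).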

Now check your network speed again. Here is how it went for me.

Stock settings:

Download speed from CacheFly: 8.29MB/s 
Download speed from Coloat, Atlanta GA: 2.31MB/s 
Download speed from Softlayer, Dallas, TX: 3.45MB/s 

After the sysctl.conf edit:

Download speed from CacheFly: 51.2MB/s 
Download speed from Coloat, Atlanta GA: 19.8MB/s 
Download speed from Softlayer, Dallas, TX: 53.0MB/s 
Download speed from Linode, Tokyo, JP: 10.6MB/s 
Download speed from i3d.net, Rotterdam, NL: 5.12MB/s
Download speed from Leaseweb, Haarlem, NL: 14.2MB/s 
Download speed from Softlayer, Singapore: 8.87MB/s 
Download speed from Softlayer, Seattle, WA: 36.7MB/s 
Download speed from Softlayer, San Jose, CA: 33.6MB/s 
Download speed from Softlayer, Washington, DC: 76.5MB/s

Comments

  • Being a support guy, this is the one thing I have to continuously tell people to do. Thanks @ironhide :)


  • Add a note/tip for 100/100 connections as well. Picking the values here is tricky, since this is a kernel tweak for networking.

  • Does this change the amount of RAM usable for the network?


  • perennate Member, Provider
    edited April 2014

    trexos said: Does this change the ram amount which is useable for the network?

    According to http://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php it adjusts the maximum and default send/receive buffer sizes for TCP connections. This means the TCP algorithm is allowed to use a larger window, and downloads will also start off faster due to the higher default. It uses a bit more memory (not noticeable to the user, though; TCP doesn't need much memory).
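    The window/RTT bound makes this concrete: a transfer can never run faster than window size divided by round-trip time. A back-of-the-envelope sketch (numbers are illustrative, not measurements from this thread):

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Window-limited TCP throughput bound, in megabits per second."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

# A 64 KiB window over a 100 ms path caps out far below gigabit:
small = max_throughput_mbps(64 * 1024, 100)         # ~5.2 Mbps
# The 16 MiB maximum from the sysctl snippet lifts that ceiling:
large = max_throughput_mbps(16 * 1024 * 1024, 100)  # ~1342 Mbps
print(f"{small:.1f} Mbps vs {large:.0f} Mbps")
```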

  • Any similar tips for Windows?

  • I think it's taken from https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=56
    For many years I only saw these tips at RamNode.

  • GaNi Member
    edited April 2014

    @neqste said:
    I think it's taken from https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=56
    For many years I only saw these tips at RamNode.

    meh: https://www.google.com/search?q=net.core.rmem_max=16777216, some results date back to 2007

    @rostin said:
    Any same tips for Windows OS ?

    With the right drivers installed, you don't need any other tuning.

  • ironhide Member
    edited April 2014

    neqste said: I think it's taken from https://clientarea.ramnode.com/knowledgebase.php?action=displayarticle&id=56 For many years I only saw these tips at RamNode.

    That's silly. It's not "super secret" or proprietary stuff that's available only on one specific site. And just because you didn't see it anywhere else doesn't mean it isn't available elsewhere. It's a common solution to problems like this.

    Even just searching for the line "net.ipv4.tcp_rmem=4096 87380 16777216" brings up 138,000 results on Google. Time to start using Google, eh?

  • george Member
    edited May 2014

    Does this trick also apply to Xen-based VPSes?

  • trexos Member

    @george said:
    Does this trick also apply to Xen-based VPSes?

    Yes, it works well under a Xen VPS.


  • All OSes benefit from it; the ones tuned for lower memory benefit the most, i.e. Debian-based ones. Virtualization doesn't matter here, obviously (I mean full virtualization, not containers using the host kernel).
    And this is in our FAQ too. We just hid it well :)


  • Thanks for this! Worked perfectly on BlueVM KVM2!

  • trexos Member

    Works just fine with VMware :)


  • perennate Member, Provider

    trexos said: Works just fine with VMware :)

    Works on an OS without virtualization too.

  • sc754 Member

    Before making this change on my kimsufi atom


    # ./bench.sh
    CPU model : Intel(R) Atom(TM) CPU N2800 @ 1.86GHz
    Number of cores : 4
    CPU frequency : 1862.000 MHz
    Total amount of ram : 1978 MB
    Total amount of swap : 2046 MB
    System uptime : 1 day, 5 min,
    Download speed from CacheFly: 11.3MB/s
    Download speed from Coloat, Atlanta GA: 7.00MB/s
    Download speed from Softlayer, Dallas, TX: 3.70MB/s
    Download speed from Linode, Tokyo, JP: 3.32MB/s
    Download speed from i3d.net, Rotterdam, NL: 7.77MB/s
    Download speed from Leaseweb, Haarlem, NL: 6.33MB/s
    Download speed from Softlayer, Seattle, WA: 3.85MB/s
    Download speed from Softlayer, San Jose, CA: 5.53MB/s
    Download speed from Softlayer, Washington, DC: 6.61MB/s
    I/O speed : 90.4 MB/s

    After making the change


    # ./bench.sh
    CPU model : Intel(R) Atom(TM) CPU N2800 @ 1.86GHz
    Number of cores : 4
    CPU frequency : 1862.000 MHz
    Total amount of ram : 1978 MB
    Total amount of swap : 2046 MB
    System uptime : 1 day, 8 min,
    Download speed from CacheFly: 11.2MB/s
    Download speed from Coloat, Atlanta GA: 4.23MB/s
    Download speed from Softlayer, Dallas, TX: 8.15MB/s
    Download speed from Linode, Tokyo, JP: 3.00MB/s
    Download speed from i3d.net, Rotterdam, NL: 10.9MB/s
    Download speed from Leaseweb, Haarlem, NL: 7.46MB/s
    Download speed from Softlayer, Seattle, WA: 4.05MB/s
    Download speed from Softlayer, San Jose, CA: 8.57MB/s
    Download speed from Softlayer, Washington, DC: 6.56MB/s
    I/O speed : 86.0 MB/s

    So a little inconclusive, but maybe better

  • FredQc Member
    edited June 2014

    OpenVZ:

    error: "Operation not permitted" setting key "net.core.rmem_max"
    error: "Operation not permitted" setting key "net.core.wmem_max"
    error: "Operation not permitted" setting key "net.ipv4.tcp_rmem"
    error: "Operation not permitted" setting key "net.ipv4.tcp_wmem"
    

    It's sad because I thought I could do better with this:

    CPU model :  Intel(R) Pentium(R) D CPU 3.00GHz
    Number of cores : 2
    CPU frequency :  750.089 MHz
    Total amount of ram : 1024 MB
    Total amount of swap : 0 MB
    System uptime :   68 days, 20:45,       
    Download speed from CacheFly: 6.33MB/s 
    Download speed from Coloat, Atlanta GA: 2.84MB/s 
    Download speed from Softlayer, Dallas, TX: 4.54MB/s 
    Download speed from Linode, Tokyo, JP: 2.34MB/s 
    Download speed from i3d.net, Rotterdam, NL: 2.26MB/s
    Download speed from Leaseweb, Haarlem, NL: 3.33MB/s 
    Download speed from Softlayer, Singapore: 1.92MB/s 
    Download speed from Softlayer, Seattle, WA: 9.34MB/s 
    Download speed from Softlayer, San Jose, CA: 6.37MB/s 
    Download speed from Softlayer, Washington, DC: 3.62MB/s 
    I/O speed :  58.0 MB/s
    
  • black Member

    Guys, these settings are for KVM guests. You're not going to see a big difference on a host OS (no virtualization), and it's not going to work on OpenVZ.

  • Hi

    I know this is an old thread, but I just tried it on a KVM VPS running Debian 9 and network speed improved! So I guess this is still valid for Debian 9, Ubuntu 18.04, and CentOS 7?

    Are you guys using this sysctl.conf "optimization"?

  • @nqservices said:
    Hi

    I know this is an old thread, but I just tried it on a KVM VPS running Debian 9 and network speed improved! So I guess this is still valid for Debian 9, Ubuntu 18.04, and CentOS 7?

    Are you guys using this sysctl.conf "optimization"?

    No. If you have a proper host, you don't need to. If your speed is less than advertised, I would try a different kernel before this hack.

  • @smile said:

    @nqservices said:
    Hi

    I know this is an old thread, but I just tried it on a KVM VPS running Debian 9 and network speed improved! So I guess this is still valid for Debian 9, Ubuntu 18.04, and CentOS 7?

    Are you guys using this sysctl.conf "optimization"?

    No. If you have a proper host, you don't need to. If your speed is less than advertised, I would try a different kernel before this hack.

    This isn't a hack. Linux was always expected to be tuned like this to the application's needs (same with ANY OS); people just stopped tweaking because there are trade-offs and it requires testing. So they stick with defaults.

    It's best to find blogs posted by people who have worked on high-client-capacity and high-throughput servers for their experience and recommendations. People generally seek these out when researching their existing bottlenecks with default settings (i.e., gigabit and 10Gb links). They will point out the worst default settings that should be changed for almost everyone.

    I stopped doing sysctl tuning years ago, when I stopped using iptables scripts and just used firewalld.

    Nowadays, just enabling BBR on a supported kernel is the easiest return on effort.

  • @TimboJones said:
    Nowadays, just enabling BBR on a supported kernel is the easiest return on effort.

    Agreed.


  • @TimboJones said:
    Nowadays, just enabling BBR on a supported kernel is the easiest return on effort.

    @eol said:
    Agreed.

    Google BBR seems interesting. I've never tried it before. Can anyone recommend a link to a good tutorial on how to implement it on Debian 9 or Ubuntu 18.04?

    Thanks

  • eol Member
    edited December 2018

    You need to modify the kernel config and compile it.
    Just look for "compiling a kernel on Debian".

    EDIT:
    Available from kernel 4.9 and up.


  • @nqservices said:
    Google BBR seems interesting. Never try it before. Can anyone recommend a link where I can find a good tutorial on how to implement it on Debian 9 or Ubuntu 18?

    Thanks

    On a VPS with kernel 4.9 or newer, you generally just need to edit /etc/sysctl.conf (e.g. sudo nano /etc/sysctl.conf) and add:

    net.core.default_qdisc=fq
    net.ipv4.tcp_congestion_control=bbr

    Save and run:

    sudo sysctl -p

    Check that bbr is loaded:

    lsmod | grep bbr

    Takes around 30 seconds in total.

  • But are the "hack" and BBR counterproductive in combination?

  • cybertech Member
    edited December 2018

    Thanks for the tip on BBR.

    After updating the kernel: lower memory usage and faster network speed.


  • @cybertech said:
    Thanks for the tip on BBR.

    After updating the kernel: lower memory usage and faster network speed.

    What was that, 80MB/s, before BBR was enabled, do you recall?

  • cybertech Member
    edited December 2018

    @TimboJones said:

    @cybertech said:
    Thanks for the tip on BBR.

    After updating the kernel: lower memory usage and faster network speed.

    What was that, 80MB/s, before BBR was enabled, do you recall?

    Can't remember, but I was doing a single download test which maxed out at 9.3MB/s before the kernel upgrade + BBR.

    After doing so, it was consistent at around 18MB/s.

    I observed similar results on most VPSes using the 3.10 kernel from CentOS 7.6.


  • Thanks

    Before: 200-ish mb
    After: 800-ish mb

  • No suggestions about using the "hack" + BBR? Is the combination of both recommended or not?

  • netomx Member, Moderator

    TimboJones said: people just stopped tweaking because there are trade-offs and it requires testing

    @hyperblast

  • Setting the TCP window sizes requires estimating the bandwidth-delay product (BDP): https://web.archive.org/web/20080803082218/http://dast.nlanr.net/Guides/GettingStarted/TCP_window_size.html

    To deal with lossy connections you need to decrease window sizes; to deal with high bandwidth or high round-trip times, increase them. Also take RAM into consideration, because these settings apply to each TCP connection.

    For a 512MB VPS running a webserver, I just push up the minimum values to speed up window autonegotiation.

    net.ipv4.tcp_wmem = 10240 87380 12582912
    net.ipv4.tcp_rmem = 4096 87380 12582912
    

    For my home router, I push up tcp_rmem over tcp_wmem instead. There's also net.core.rmem/wmem which applies to queues across all protocols.

    If your kernel supports BBR, definitely enable it.
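    The BDP estimate described above works out to bandwidth times round-trip time. A quick sketch (the helper function and example numbers are illustrative, not from this thread):

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes in flight needed to keep the pipe full."""
    return int(bandwidth_mbps * 1e6 / 8 * rtt_ms / 1000)

# A 1 Gbps path at 100 ms RTT needs ~12.5 MB of window:
print(bdp_bytes(1000, 100))  # 12500000
```

    That figure is in the same ballpark as the 12582912-byte maximum in the tcp_rmem/tcp_wmem values above.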

  • @hyperblast said:
    no suggestions about using "hack" + bbr? is combination of both recommended or not?

    That's up to you. If your server has a single purpose, you can optimize for that. If it's a general use server, then don't worry about it.

    It really comes down to need, squeezing out the last few percent of possible performance. If you're getting acceptable speeds, call it a day.

    If you're stuck with certain hardware and can't handle concurrent users, then you can optimize and improve that. Especially, if you have tons of available ram.
