
TCP BBR

I'm sure that many of you know that online.net's AMS1 location offers "interesting" routes to some locations. For instance, I get better speeds proxying through my server in DC2 than going directly (Switzerland -> AMS is slower than Switzerland -> AMS -> Paris -> AMS).

I have recently changed /proc/sys/net/ipv4/tcp_congestion_control to BBR on most of my machines and noticed a slight improvement on my servers at OVH. But the server in AMS went from a max of 4 MB/s to almost 50 MB/s. Resetting the congestion control to an older algorithm (such as Vegas) causes the speed to drop significantly.
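
For reference, the change itself is a one-liner, though it does not survive a reboot (run as root; assumes a 4.9+ kernel with BBR compiled in or loadable):

cat /proc/sys/net/ipv4/tcp_congestion_control   # check the active algorithm
echo bbr > /proc/sys/net/ipv4/tcp_congestion_control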

Is it just me or is anybody else affected by slow speeds in AMS? I'm curious to know whether I'm the only one who benefits that much (10x speed).

Comments

  • Yes. You are the only one who benefits from a better possible throughput. Everyone else enjoys sitting and waiting.

  • It's been widely used in the Chinese community on overseas VPSes to accelerate speeds...

  • rm_rm_ IPv6 Advocate, Veteran
    edited December 2017

    Remember that when using BBR you also need to switch the qdisc (the network interface queueing algorithm) from the default pfifo_fast to fq. Otherwise BBR does not work correctly and will cause excessive packet loss. This needs to be done before the network interface comes up, which is simplest to achieve with a reboot.

    So to apply both settings on boot-up, you can add to /etc/sysctl.conf:

    net.core.default_qdisc=fq
    net.ipv4.tcp_congestion_control=bbr
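
    After the reboot, you can verify that both settings took effect (eth0 here is just an example interface name, adjust for your machine):

    sysctl net.ipv4.tcp_congestion_control
    tc qdisc show dev eth0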

    If you can't reboot right away and want a good drop-in replacement algorithm you can try out immediately, go with illinois:

    sysctl -w net.ipv4.tcp_congestion_control=illinois

    In my tests it's the next best thing after bbr, and is supported on old kernel versions too.
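
    You can also check which algorithms your kernel offers before switching; tcp_illinois usually ships as a module, so it may need loading first:

    sysctl net.ipv4.tcp_available_congestion_control
    modprobe tcp_illinois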

    Going back to the default would be:

    sysctl -w net.ipv4.tcp_congestion_control=cubic

    And lastly, yes: what got me experimenting with these algorithms some time ago was an Online.net server with crappy upload speeds (while a few more servers of the same offer had good upload). Switching to illinois helped immensely. "bbr" did not exist yet back then.

  • Interesting, I tried fq_codel and pfifo_fast, but never noticed packet loss using either of them. Do you have a source for that information? As far as I know, it is no longer necessary to use fq (according to GitHub docs).
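
    One way to actually check for that, if anyone wants to test, is to compare the retransmission counters before and after a large transfer:

    netstat -s | grep -i retrans
    ss -ti   # per-connection stats, including retransmits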

    Glad to hear this is not an isolated problem. Were you able to achieve acceptable speeds on online.net's servers?

  • rm_rm_ IPv6 Advocate, Veteran
    edited December 2017

    Jari said: Interesting, I tried fq_codel and pfifo_fast, but never noticed packet loss using either of them. Do you have a source for that information?

    E.g. here: https://news.ycombinator.com/item?id=14814530

    Seems like fq is not required anymore since kernel 4.13. But I still run 4.9 everywhere, so that bit doesn't apply to me yet.
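
    Quick way to check which side of that cutoff a machine is on:

    uname -r   # below 4.13: keep net.core.default_qdisc=fq alongside bbr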

    Jari said: Were you able to achieve acceptable speeds on online.net's servers?

    In my case I got a mix of lucky and unlucky servers with regard to upload speeds. Changing algorithms helped the unlucky ones somewhat, but one could say it's a band-aid; the proper solution was to cancel or try swapping those servers somehow. In your case it's perhaps a general situation across Online, and since you don't have anything else to try, just using bbr may be the answer. I would say 50 MB/sec speeds are more than acceptable.
