Netcup VPS 1337 - 2 vCores, 8 GB RAM, 160 GB SSD / 80 TB traffic - 4.56 EUR/m

pbx Member
edited September 2021 in General

Hey

Netcup has a special. As the title says, it's 2 vCores, 8 GB RAM, 160 GB SSD (RAID10) and 80 TB traffic for 4.56 EUR/m.

This won't bring back @cociu from the dead, but it should be pretty damn stable and it's pretty nice for this price tag. Links below:

https://www.netcup.de/bestellen/produkt.php?produkt=2778 (German site)
https://www.netcup.eu/bestellen/produkt.php?produkt=2778 (international site)

Enjoy!


Comments

  • giang Veteran
    edited September 2021

    Do you know what CPU they are using for this VPS?

    NVM, I found the details. Specs are the same as the VPS 1000 G9.

  • no mention of DDR4 ECC. any YABS from someone?

  • giang Veteran
    edited September 2021

    @cybertech said:
    no mention of DDR4 ECC. any YABS from someone?

    @pbx has posted a YABS at: https://www.lowendtalk.com/discussion/comment/3207286/#Comment_3207286

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2020-12-29                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Thu 25 Feb 2021 01:33:31 PM CET
    
    Basic System Information:
    ---------------------------------
    Processor  : QEMU Virtual CPU version 2.5+
    CPU cores  : 2 @ 1996.249 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 7.8 GiB
    Swap       : 0.0 KiB
    Disk       : 157.4 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 135.55 MB/s  (33.8k) | 1.84 GB/s    (28.8k)
    Write      | 135.91 MB/s  (33.9k) | 1.85 GB/s    (29.0k)
    Total      | 271.47 MB/s  (67.8k) | 3.70 GB/s    (57.9k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 3.48 GB/s     (6.7k) | 3.60 GB/s     (3.5k)
    Write      | 3.66 GB/s     (7.1k) | 3.84 GB/s     (3.7k)
    Total      | 7.14 GB/s    (13.9k) | 7.44 GB/s     (7.2k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed     
                    |                           |                 |                
    Clouvider       | London, UK (10G)          | 961 Mbits/sec   | 842 Mbits/sec  
    Online.net      | Paris, FR (10G)           | 956 Mbits/sec   | 856 Mbits/sec  
    WorldStream     | The Netherlands (10G)     | 982 Mbits/sec   | 948 Mbits/sec  
    Biznet          | Jakarta, Indonesia (1G)   | busy            | busy           
    Clouvider       | NYC, NY, US (10G)         | 587 Mbits/sec   | 799 Mbits/sec  
    Velocity Online | Tallahassee, FL, US (10G) | 326 Mbits/sec   | 721 Mbits/sec  
    Clouvider       | Los Angeles, CA, US (10G) | 370 Mbits/sec   | 856 Mbits/sec  
    Iveloz Telecom  | Sao Paulo, BR (2G)        | busy            | busy           
    
    iperf3 Network Speed Tests (IPv6):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed     
                    |                           |                 |                
    Clouvider       | London, UK (10G)          | 942 Mbits/sec   | 752 Mbits/sec  
    Online.net      | Paris, FR (10G)           | 790 Mbits/sec   | busy           
    WorldStream     | The Netherlands (10G)     | 968 Mbits/sec   | 903 Mbits/sec  
    Clouvider       | NYC, NY, US (10G)         | 511 Mbits/sec   | 741 Mbits/sec  
    Clouvider       | Los Angeles, CA, US (10G) | 358 Mbits/sec   | 686 Mbits/sec  
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 851                           
    Multi Core      | 1430                          
    Full Test       | https://browser.geekbench.com/v5/cpu/6683922
    
  • same as their Easter offer, lovely little box. I had a much worse GB5 score in the beginning; keep in mind it's shared cores and in no way guaranteed - just have the right expectations.

    that said, yabs right now:

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2021-06-05                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Mon Sep 13 09:37:31 CEST 2021
    
    Basic System Information:
    ---------------------------------
    Processor  : QEMU Virtual CPU version 2.5+
    CPU cores  : 2 @ 1996.250 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 7.8 GiB
    Swap       : 2.0 GiB
    Disk       : 155.4 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 68.56 MB/s   (17.1k) | 845.45 MB/s  (13.2k)
    Write      | 68.76 MB/s   (17.1k) | 849.90 MB/s  (13.2k)
    Total      | 137.32 MB/s  (34.3k) | 1.69 GB/s    (26.4k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 1.60 GB/s     (3.1k) | 1.58 GB/s     (1.5k)
    Write      | 1.69 GB/s     (3.3k) | 1.69 GB/s     (1.6k)
    Total      | 3.30 GB/s     (6.4k) | 3.27 GB/s     (3.2k)
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 739                           
    Multi Core      | 1151                          
    Full Test       | https://browser.geekbench.com/v5/cpu/9805206
    

    hopefully they sell out fast, otherwise I need to buy yet another one ...

  • @giang said: Specs are the same as the VPS 1000 G9.

    Exactly. Nice little VPS. As @Falzo mentioned, the CPU ain't dedicated, but it's capable of supporting quite some load anyway :wink:

  • Nice offer.

    I found some coupon codes for Netcup.
    You get a 5€ discount (new customers only).

    Code:
    36nc15132975950
    36nc15132971772
    36nc14444676128
    36nc151329759510
    36nc15318795872

  • Falzo Member
    edited September 2021

    @Specsblue said:
    Nice offer.

    I found some coupon codes for Netcup.
    You get a 5€ discount (new customers only).

    your affiliate codes don't work with these offers. use mine instead :-P

    @pbx I couldn't resist, however:

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2021-06-05                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Mon 13 Sep 2021 11:01:35 AM CEST
    
    Basic System Information:
    ---------------------------------
    Processor  : QEMU Virtual CPU version 2.5+
    CPU cores  : 2 @ 1996.249 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 7.8 GiB
    Swap       : 0.0 KiB
    Disk       : 157.4 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 67.53 MB/s   (16.8k) | 579.72 MB/s   (9.0k)
    Write      | 67.71 MB/s   (16.9k) | 582.77 MB/s   (9.1k)
    Total      | 135.25 MB/s  (33.8k) | 1.16 GB/s    (18.1k)
               |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ---- 
    Read       | 2.81 GB/s     (5.5k) | 2.96 GB/s     (2.8k)
    Write      | 2.96 GB/s     (5.7k) | 3.16 GB/s     (3.0k)
    Total      | 5.78 GB/s    (11.2k) | 6.12 GB/s     (5.9k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed     
                    |                           |                 |                
    Clouvider       | London, UK (10G)          | 916 Mbits/sec   | 856 Mbits/sec  
    Online.net      | Paris, FR (10G)           | 944 Mbits/sec   | 796 Mbits/sec  
    WorldStream     | The Netherlands (10G)     | 980 Mbits/sec   | 846 Mbits/sec  
    Biznet          | Jakarta, Indonesia (1G)   | busy            | busy           
    Clouvider       | NYC, NY, US (10G)         | 324 Mbits/sec   | 754 Mbits/sec  
    Velocity Online | Tallahassee, FL, US (10G) | 411 Mbits/sec   | 739 Mbits/sec  
    Clouvider       | Los Angeles, CA, US (10G) | 339 Mbits/sec   | 690 Mbits/sec  
    Iveloz Telecom  | Sao Paulo, BR (2G)        | 210 Mbits/sec   | 261 Mbits/sec  
    
    iperf3 Network Speed Tests (IPv6):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed     
                    |                           |                 |                
    Clouvider       | London, UK (10G)          | 856 Mbits/sec   | 905 Mbits/sec  
    Online.net      | Paris, FR (10G)           | busy            | 682 Mbits/sec  
    WorldStream     | The Netherlands (10G)     | 855 Mbits/sec   | 724 Mbits/sec  
    Clouvider       | NYC, NY, US (10G)         | 449 Mbits/sec   | 810 Mbits/sec  
    Clouvider       | Los Angeles, CA, US (10G) | 310 Mbits/sec   | 763 Mbits/sec  
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value                         
                    |                               
    Single Core     | 472                           
    Multi Core      | 701                           
    Full Test       | https://browser.geekbench.com/v5/cpu/9806366
    

    seems I am always unlucky with these in regard to how full the nodes are... 🤷‍♂️

  • Already out of stock :(

  • WebProject Host Rep, Veteran

    @Specsblue said:
    Nice offer.

    I found some coupon codes for Netcup.
    You get a 5€ discount (new customers only).

    Code:
    36nc15132975950
    36nc15132971772
    36nc14444676128
    36nc151329759510
    36nc15318795872

    your codes are for the setup fee, and these servers have no setup fee, so a 5€ discount on zero setup is pointless!

  • Bangumi Member
    edited September 2021

    Pretty cool. I bought one, and it's nearly half the price of Hetzner's CX31.
    GB5 score comparison here: https://browser.geekbench.com/v5/cpu/compare/9807458?baseline=9778129

  • So fast, it's already out of stock!

  • Arkas Moderator

    Do you have to commit for 12 months at Netcup, or can you get this deal with a month to month arrangement?

  • Netcup has great reviews here but past trauma with some shitty providers stops me from buying yearly deals.

  • @Arkas said:
    Do you have to commit for 12 months at Netcup, or can you get this deal with a month to month arrangement?

    It's clearly a 12-month commitment.

  • Probably not the right topic but did anyone notice speed drops in the last couple of days?
    For me it goes from full speed to as low as 2-3 Mbit; I'm just not sure where exactly the problem is.

  • @varwww said: past trauma with some shitty providers stops me from buying yearly deals.

    What made me choose Netcup for long-term projects is that they aren't the usual shitty provider; they're pretty big. They are probably among the best (most stable / solid) providers you can find in this price range.

    @neik said: Probably not the right topic but did anyone notice speed drops in the last couple of days?

    Network? I encountered no issue.

  • @pbx said: Network? I encountered no issue.

    Probably it is something network related (e.g. peering or so). I know my ISP did some work on their network and, coincidence or not, the issues started around that time.

    Interestingly enough, only download from the server seems to be affected; upload always goes at around full speed from my ISP to my server.

    Quite strange behavior that I'm trying to narrow down but am somehow lost. :-/
    Any hint is much appreciated.

  • @neik said:

    @pbx said: Network? I encountered no issue.

    Probably it is something network related (e.g. peering or so). I know my ISP did some work on their network and, coincidence or not, the issues started around that time.

    Interestingly enough, only download from the server seems to be affected; upload always goes at around full speed from my ISP to my server.

    Quite strange behavior that I'm trying to narrow down but am somehow lost. :-/
    Any hint is much appreciated.

    first thing would be tracing the routes from both ends, and ideally doing the same from other locations, to eventually find differences along the way. it could be some limitation/filter on either end. and don't forget about tcp bbr and fragmentation of packets and whatnot...
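
    something like this, as a rough sketch (SERVER_IP / HOME_IP are just placeholders for the two ends):

    # trace the path in both directions - run the first from home, the second on the VPS
    mtr -rwb -c 50 SERVER_IP
    mtr -rwb -c 50 HOME_IP

    # see which TCP congestion control algorithm is currently active
    sysctl net.ipv4.tcp_congestion_control

    # quick check for fragmentation / path MTU trouble (1472 + 28 bytes of headers = 1500)
    ping -M do -s 1472 -c 4 SERVER_IP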

  • MTR from server to me: https://1drv.ms/u/s!AoPn9ceb766mg49I0i4J5Ekg_4A2Bg?e=q0YYd1

    MTR from me to server: https://1drv.ms/u/s!AoPn9ceb766mg49JKKe-u31HFVmLZA?e=cbe3NX

    @Falzo said: it could be some limitation/filter on either end. and don't forget about tcp bbr and fragmentation of packets and whatnot...

    What could I do to narrow this down?

  • Falzo Member
    edited September 2021

    seems to be vodafone/kabelBW? is this by any chance DS-Lite aka shared IPv4? I had my fair share of troubles with Vodafone often 'fixing' stuff and seeing bad network to different locations like... daily :(

    switching from that grey branded router/modem (afaik Zyxel) to a fritzbox also helped a lot, but it's still congested from time to time, and then there is obviously not much you can do about it.
    usually everything using the native IPv6 is more stable and tends to be faster, but everything going through IPv4 sucks.

    however if possible I'd try to use something like iperf to test the speeds as well, also to see if it is more of a per-connection limit and whether you can still max out the ISP's advertised speed at all. maybe while running those tests spin up an instance at hetzner cloud in parallel to have something to compare to, ideally in Nuremberg as netcup's servers are located there as well (though Anexia obviously has its very own peering)
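
    for reference, roughly the kind of iperf3 runs I mean (SERVER_IP being a placeholder for the VPS, or for the hetzner box used as comparison):

    # on the server
    iperf3 -s

    # from home
    iperf3 -c SERVER_IP -t 30          # single connection, upload direction
    iperf3 -c SERVER_IP -t 30 -R       # reverse mode: download from the server
    iperf3 -c SERVER_IP -t 30 -R -P 4  # 4 parallel streams, to spot per-connection limits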

    PS: apart from that your download indeed goes a different route (telia in between) compared to the way back (using telekom), so at least this also could be a reason for seeing different speeds.

  • Falzo Member
    edited September 2021

    out of curiosity I just checked the routing from home right now. I am in NRW but, as said, on vodafone with a comparable IP/subnet as well, and the routes look identical to yours. outgoing (from netcup to here) via telia and the other way around via telekom.

    running iperf3 yields some interesting results...
    a single connection sending from home easily maxes out my 50 Mbit/s upload at all times.
    a single connection the other way (download) varies a lot, sometimes maxing out the Gbit/s, sometimes not getting over 10 Mbit/s.

    even when running multiple connections in parallel, one can see a few being limited and some not:

    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec  10.2 MBytes  8.53 Mbits/sec   70             sender
    [  4]   0.00-10.00  sec  8.80 MBytes  7.38 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec  10.7 MBytes  8.97 Mbits/sec   55             sender
    [  6]   0.00-10.00  sec  9.68 MBytes  8.12 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec  9.19 MBytes  7.71 Mbits/sec   66             sender
    [  8]   0.00-10.00  sec  8.04 MBytes  6.75 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec   722 MBytes   606 Mbits/sec  271             sender
    [ 10]   0.00-10.00  sec   710 MBytes   596 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec   753 MBytes   631 Mbits/sec  462             sender
    [SUM]   0.00-10.00  sec   737 MBytes   618 Mbits/sec                  receiver
    

    this seems to happen randomly and looks like some kind of artificial limit, yet I can't tell for now where it comes from or where it is applied. If I find some time later, I am going to check a hetzner cloud server as said above and probably also recheck the connection to that netcup server from other locations...

    edit: despite my own advice above, I forgot to check tcp bbr in my own environment 🤦‍♂️ ... did that right now and it wasn't even enabled on that netcup box. enabled it and reran iperf3:

    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   268 MBytes   225 Mbits/sec  8709             sender
    [  4]   0.00-10.00  sec   243 MBytes   204 Mbits/sec                  receiver
    [  6]   0.00-10.00  sec   459 MBytes   385 Mbits/sec  7893             sender
    [  6]   0.00-10.00  sec   438 MBytes   367 Mbits/sec                  receiver
    [  8]   0.00-10.00  sec   250 MBytes   210 Mbits/sec  5637             sender
    [  8]   0.00-10.00  sec   231 MBytes   193 Mbits/sec                  receiver
    [ 10]   0.00-10.00  sec   207 MBytes   174 Mbits/sec  4773             sender
    [ 10]   0.00-10.00  sec   191 MBytes   160 Mbits/sec                  receiver
    [SUM]   0.00-10.00  sec  1.16 GBytes   994 Mbits/sec  27012             sender
    [SUM]   0.00-10.00  sec  1.08 GBytes   925 Mbits/sec                  receiver
    

    voila.

    you might want to start reading here about tcp bbr: https://www.geekbundle.org/linux-tcp-bbr-congestion-control/
    maybe it is that simple already for your issues and not about routing and peering and such.
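
    for reference, roughly how one would check and enable bbr - just a sketch, assumes a kernel >= 4.9, double-check for your distro before copy-pasting:

    # check what's available and what's active right now
    sysctl net.ipv4.tcp_available_congestion_control
    sysctl net.ipv4.tcp_congestion_control

    # enable bbr persistently (fq as qdisc is the usual companion)
    printf 'net.core.default_qdisc=fq\nnet.ipv4.tcp_congestion_control=bbr\n' | sudo tee /etc/sysctl.d/99-bbr.conf
    sudo sysctl --system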

  • this promo ended days ago...

  • The promotion is long over: that's it, finito.

  • @Falzo said:
    seems to be vodafone/kabelBW? is this by any chance DS-Lite aka shared IPv4? I had my fair share of troubles with Vodafone often 'fixing' stuff and seeing bad network to different locations like... daily :(

    switching from that grey branded router/modem (afaik Zyxel) to a fritzbox also helped a lot, but it's still congested from time to time, and then there is obviously not much you can do about it.
    usually everything using the native IPv6 is more stable and tends to be faster, but everything going through IPv4 sucks.

    however if possible I'd try to use something like iperf to test the speeds as well, also to see if it is more of a per-connection limit and whether you can still max out the ISP's advertised speed at all. maybe while running those tests spin up an instance at hetzner cloud in parallel to have something to compare to, ideally in Nuremberg as netcup's servers are located there as well (though Anexia obviously has its very own peering)

    PS: apart from that your download indeed goes a different route (telia in between) compared to the way back (using telekom), so at least this also could be a reason for seeing different speeds.

    Yeah, KabelBW is correct, but I'm not on a DS-Lite line; I have a native address for both IPv4 and IPv6. A Fritzbox 6490 has also been running for years now.

    I also contacted support in parallel, and from the MTRs I sent them they seem to believe that it is related to their changed routing - see also here: https://www.netcup-status.de/category/netzwerk/

    Nevertheless, thank you for all the effort, I will definitely look deeper into it.
    If any doubts come up, I will come back to you. :-)
    Thanks so far.

  • pbx Member
    edited September 2021

    As others have said, it's over; I'd add that these generally don't last long (and the smaller the VPS, the quicker it's out of stock). If you want one in the future, they have a link to create an account without ordering anything beforehand, so that you can quickly grab a good deal if/when it becomes available. Good luck!

  • jsg Member, Resident Benchmarker

    @Falzo said:
    out of curiosity I just checked the routing from home right now. I am in NRW but, as said, on vodafone with a comparable IP/subnet as well, and the routes look identical to yours. outgoing (from netcup to here) via telia and the other way around via telekom.

    running iperf3 yields some interesting results...
    a single connection sending from home easily maxes out my 50 Mbit/s upload at all times.
    a single connection the other way (download) varies a lot, sometimes maxing out the Gbit/s, sometimes not getting over 10 Mbit/s.

    even when running multiple connections in parallel, one can see a few being limited and some not:

    - - - - - - - - - - - - - - - - - - - - - - - - -
    > [ ID] Interval           Transfer     Bandwidth       Retr
    > [  4]   0.00-10.00  sec  10.2 MBytes  8.53 Mbits/sec   70             sender
    > [  4]   0.00-10.00  sec  8.80 MBytes  7.38 Mbits/sec                  receiver
    > [  6]   0.00-10.00  sec  10.7 MBytes  8.97 Mbits/sec   55             sender
    > [  6]   0.00-10.00  sec  9.68 MBytes  8.12 Mbits/sec                  receiver
    > [  8]   0.00-10.00  sec  9.19 MBytes  7.71 Mbits/sec   66             sender
    > [  8]   0.00-10.00  sec  8.04 MBytes  6.75 Mbits/sec                  receiver
    > [ 10]   0.00-10.00  sec   722 MBytes   606 Mbits/sec  271             sender
    > [ 10]   0.00-10.00  sec   710 MBytes   596 Mbits/sec                  receiver
    > [SUM]   0.00-10.00  sec   753 MBytes   631 Mbits/sec  462             sender
    > [SUM]   0.00-10.00  sec   737 MBytes   618 Mbits/sec                  receiver
    > 

    this seems to happen randomly and looks like some kind of artificial limit, yet I can't tell for now where it comes from or where it is applied. If I find some time later, I am going to check a hetzner cloud server as said above and probably also recheck the connection to that netcup server from other locations...

    edit: despite my own advice above, I forgot to check tcp bbr in my own environment 🤦‍♂️ ... did that right now and it wasn't even enabled on that netcup box. enabled it and reran iperf3:

    - - - - - - - - - - - - - - - - - - - - - - - - -
    > [ ID] Interval           Transfer     Bandwidth       Retr
    > [  4]   0.00-10.00  sec   268 MBytes   225 Mbits/sec  8709             sender
    > [  4]   0.00-10.00  sec   243 MBytes   204 Mbits/sec                  receiver
    > [  6]   0.00-10.00  sec   459 MBytes   385 Mbits/sec  7893             sender
    > [  6]   0.00-10.00  sec   438 MBytes   367 Mbits/sec                  receiver
    > [  8]   0.00-10.00  sec   250 MBytes   210 Mbits/sec  5637             sender
    > [  8]   0.00-10.00  sec   231 MBytes   193 Mbits/sec                  receiver
    > [ 10]   0.00-10.00  sec   207 MBytes   174 Mbits/sec  4773             sender
    > [ 10]   0.00-10.00  sec   191 MBytes   160 Mbits/sec                  receiver
    > [SUM]   0.00-10.00  sec  1.16 GBytes   994 Mbits/sec  27012             sender
    > [SUM]   0.00-10.00  sec  1.08 GBytes   925 Mbits/sec                  receiver
    > 

    voila.

    you might want to start reading here about tcp bbr: https://www.geekbundle.org/linux-tcp-bbr-congestion-control/
    maybe it is that simple already for your issues and not about routing and peering and such.

    Yep, netcup sells nice VMs. I still have and very much like my VDS with them, which was and is much cheaper than e.g. Contabo's VDSs and in about the same ballpark in terms of performance.
    And yes, one needs a 12-month commitment with netcup - but one also gets a really sweet and tasty deal. Good stuff for a great price indeed.

    As for bbr: I very much doubt that switching the CCA from the OS default (which isn't bad at all on Linux) to bbr improves performance as much as your test suggests. I did a whole series of tests of common (including modern) CCAs and found differences in performance, but not really large ones.
    That is not meant to doubt your testing or your findings. I rather presume that your switching the CCA somehow defeats some netcup limiting or that netcup has made a CCA choice in their whole network that just happens to go extremely well with bbr.
    Side note: I found in my rather extensive testing under interesting conditions (Australia) that changing the CCA usually only significantly improves performance in a setting that happens to match a given CCA's "goal", e.g. using some CCA in Australia (looong distances, often flaky connectivity, and often slow speed) may indeed improve performance a lot. Using that same CCA in, say, Europe, however, does not and may actually decrease performance (and vice versa).

    Generally I'd recommend just using your OS's default, unless you (or your target) happen to be somewhat special (like e.g. Australia). But still, I find your CCA findings at netcup very interesting.
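
    (Side note for anyone who wants to repeat such comparisons: on Linux, iperf3 can select the CCA per test run via -C, so one can compare algorithms against the same endpoint without changing the system default. A rough sketch, with SERVER_IP as a placeholder and assuming the listed algorithms are available on the client:)

    # list what the kernel offers, then test a few against the same endpoint
    sysctl net.ipv4.tcp_available_congestion_control
    iperf3 -c SERVER_IP -t 30 -C cubic
    iperf3 -c SERVER_IP -t 30 -C bbr
    iperf3 -c SERVER_IP -t 30 -C reno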

  • @jsg said: That is not meant to doubt your testing or your findings. I rather presume that your switching the CCA somehow defeats some netcup limiting or that netcup has made a CCA choice in their whole network that just happens to go extremely well with bbr.

    totally agreed. I didn't dig deeper into it and barely did 5-10 test runs (only iperf3) per setting, e.g. different amounts of parallel connections. I usually don't change to BBR or such at all, yet the outcome was reproducible, at least with this setup and within iperf3.

    even more interesting, I did the same on a hetzner cloud instance to have a comparison and, guess what, I found very comparable behaviour.

    now to make things more complex, I also did it on a hetzner dedi; there was no such random per-connection limit to be seen from the beginning, and obviously changing to bbr didn't change anything at all.

    makes me think it could be a simple thing in qemu's network/bridging implementation which, as you said, goes well with bbr...

    but whatever one wants to make out of it - obviously you want to take all these results with a grain of salt.

  • jsg Member, Resident Benchmarker

    @Falzo said:

    @jsg said: That is not meant to doubt your testing or your findings. I rather presume that your switching the CCA somehow defeats some netcup limiting or that netcup has made a CCA choice in their whole network that just happens to go extremely well with bbr.

    totally agreed. I didn't dig deeper into it and barely did 5-10 test runs (only iperf3) per setting, e.g. different amounts of parallel connections. I usually don't change to BBR or such at all, yet the outcome was reproducible, at least with this setup and within iperf3.

    even more interesting, I did the same on a hetzner cloud instance to have a comparison and, guess what, I found very comparable behaviour.

    now to make things more complex, I also did it on a hetzner dedi; there was no such random per-connection limit to be seen from the beginning, and obviously changing to bbr didn't change anything at all.

    makes me think it could be a simple thing in qemu's network/bridging implementation which, as you said, goes well with bbr...

    but whatever one wants to make out of it - obviously you want to take all these results with a grain of salt.

    Psshhh! Keep it quiet ... or do you want netcup to get wind of it and make your nice improvement go away?

    Anyway, a good find!

  • The issue I was having was definitely linked to the DE-CIX issue. It went away as soon as support told me DE-CIX was live again (checked it myself with traceroute), and it became bad again yesterday evening when they were forced to move the peering away from DE-CIX (see https://www.netcup-status.de/category/netzwerk/) to Telia again.

    It's nice that they change the peering so that the servers can still be reached, but they should have a look at their backup peering, as it is quite shitty and not really acceptable.
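
    (For what it's worth, a quick way to check which carrier the path currently takes - SERVER_IP being a placeholder for your VPS:)

    # -z adds AS numbers, which makes DE-CIX vs. Telia/Telekom hops easier to spot
    mtr -rwz -c 50 SERVER_IP

    # or quick and dirty with plain traceroute
    traceroute SERVER_IP | grep -Ei 'decix|telia|telekom'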
