DACENTEC >>> Dell 2xL5420 16GB 2x2TB SATA - $35/mo - 24/7/365 On Site Support


Comments

  • dfroe Member, Host Rep
    edited March 2019

    For those who are interested, here are nench and cryptsetup benchmarks on that old Opteron 1385.
    The network looks pretty decent and support was great during the initial setup. Good deal for a cheap storage box. Thanks for sharing!

    nench:

    -------------------------------------------------
     nench.sh v2019.03.01 -- https://git.io/nench.sh
     benchmark timestamp:    2019-03-30 16:26:06 UTC
    -------------------------------------------------
    
    Processor:    Quad-Core AMD Opteron(tm) Processor 1385
    CPU cores:    4
    Frequency:    2000.000 MHz
    RAM:          7.8G
    Swap:         4.0G
    Kernel:       Linux 4.9.0-8-amd64 x86_64
    
    Disks:
    sda    1.8T  HDD
    sdb    1.8T  HDD
    sdc  931.5G  HDD
    sdd  931.5G  HDD
    sde  931.5G  HDD
    sdf  931.5G  HDD
    
    CPU: SHA256-hashing 500 MB
        4.198 seconds
    CPU: bzip2-compressing 500 MB
        7.753 seconds
    CPU: AES-encrypting 500 MB
        4.077 seconds
    
    ioping: seek rate
        min/avg/max/mdev = 118.4 us / 3.29 ms / 23.9 ms / 3.27 ms
    ioping: sequential read speed
        generated 3.04 k requests in 5.00 s, 760.2 MiB, 608 iops, 152.0 MiB/s
    
    dd: sequential write speed
        1st run:    122.07 MiB/s
        2nd run:    120.16 MiB/s
        3rd run:    123.98 MiB/s
        average:    122.07 MiB/s
    
    IPv4 speedtests
        your IPv4:    199.255.xxx.xxx
    
        Cachefly CDN:         100.17 MiB/s
        Leaseweb (NL):        9.29 MiB/s
        Softlayer DAL (US):   46.06 MiB/s
        Online.net (FR):      16.63 MiB/s
        OVH BHS (CA):         27.87 MiB/s
    
    IPv6 speedtests
        your IPv6:    2607:5600:xxxx:xxxx
    
        Leaseweb (NL):        8.47 MiB/s
        Softlayer DAL (US):   0.00 MiB/s
        Online.net (FR):      16.71 MiB/s
        OVH BHS (CA):         38.50 MiB/s
    

    cryptsetup:

    # Tests are approximate using memory only (no storage IO).
    PBKDF2-sha1       582542 iterations per second for 256-bit key
    PBKDF2-sha256     686240 iterations per second for 256-bit key
    PBKDF2-sha512     500274 iterations per second for 256-bit key
    PBKDF2-ripemd160  537180 iterations per second for 256-bit key
    PBKDF2-whirlpool  348595 iterations per second for 256-bit key
    #  Algorithm | Key |  Encryption |  Decryption
         aes-cbc   128b   163.5 MiB/s   183.8 MiB/s
     serpent-cbc   128b    67.2 MiB/s   178.8 MiB/s
     twofish-cbc   128b   156.4 MiB/s   188.9 MiB/s
         aes-cbc   256b   129.5 MiB/s   141.2 MiB/s
     serpent-cbc   256b    66.3 MiB/s   177.0 MiB/s
     twofish-cbc   256b   156.8 MiB/s   189.5 MiB/s
         aes-xts   256b   179.5 MiB/s   179.2 MiB/s
     serpent-xts   256b   160.2 MiB/s   157.8 MiB/s
     twofish-xts   256b   174.3 MiB/s   172.1 MiB/s
         aes-xts   512b   139.4 MiB/s   138.9 MiB/s
     serpent-xts   512b   158.1 MiB/s   157.6 MiB/s
     twofish-xts   512b   174.2 MiB/s   172.2 MiB/s
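
    For reference, both benchmarks can be reproduced roughly like this (the nench URL is the one printed in its own header above):

    # fetch and run the nench benchmark script
    curl -sL https://git.io/nench.sh | bash

    # cryptsetup's built-in in-memory cipher benchmark (no storage I/O involved)
    cryptsetup benchmark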
    
  • hacktek Member
    edited March 2019

    Dacentec's network is good till you get nullrouted because you went over 400 Mbps. It happened to me on those old Opterons; I'm willing to bet it will happen to you.

    Also, this was just me downloading stuff to the server, I wasn't being attacked.

  • @hacktek said:
    Dacentec's network is good till you get nullrouted because you went over 400 Mbps. It happened to me on those old Opterons; I'm willing to bet it will happen to you.

    Also, this was just me downloading stuff to the server, I wasn't being attacked.

    I want to know how the hell you managed this on one of their 138x shitboxes. Those can barely even do software RAID1 without shitting themselves.

  • @dfroe said:
    IPv6 speedtests
    your IPv6: 2607:5600:xxxx:xxxx

    David, how did you get IPv6 for the Opteron?
    I just see "/30 of IPv4 Addresses" in the panel...
    Did you write to support to enable IPv6?

  • hacktek Member
    edited March 2019

    @Letzien said:

    @hacktek said:
    Dacentec's network is good till you get nullrouted because you went over 400 Mbps. It happened to me on those old Opterons; I'm willing to bet it will happen to you.

    Also, this was just me downloading stuff to the server, I wasn't being attacked.

    I want to know how the hell you managed this on one of their 138x shitboxes. Those can barely even do software RAID1 without shitting themselves.

    A ZFS stripe with the four 1 TB drives and dedup off.
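
    Roughly along these lines, if anyone wants to replicate it (pool and device names are just examples):

    # striped pool across the four 1 TB drives, no redundancy
    zpool create -o ashift=12 tank /dev/sdc /dev/sdd /dev/sde /dev/sdf

    # dedup is off by default, but make it explicit
    zfs set dedup=off tank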

  • dfroe Member, Host Rep

    @SashkaPro said:
    David, how did you get IPv6 for the Opteron?
    I just see "/30 of IPv4 Addresses" in the panel...
    Did you write to support to enable IPv6?

    Yep. As their network supports native IPv6, I just asked them nicely. Sometimes that helps. :) A few minutes later they sent me my IPv6 details.

    @Letzien said:
    I want to know how the hell you managed this on one of their 138x shitboxes. Those can barely even do software RAID1 without shitting themselves.

    At least those CPUs should be more capable than those Kimsufi Atoms.
    My intention was to create a ZFS RAIDz1 on LUKS over 6x 1 TB for cold backup storage.
    I do not expect any performance miracles, but I'd say >10 MiB/s should be realistic.
    I am currently wiping the disks and can post some results tomorrow.
    I already have enough idling servers. I wanna see those old rusty transistors burn. :smiley:
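
    A rough sketch of what I have in mind (partition and mapper names are placeholders, not the final layout):

    # encrypt each backing partition with LUKS (repeat for all six disks)
    cryptsetup luksFormat --cipher twofish-xts-plain64 --key-size 512 /dev/sda2
    cryptsetup open /dev/sda2 luks-sda2

    # then build the RAIDz1 on top of the opened LUKS mappers
    zpool create -o ashift=12 backup raidz1 \
        /dev/mapper/luks-sda2 /dev/mapper/luks-sdb2 /dev/mapper/luks-sdc2 \
        /dev/mapper/luks-sdd2 /dev/mapper/luks-sde2 /dev/mapper/luks-sdf2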

  • sin Member

    @hacktek said:
    Dacentec's network is good till you get nullrouted because you went over 400 Mbps. It happened to me on those old Opterons; I'm willing to bet it will happen to you.

    Also, this was just me downloading stuff to the server, I wasn't being attacked.

    They have a DDoS protection option for $10 now I believe.

  • dfroe Member, Host Rep

    And here is a short 'dd' test after setting up ZFS RAIDz1 over 6x 1 TB partitions, each encrypted with LUKS (twofish-xts-plain64). ZFS caching has been disabled to ensure the data bypasses memory. All four CPU cores are maxed out during the test. Numbers are stable across multiple runs.
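
    (For reference, the caching can be switched off per dataset roughly like this; 'backup' is just an example pool name.)

    # keep ZFS from serving reads out of ARC/L2ARC so dd actually hits the disks
    zfs set primarycache=none backup
    zfs set secondarycache=none backup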

    write (200 MB/s)

    $ dd if=/dev/zero of=testfile bs=10G count=1 oflag=dsync
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB, 2.0 GiB) copied, 10.2524 s, 209 MB/s
    

    read (180 MB/s)

    $ dd if=testfile of=/dev/null bs=10G
    0+1 records in
    0+1 records out
    2147479552 bytes (2.1 GB, 2.0 GiB) copied, 11.9222 s, 180 MB/s
    

    As expected, the old CPU is the bottleneck, but I do not need to sustain more than 100 MB/s anyway.

    The network seems to be throttled somewhere around ~250 Mbit/s.

    $ iperf3 -p $((5200+(RANDOM%10))) -c ping.online.net
    Connecting to host ping.online.net, port 5205
    [  4] local 199.255.xxx.xxx port 55734 connected to 62.210.18.40 port 5205
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  5.44 MBytes  45.6 Mbits/sec    0   1.14 MBytes
    [  4]   1.00-2.00   sec  22.7 MBytes   191 Mbits/sec    0   7.51 MBytes
    [  4]   2.00-3.00   sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    [  4]   3.00-4.00   sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    [  4]   4.00-5.00   sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    [  4]   5.00-6.00   sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    [  4]   6.00-7.00   sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    [  4]   7.00-8.00   sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    [  4]   8.00-9.00   sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    [  4]   9.00-10.00  sec  33.9 MBytes   285 Mbits/sec    0   7.88 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   300 MBytes   251 Mbits/sec    0             sender
    [  4]   0.00-10.00  sec   299 MBytes   251 Mbits/sec                  receiver
    
    # iperf3 -p $((5200+(RANDOM%10))) -c ping-ams1.online.net
    Connecting to host ping-ams1.online.net, port 5206
    [  4] local 199.255.xxx.xxx port 59238 connected to 163.172.208.7 port 5206
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  5.52 MBytes  46.3 Mbits/sec    0   1.66 MBytes
    [  4]   1.00-2.00   sec  30.2 MBytes   253 Mbits/sec    0   7.51 MBytes
    [  4]   2.00-3.00   sec  30.2 MBytes   253 Mbits/sec    0   7.86 MBytes
    [  4]   3.00-4.00   sec  30.2 MBytes   253 Mbits/sec    0   7.87 MBytes
    [  4]   4.00-5.00   sec  30.2 MBytes   253 Mbits/sec    0   7.88 MBytes
    [  4]   5.00-6.00   sec  30.2 MBytes   253 Mbits/sec    0   7.88 MBytes
    [  4]   6.00-7.00   sec  30.2 MBytes   253 Mbits/sec    0   7.88 MBytes
    [  4]   7.00-8.00   sec  30.2 MBytes   253 Mbits/sec    0   7.88 MBytes
    [  4]   8.00-9.00   sec  30.2 MBytes   253 Mbits/sec    0   7.88 MBytes
    [  4]   9.00-10.00  sec  30.2 MBytes   253 Mbits/sec    0   7.88 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   277 MBytes   233 Mbits/sec    0             sender
    [  4]   0.00-10.00  sec   277 MBytes   233 Mbits/sec                  receiver
    
    $ iperf3 -p $((5200+(RANDOM%10))) -c bouygues.iperf.fr
    Connecting to host bouygues.iperf.fr, port 5204
    [  4] local 2607:5600:xxx port 34092 connected to 2001:860:deff:1000::2 port 5204
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  7.36 MBytes  61.7 Mbits/sec    0   3.47 MBytes
    [  4]   1.00-2.00   sec  32.7 MBytes   274 Mbits/sec    1   2.51 MBytes
    [  4]   2.00-3.00   sec  33.9 MBytes   285 Mbits/sec    0   5.26 MBytes
    [  4]   3.00-4.00   sec  34.0 MBytes   285 Mbits/sec    0   5.26 MBytes
    [  4]   4.00-5.00   sec  34.0 MBytes   285 Mbits/sec    0   5.26 MBytes
    [  4]   5.00-6.00   sec  34.0 MBytes   285 Mbits/sec    0   5.26 MBytes
    [  4]   6.00-7.00   sec  34.0 MBytes   285 Mbits/sec    0   5.26 MBytes
    [  4]   7.00-8.00   sec  34.0 MBytes   285 Mbits/sec    0   5.26 MBytes
    [  4]   8.00-9.00   sec  34.0 MBytes   285 Mbits/sec    0   5.26 MBytes
    [  4]   9.00-10.00  sec  34.0 MBytes   285 Mbits/sec    0   5.26 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   312 MBytes   262 Mbits/sec    1             sender
    [  4]   0.00-10.00  sec   312 MBytes   262 Mbits/sec                  receiver
    
    $ iperf3 -p 5002 -c speedtest.serverius.net
    Connecting to host speedtest.serverius.net, port 5002
    [  4] local 2607:5600:xxx port 54498 connected to 2a00:1ca8:33::2 port 5002
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  4.99 MBytes  41.8 Mbits/sec    0    770 KBytes
    [  4]   1.00-2.00   sec  20.2 MBytes   170 Mbits/sec    0   6.01 MBytes
    [  4]   2.00-3.00   sec  31.2 MBytes   262 Mbits/sec    0   6.01 MBytes
    [  4]   3.00-4.00   sec  30.2 MBytes   254 Mbits/sec    0   6.01 MBytes
    [  4]   4.00-5.00   sec  29.5 MBytes   247 Mbits/sec    0   6.01 MBytes
    [  4]   5.00-6.00   sec  31.5 MBytes   264 Mbits/sec    0   6.01 MBytes
    [  4]   6.00-7.00   sec  30.2 MBytes   253 Mbits/sec    0   6.01 MBytes
    [  4]   7.00-8.00   sec  29.5 MBytes   247 Mbits/sec    0   6.01 MBytes
    [  4]   8.00-9.00   sec  31.2 MBytes   262 Mbits/sec    0   6.01 MBytes
    [  4]   9.00-10.00  sec  30.2 MBytes   254 Mbits/sec    0   6.01 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   269 MBytes   225 Mbits/sec    0             sender
    [  4]   0.00-10.00  sec   268 MBytes   225 Mbits/sec                  receiver
    

    => 4.2 TiB of usable RAIDz1 storage plus another mirrored TB for the OS, with 100+ MB/s local I/O and 30+ MB/s network throughput at $25. That's about 5 EUR/TiB (after redundancy) on a dedicated server.
    I can't complain. :)

  • I refuse to reset my Paypal password. Repeat. I refuse to reset my Paypal password. Repeat. I refuse to reset my Paypal password.

  • @poisson said:
    I refuse to reset my Paypal password. Repeat. I refuse to reset my Paypal password. Repeat. I refuse to reset my Paypal password.

    It really is an awful shitbox that underperforms most OVZ-based storage nodes. Go buy @KuJoe's not-OVZ OpenVZ7.

  • Foul Member

    When I had Dacentec:

    I'd always get nullrouted, and all I was doing on the server was streaming childhood home videos via Plex.

    I got tired of getting nullrouted and having to ticket support every SINGLE day.

    Not worth it.

  • dfroe Member, Host Rep

    @Letzien said:
    It really is an awful shitbox that underperforms most OVZ-based storage nodes. Go buy KuJoe's not-OVZ OpenVZ7.

    I feel most comfortable when I can use LUKS encryption and zfs send/receive over SSH to transfer incremental filesystem backups. I doubt this will work with OVZ, thus I prefer KVM or bare metal.
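
    The workflow I mean looks roughly like this (host, pool and snapshot names are just examples):

    # initial full replication of a snapshot to the backup box
    zfs snapshot tank/data@base
    zfs send tank/data@base | ssh backupbox zfs receive backup/data

    # later runs only send the increment between two snapshots
    zfs snapshot tank/data@daily1
    zfs send -i tank/data@base tank/data@daily1 | ssh backupbox zfs receive backup/data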

    @Foul said:
    When I had Dacentec:
    I'd always get nullrouted

    Let's see whether I run into a similar issue, which of course would be a show stopper. The server was ordered with a 1 Gbps port and 10 TB of traffic included. Optionally it was possible to upgrade the traffic or go for an unmetered 100 Mbps port. So as long as I stay within the monthly 10 TB traffic budget, I do not see a reason why they should nullroute customers. Their ToS doesn't mention getting nullrouted for exceeding a certain throughput level either. That's a bit weird.

  • I may be able to shed more light on the nullrouting issue.

    I had a server at Dacentec for a couple of years, and the only issue I had was getting nullrouted when using rclone to transfer files to various cloud storage providers. Apparently, the trouble was not with the bandwidth used, but with the packet rate. Dacentec has rather low tolerance for transfers that use a high packet rate, and their systems will block you for abuse if you hit their limit.

    I asked their support people how I could avoid this, but they told me that it's a software issue, so they can't help. My theory is that the onboard network hardware on their older motherboards, or perhaps the corresponding drivers, may not support the use of jumbo packets. For this reason, they create what looks like a network flood when handling high-bandwidth transfers. When I've tested similar transfers on newer hardware at other datacenters, I see just a fraction of the packet rate for the same transfer speed.

    Anyway, I have no idea how to fix this, and I'm certainly not willing to play around with network drivers on otherwise stable Linux distros like CentOS 7, so I never did any further troubleshooting.
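
    If anyone wants to check this theory on their own box, the interface packet counters give a rough idea of the packet rate during a transfer (the interface name is just an example):

    # sample RX/TX packet counters twice, 10 seconds apart, and eyeball the delta
    ip -s link show eth0; sleep 10; ip -s link show eth0

    # check whether the NIC is even configured for jumbo frames (default MTU is 1500)
    ip link show eth0 | grep mtu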

  • hacktek Member
    edited April 2019

    ^ Yup, this is similar to what happened to me, except I was downloading, not uploading. I had to rate-limit my downloads to 300 Mbps, which is of course ridiculous for a 1 Gbps port, even with the lowly 10 TB max transfer per month.

    Also, the hardware was quite old. I believe most disks had like 50,000 hours on them. I also had issues with one of the drives in what turned out to be a faulty SATA cable. Support was top notch, however; I can't fault that at all. For everything else, my opinion is there are better providers out there (unless, like I said before, this is mostly a cold storage box, in which case it should be fine).

  • @hacktek said:
    I also had issues with one of the drives in what turned out to be a faulty SATA cable.

    Ah, I think I got your server.

    ata4: EH in SWNCQ mode,QC:qc_active 0xFC sactive 0xFC
    ata4: SWNCQ:qc_active 0x4 defer_bits 0xF8 last_issue_tag 0x2
      dhfis 0x4 dmafis 0x4 sdbfis 0x0
    ata4: ATA_REG 0x41 ERR_REG 0x84
    ata4: tag : dhfis dmafis sdbfis sactive
    ata4: tag 0x2: 1 1 0 1  
    ata4.00: exception Emask 0x1 SAct 0xfc SErr 0x400000 action 0x6 frozen
    ata4.00: Ata error. fis:0x21
    ata4: SError: { Handshk }
    ata4.00: failed command: WRITE FPDMA QUEUED
    ata4.00: cmd 61/00:10:80:bb:6f/04:00:77:00:00/40 tag 2 ncq 524288 out
             res 41/84:10:80:bb:6f/84:00:77:00:00/40 Emask 0x10 (ATA bus error)
    ata4.00: status: { DRDY ERR }
    ata4.00: error: { ICRC ABRT }
    ata4.00: failed command: WRITE FPDMA QUEUED
    ata4.00: cmd 61/00:18:80:bf:6f/01:00:77:00:00/40 tag 3 ncq 131072 out
             res 41/84:10:80:bb:6f/84:00:77:00:00/40 Emask 0x10 (ATA bus error)
    ata4.00: status: { DRDY ERR }
    ata4.00: error: { ICRC ABRT }
    ata4.00: failed command: WRITE FPDMA QUEUED
    ata4.00: cmd 61/00:20:80:c0:6f/02:00:77:00:00/40 tag 4 ncq 262144 out
             res 41/84:10:80:bb:6f/84:00:77:00:00/40 Emask 0x10 (ATA bus error)
    ata4.00: status: { DRDY ERR }
    ata4.00: error: { ICRC ABRT }
    ata4.00: failed command: WRITE FPDMA QUEUED
    ata4.00: cmd 61/00:28:80:c2:6f/03:00:77:00:00/40 tag 5 ncq 393216 out
             res 41/84:10:80:bb:6f/84:00:77:00:00/40 Emask 0x10 (ATA bus error)
    ata4.00: status: { DRDY ERR }
    ata4.00: error: { ICRC ABRT }
    ata4.00: failed command: WRITE FPDMA QUEUED
    ata4.00: cmd 61/00:30:80:c5:6f/04:00:77:00:00/40 tag 6 ncq 524288 out
             res 41/84:10:80:bb:6f/84:00:77:00:00/40 Emask 0x10 (ATA bus error)
    ata4.00: status: { DRDY ERR }
    ata4.00: error: { ICRC ABRT }
    ata4.00: failed command: WRITE FPDMA QUEUED
    ata4.00: cmd 61/80:38:80:c9:6f/01:00:77:00:00/40 tag 7 ncq 196608 out
             res 41/84:10:80:bb:6f/84:00:77:00:00/40 Emask 0x10 (ATA bus error)
    ata4.00: status: { DRDY ERR }
    ata4.00: error: { ICRC ABRT }
    ata4: hard resetting link
    ata4: nv: skipping hardreset on occupied port
    ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    ata4.00: configured for UDMA/133
    ata4: EH complete
    

    smartctl shows the disk is good.
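
    For this kind of ICRC/ABRT error it is worth watching the SATA CRC counter specifically; if the disk is otherwise healthy, it usually points at the cable or backplane rather than the drive (the device name is just an example):

    # attribute 199 (UDMA_CRC_Error_Count) increments on SATA link CRC errors,
    # which typically indicate a bad cable rather than a failing disk
    smartctl -A /dev/sdd | grep -i -e crc -e power_on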

  • Not my server, because they fixed it, but they might have reused the cable LOL

  • dfroe Member, Host Rep

    I need to correct the network benchmark I posted here recently, where I assumed a bandwidth limitation at about 285 Mbps.

    It turned out that the limited throughput was caused by a too-small TCP window size in the Debian defaults, which couldn't max out 'long fat links'.

    I have now tweaked the TCP window like this:

    net.core.rmem_max=16777216
    net.core.wmem_max=16777216
    net.ipv4.tcp_rmem=4096 87380 16777216
    net.ipv4.tcp_wmem=4096 65536 16777216
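
    (Applied roughly like this; runtime first, then persisted via a sysctl drop-in. The file name is just an example.)

    # apply at runtime
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w 'net.ipv4.tcp_rmem=4096 87380 16777216'
    sysctl -w 'net.ipv4.tcp_wmem=4096 65536 16777216'

    # put the same four lines into /etc/sysctl.d/90-tcp-window.conf to survive reboots,
    # then reload with: sysctl --system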
    

    Afterwards, iperf shows up to 900 Mbps to Europe, which is really great.

    $ iperf3 -p $((5200+(RANDOM%10))) -c ping.online.net
    Connecting to host ping.online.net, port 5203
    [  4] local 199.255.xxx.xxx port 50402 connected to 62.210.18.40 port 5203
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  7.18 MBytes  60.2 Mbits/sec    2   1.08 MBytes
    [  4]   1.00-2.00   sec  50.0 MBytes   420 Mbits/sec    0   13.4 MBytes
    [  4]   2.00-3.00   sec   108 MBytes   902 Mbits/sec    0   13.4 MBytes
    [  4]   3.00-4.00   sec   108 MBytes   902 Mbits/sec    0   13.4 MBytes
    [  4]   4.00-5.00   sec   108 MBytes   902 Mbits/sec    0   13.4 MBytes
    [  4]   5.00-6.00   sec   108 MBytes   902 Mbits/sec    0   13.4 MBytes
    [  4]   6.00-7.00   sec   108 MBytes   901 Mbits/sec    0   13.4 MBytes
    [  4]   7.00-8.00   sec   108 MBytes   902 Mbits/sec    0   13.4 MBytes
    [  4]   8.00-9.00   sec   108 MBytes   902 Mbits/sec    0   13.4 MBytes
    [  4]   9.00-10.00  sec   109 MBytes   911 Mbits/sec    0   13.4 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   918 MBytes   770 Mbits/sec    2             sender
    [  4]   0.00-10.00  sec   917 MBytes   769 Mbits/sec                  receiver
    
    $ iperf3 -p $((5200+(RANDOM%10))) -c ping-ams1.online.net
    Connecting to host ping-ams1.online.net, port 5207
    [  4] local 199.255.xxx.xxx port 33682 connected to 163.172.208.7 port 5207
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  14.9 MBytes   125 Mbits/sec    0   3.05 MBytes
    [  4]   1.00-2.00   sec  80.0 MBytes   671 Mbits/sec    0   12.1 MBytes
    [  4]   2.00-3.00   sec  98.8 MBytes   828 Mbits/sec    0   12.1 MBytes
    [  4]   3.00-4.00   sec  97.5 MBytes   818 Mbits/sec    0   12.7 MBytes
    [  4]   4.00-5.00   sec  97.5 MBytes   818 Mbits/sec    0   12.7 MBytes
    [  4]   5.00-6.00   sec  98.8 MBytes   828 Mbits/sec    0   12.7 MBytes
    [  4]   6.00-7.00   sec  97.5 MBytes   819 Mbits/sec    0   12.7 MBytes
    [  4]   7.00-8.00   sec  98.8 MBytes   828 Mbits/sec    0   12.7 MBytes
    [  4]   8.00-9.00   sec  97.5 MBytes   818 Mbits/sec    2   12.7 MBytes
    [  4]   9.00-10.00  sec  98.8 MBytes   828 Mbits/sec    0   12.7 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   880 MBytes   738 Mbits/sec    2             sender
    [  4]   0.00-10.00  sec   878 MBytes   736 Mbits/sec                  receiver
    
    $ iperf3 -p $((5200+(RANDOM%10))) -c bouygues.iperf.fr
    Connecting to host bouygues.iperf.fr, port 5208
    [  4] local 2607:5600:xxx port 40030 connected to 2001:860:deff:1000::2 port 5208
    [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
    [  4]   0.00-1.00   sec  15.9 MBytes   133 Mbits/sec    0   3.33 MBytes
    [  4]   1.00-2.00   sec  58.8 MBytes   493 Mbits/sec    0   12.9 MBytes
    [  4]   2.00-3.00   sec  71.2 MBytes   597 Mbits/sec    0   12.9 MBytes
    [  4]   3.00-4.00   sec  71.2 MBytes   598 Mbits/sec    0   12.9 MBytes
    [  4]   4.00-5.00   sec  72.5 MBytes   608 Mbits/sec    0   12.9 MBytes
    [  4]   5.00-6.00   sec  61.2 MBytes   514 Mbits/sec    1   9.03 MBytes
    [  4]   6.00-7.00   sec  71.2 MBytes   598 Mbits/sec    0   9.03 MBytes
    [  4]   7.00-8.00   sec  71.2 MBytes   598 Mbits/sec    0   9.03 MBytes
    [  4]   8.00-9.00   sec  71.2 MBytes   598 Mbits/sec    0   9.03 MBytes
    [  4]   9.00-10.00  sec  66.2 MBytes   556 Mbits/sec    1   6.34 MBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Retr
    [  4]   0.00-10.00  sec   631 MBytes   529 Mbits/sec    2             sender
    [  4]   0.00-10.00  sec   624 MBytes   523 Mbits/sec                  receiver
    

    I have also asked their support about their null routing policy. They confirmed that they automatically place null routes when a DDoS is detected. They didn't give more details about how they define a 'DDoS attack' but mentioned that null routing shouldn't occur for regular legit traffic, even at 1 Gbps.
    So let's see how this story continues; my first impression, however, is better than expected.

  • @hacktek said:
    Not my server cause they fixed it but they might have reused the cable LOL

    My problem from yesterday was solved by support within 15 minutes of the ticket (they changed the SATA cable).
    To be honest, I think the support alone is worth more than that $25. They're just great.

  • vish Member

    @dfroe said:
    I have also asked their support about their null routing policy. They confirmed that they automatically place null routes when a DDoS is detected. They didn't give more details about how they define a 'DDoS attack' but mentioned that null routing shouldn't occur for regular legit traffic, even at 1 Gbps.

    I was null routed while restoring a backup from Google Drive via rclone. I had to use the bandwidth limit switch and set it to something like 900 Mbps. They said I was downloading at around 1.4 Gbps when the null route kicked in.
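
    For reference, the switch in question is rclone's --bwlimit, which takes a rate in bytes per second (so ~112M corresponds to roughly 900 Mbit/s); the remote and path names here are just examples:

    # cap rclone at about 112 MiB/s (~900 Mbit/s) while restoring
    rclone copy gdrive:backups /data/restore --bwlimit 112M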

  • dfroe Member, Host Rep

    @vish said:
    They said I was downloading at around 1.4 Gbps when the null route kicked in.

    Hm, weird. How could you achieve 1.4 Gbps inbound traffic on a 1 Gbps port with L4 flow-controlled TCP traffic?! It would be interesting if some MRTG graphs existed.

  • dfroe said: How could you achieve 1.4 Gbps inbound traffic

    That's how they trip you, null route you, annoy the client, and so they idle the server, rinse and repeat for profit. /s

  • TheLinuxBug Member
    edited April 2019

    dfroe said: now tweaked the TCP window like this:

    net.core.rmem_max=16777216
    net.core.wmem_max=16777216
    net.ipv4.tcp_rmem=4096 87380 16777216
    net.ipv4.tcp_wmem=4096 65536 16777216

    Yes, you are actually on the right track here; it does have to do with these settings.

    You can actually use the settings listed in this post, if you like: https://www.lowendtalk.com/discussion/25317/fixing-network-speed-in-kvm

    I use a variation of this on all my virtual and dedicated servers. For Dacentec it helps a lot with the issues people have reported. The default sysctl settings in their images are not very good for their network.

    Cheers!

  • dfroe Member, Host Rep
    edited April 2019

    @greattomeetyou said:

    @dfroe said: How could you achieve 1.4 Gbps inbound traffic

    That's how they trip you, null route you, annoy the client, and so they idle the server, rinse and repeat for profit. /s

    Hm. That might work up to a certain point, and for a short time, if you have some cheap outsourced support staff idling around. But remember it is a monthly contract. If you permanently annoy your clients, they will simply cancel the contract and move to another provider next month, resulting in zero profit. I don't want to take a specific position here; it just doesn't make sense to me. And yes, I know that not everything necessarily has to make sense. :)

  • ^ Exactly this. I was fine with the performance of the box overall for what I was using it for, but getting null routed like 3 times within 2 days and then having to artificially limit my throughput just rubbed me the wrong way. Blessing in disguise though, I got a much better deal elsewhere.

  • Milon Member

    Does Dacentec request any KYC?

  • Milon Member

    It's not possible to edit my previous message, but I have some other (maybe stupid) questions.
    1) Is a /30 of IPv4 4 IPs? And what does "Only 1 usable" mean on this page: https://billing.dacentec.com/hostbill/index.php?/cart/extras/? How is it possible that they give you a range of IPs but you can use only one?
    2) "Rent-to-own", how exactly does it work? Please explain, or tell me where I can read about this.

  • hzr Member

    Milon said: 1) Is a /30 of IPv4 4 IPs? And what does "Only 1 usable" mean on this page: https://billing.dacentec.com/hostbill/index.php?/cart/extras/? How is it possible that they give you a range of IPs but you can use only one?

    Network address, gateway, broadcast, etc.; see the breakdown below.

    Milon said: Does Dacentec request any KYC?

    If you look suspicious, yes.
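
    To spell out the /30 part (an example block, not Dacentec's actual addressing):

    # a /30 contains four addresses, but only one is left over for the server:
    #   203.0.113.0  network address
    #   203.0.113.1  gateway (the provider's router)
    #   203.0.113.2  your server -- the "only 1 usable"
    #   203.0.113.3  broadcast address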

  • KuJoe Member, Host Rep

    Milon said: 2) "Rent-to-own", how exactly does it work? Please explain, or tell me where I can read about this.

    After 12 months you can switch to colocation pricing and for a small fee they can unrack, box, and ship you the server (you provide the shipping label and coordinate pickup of the server).

  • @KuJoe said:

    Milon said: 2) "Rent-to-own", how exactly does it work? Please explain, or tell me where I can read about this.

    After 12 months you can switch to colocation pricing and for a small fee they can unrack, box, and ship you the server (you provide the shipping label and coordinate pickup of the server).

    Is switching to colo just an option, or is it possible to continue with the rental service?

    How much is colo?

  • Milon Member
    edited May 2019

    @hzr said:

    Milon said: 1) Is a /30 of IPv4 4 IPs? And what does "Only 1 usable" mean on this page: https://billing.dacentec.com/hostbill/index.php?/cart/extras/? How is it possible that they give you a range of IPs but you can use only one?

    Network address, gateway, broadcast, etc.

    I don't exactly understand this, but why do they write /30 if really only 1 IP is available on the server?

    And another question:
    Do Dacentec servers come with KVM? Is it possible to manage it during the boot process?

    Thanks.

    @optisoft said:

    Is switching to colo just an option, or is it possible to continue with the rental service?
    How much is colo?

    Seems the monthly colo cost is much more than the dedi server cost :)
