
Dacentec mini review a couple of weeks in

I got one of those 8TB servers for $25. The hard drives definitely have some hours on them. I had an issue with the second drive on the server, but support was super helpful and understanding: they helped troubleshoot and even swapped the drive twice before concluding the cable was bad and replacing it, which fixed the problem. The network is pretty good and appears to be quite reliable, but I think their "DDoS" policy is a bit too strict. They are super quick to null route your IP. It's happened to me twice while downloading data to the server (not an actual DDoS). I'm trying to tune the download rate to stay below whatever their automated system considers an attack, but I haven't found the sweet spot yet. Keep that in mind, because getting null routed all the time is no fun if you have critical systems running. I'm not running anything critical on there.

Overall I'm satisfied. The price point for the amount of storage is quite a good deal; just remember you'll probably have to throttle your ingress if you don't want to trigger their null-routing system from hell.
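
For reference, this is the kind of per-download cap I mean. wget is just an example tool here, the URL is a placeholder, and the "safe" rate is my own guesswork:

    # Cap the download at ~41 MB/s (~330 Mbps), which has stayed under
    # the automated null-route trigger for me so far.
    wget --limit-rate=41m https://example.com/big-file.tar

Most transfer tools have an equivalent knob (curl has --limit-rate, rsync has --bwlimit).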

Comments

  • feezioxiii Member, Host Rep

    Another positive vouch for @Dacentec :)

  • Positive? You have to throttle a transfer to avoid an automated nullroute

  • I pushed the ingress almost to the max for a few days when I was filling up the drives, but never had an issue with null routing. That said, Dacentec support is super awesome. You can probably raise a ticket to discuss the null-routing triggers. In my experience they are quite knowledgeable and willing to work with you to fix problems. This old hardware has some quirks of its own, but I have never received a canned reply to any of my tickets yet.

  • I'm not exactly sure what started triggering it, because I was able to hit around 500 Mbps without issue the first week. I think the system started picking me up when I added a second set of downloads coming from a different IP at the same time. Like I said, I'm tweaking it, and I don't have critical systems on the server, so I can deal with growing pains. I still believe the support and the quality of the network are worth it.

  • @hacktek said:
    I'm not exactly sure what started triggering it, because I was able to hit around 500 Mbps without issue the first week. I think the system started picking me up when I added a second set of downloads coming from a different IP at the same time. Like I said, I'm tweaking it, and I don't have critical systems on the server, so I can deal with growing pains. I still believe the support and the quality of the network are worth it.

    Are you sure it's not the pps (packets per second) rather than the transfer rate that's triggering things? I have run things at full throttle without much issue, but if you start pushing a large number of packets per second of UDP or TCP (small packets), it's very possible you're triggering something that way. I would review your interface a bit more using something like iptraf (a quick check is sketched below), look at the pps, and ask support what pps rates will trigger things and whether they're willing to change that limit for you.
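
    A quick way to sample packet rates without a full iptraf session (sysstat's sar and an interface named eth0 are assumptions on my part):

        # rxpck/s and txpck/s columns are packets per second, refreshed every second
        sar -n DEV 1

        # or read the raw counter yourself; field 3 on the eth0 line of
        # /proc/net/dev is the total received packet count
        awk '/eth0/ {print $3}' /proc/net/dev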

    my 2 cents.

    Cheers!

  • @TheLinuxBug said:
    Are you sure it's not the pps (packets per second) rather than the transfer rate that's triggering things? [...] I would review your interface a bit more using something like iptraf, look at the pps, and ask support what pps rates will trigger things and whether they're willing to change that limit for you.

    Thanks for the advice; I'll take a closer look at the interface.

  • feezioxiii Member, Host Rep

    @doughmanes said:
    Positive? You have to throttle a transfer to avoid an automated nullroute

    Besides that lol

    Have you talked with them about this, @hacktek? I believe it can be resolved, though.

  • @TheLinuxBug said:
    Are you sure it's not the pps (packets per second) rather than the transfer rate that's triggering things?

    I would be willing to bet that this is exactly the problem. In the past I had services with Dacentec, and I experienced their null routing while using rclone to move some data to a cloud storage provider. The problem, according to them, was not related to the bandwidth consumed, but rather to a high packets-per-second value.

    Unfortunately, I never truly solved the problem. I limited my bandwidth to reduce the PPS, but never figured out why so many packets were being transferred. When I tried using rclone on systems at other datacenters, I was seeing FAR lower PPS values for the same bandwidth - 10x to 50x less.

    My theory (totally untested) is that the network adapter in my Dacentec server, and possibly also the Linux drivers for it, were not fully optimized and couldn't transfer the data as efficiently as more modern hardware. Therefore, my server was using more packets to move the same amount of data when compared to systems in other datacenters.

    I never dug into this in more detail, but perhaps it could be improved by updating your network drivers.
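
    Rough arithmetic for anyone comparing numbers: with standard 1,500-byte frames, 500 Mbps works out to roughly 500e6 / (1500 × 8) ≈ 42k packets per second on the wire. Tools like iptraf count packets after the kernel's receive offloads, though, so a host with GRO enabled can report far fewer (coalesced) packets at the same wire rate, which might explain a 10x-50x gap. You can check the offload setting like this, again assuming ethtool and an eth0 interface:

        ethtool -k eth0 | grep generic-receive-offload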

  • How many network maintenance emails are you receiving per month? More than one a week?

  • So far I've gotten only one, for maintenance this past Friday, lasting about 5 or 6 hours.

  • hacktek Member
    edited March 2018

    @aj_potc said:
    I would be willing to bet that this is exactly the problem. [...] My theory (totally untested) is that the network adapter in my Dacentec server, and possibly also the Linux drivers for it, were not fully optimized and couldn't transfer the data as efficiently as more modern hardware.

    You're probably right. Old hardware = new problems :) I think their protections kick in at around 50k PPS. In my totally unscientific tests, I've seen the null routing whenever I download at around 500 to 600 Mbps. In testing on this server, 100 Mbps = around 12k PPS and 350 Mbps = around 30k PPS, so 500-600 probably puts it over 45-50k.
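
    (Sanity check on those figures: 100 Mbps at 12k PPS works out to about 100e6 / 8 / 12,000 ≈ 1,040 bytes per packet, close to a full 1,500-byte frame, so roughly linear scaling to 45-50k PPS at 500-600 Mbps is exactly what you'd expect.)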

  • @hacktek said:
    You're probably right. Old hardware = new problems :) I think their protections kick in at around 50k PPS. In my totally unscientific tests, I've seen the null routing whenever I download at around 500 to 600 Mbps. In testing on this server, 100 Mbps = around 12k PPS and 350 Mbps = around 30k PPS, so 500-600 probably puts it over 45-50k.

    Yes, my numbers were roughly consistent with that. I found that by limiting my transfers to 40 MB/sec (approx. 320 Mbit/sec), I didn't hear anything from them.

    When I let my transfers run without any limitation, Dacentec told me I was occasionally hitting 90k PPS, which they considered abuse because of the load it placed on their switches. I suppose I was peaking around 800-900 Mbit/sec. At that rate, they shut me down very quickly, within a few minutes of starting the transfer.

    I had no idea what was going on until I checked the PPS rate using iptraf and did a little research to figure out what "normal" packet rates should be given the usual size of IP packets at different transfer rates. I couldn't find any reason why the network stack would be sending packets with such a tiny payload. In any case, I wasn't interested in messing around with changing kernel parameters or exploring alternate network drivers, so I didn't pursue the issue further.

    Since then, I haven't seen such a high packet rate on any VPS or dedicated server I've used. So that's why I generally chalk it up to "old hardware". :-)
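
    Since the transfers were rclone jobs anyway, the cap was easy to enforce in the tool itself; --bwlimit takes a bandwidth figure, and the remote name below is a placeholder:

        # hold the transfer to 40 MB/s (~320 Mbit/s), which stayed under
        # Dacentec's PPS trigger in my case
        rclone copy /data remote:backup --bwlimit 40M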

  • That's pretty close to what I'm doing. I set it to about 330 Mbps and it seems to be staying below their limit.

    Anyway, glad I'm not the only one. Since I'm not running anything critical here, I can live with throttled ingress; at this price the server is still quite a good deal, throttled or not.
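
    If anyone would rather cap the whole interface than rely on per-tool flags, tc can police ingress. This is only a sketch from memory; eth0 and the 330 Mbps figure are assumptions:

        # drop anything beyond ~330 Mbps arriving on eth0
        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
            police rate 330mbit burst 1m drop flowid :1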

  • @hacktek said:
    It's happened to me twice while downloading data to the server (not an actual DDoS). I'm trying to tune the download rate to stay below whatever their automated system considers an attack, but I haven't found the sweet spot yet. Keep that in mind, because getting null routed all the time is no fun if you have critical systems running. I'm not running anything critical on there.

    Same here with VPSrus (Dacentec DC). Are you downloading with multiple connections?
    I found that if you use more connections, e.g. 10, the IP gets null routed.
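
    If the connection count is the trigger, segmented downloaders let you dial it down. aria2c is just one example, and I haven't verified any of this against their system:

        # cap segmented downloads at 2 connections per server instead of, say, 10
        aria2c --max-connection-per-server=2 https://example.com/big-file.tar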

  • I was, but I also tried fewer connections and checked the PPS with iptraf, and it didn't make much of a difference, unfortunately.
