Hetzner 10 Gigabit

Advin Member

Hello,

I'm working on getting an AX101 server in Finland and I currently have an SX63 server.

What I want to do is to be able to push full 10 Gbps from SX63 over to AX101 without being charged for the bandwidth. Both servers are in the same datacenter. Is this possible?

I saw that for both servers I can get a 10G NIC for an extra 8 euro per month. This is fine. I can also get a 10 Gigabit switch for an extra 43 euro per month. This is also fine.

So, if I were to get a 10G NIC for both servers and a 10 Gigabit switch with Hetzner, am I able to push 10 Gigabit from SX63 to AX101 without paying for extra bandwidth?

Also, I'm a little confused about the "LAN connection 10G" upgrade. What is it for? Is it necessary? Can I just use a "LAN connection 10G" instead of a switch?

https://docs.hetzner.com/robot/dedicated-server/general-information/root-server-hardware/

Thanks!


Comments

  • Let's ping @Hetzner_OL

    The LAN option appears to be a connection between servers in the same rack: https://docs.hetzner.com/robot/dedicated-server/faq/faq/

    On the same page it says:

    How can I access my other servers at Hetzner?
    The servers can communicate with each other via their public IP addresses. Traffic is internal and is thus free.

    Best to send a ticket to get confirmation in writing.

  • Advin Member

    @CyberneticTitan said:
    Let's ping @Hetzner_OL

    The LAN option appears to be a connection between servers in the same rack: https://docs.hetzner.com/robot/dedicated-server/faq/faq/

    In the same page it says:

    How can I access my other servers at Hetzner?
    The servers can communicate with each other via their public IP addresses. Traffic is internal and is thus free.

    Best to send a ticket to get confirmation in writing.

    Do you know if the switch has to be in the same rack? I'd assume so. It might be hard to get an SX63 and an AX101 in the same rack, since both have very unpredictable deployment times.

  • Kaffekopp Member, No Sales

    You won't be able to do full 10 Gbit anyway due to the spinning rust disks in your SX63

  • You can set up a vSwitch between multiple servers too, which will allow you to create a private VLAN with internal IP addresses, so you can get around any firewall issues as well. I do this when I upgrade servers with them, to sync data between the two servers before I drop the old one.

  • LTniger Member

    @KaraokeStu said:
    You can setup a vSwitch between multiple servers too, which will allow you to create a private vLAN with internal IP addresses, meaning you can get around any issues with firewalls too - I do this when I upgrade servers with them to be able to sync data between the two servers before I drop the old server

    Always complement your suggestion with a proper URL: https://docs.hetzner.com/robot/dedicated-server/network/vswitch/


  • Advin Member
    edited April 27

    @Kaffekopp said:
    You wont be able to do full 10gbit anyway due to spinning rust disks in your sx63

    Perhaps. If I were running RAID 10, I might be able to push over 1 Gbps.

  • momkin Member

    @Advin said:

    @Kaffekopp said:
    You wont be able to do full 10gbit anyway due to spinning rust disks in your sx63

    Perhaps. Maybe if I was running in RAID10 I might be able to use over 1 Gbps.

    And how so?

  • Kaffekopp Member, No Sales

    @Advin said:

    @Kaffekopp said:
    You wont be able to do full 10gbit anyway due to spinning rust disks in your sx63

    Perhaps. Maybe if I was running in RAID10 I might be able to use over 1 Gbps.

    No, you would not.

  • pierre Member
    edited April 27

    A buddy of mine has some Hetzner colos (don't do it, terrible service) and has an SX server for backups. He doesn't pay for local bandwidth; that might just be because they're in the same data center or something, but I doubt you'll get charged.

    The real question is how long it takes to transfer the data, which might be an issue. I'd open a ticket and simply ask them directly.

    As for 10-gig transfers, it should theoretically work, as you are getting 10 gig to the internet, unless they do something different on their local network.

  • @momkin said:
    And how so ?

    @Kaffekopp said:
    No, you would not.

    I don't know about you guys, but it's super easy to saturate a Gbit pipe with spinning rust doing backups, especially when they're high-density drives (16 TB in this case) in RAID 10.

  • @CyberneticTitan said:

    @momkin said:
    And how so ?

    @Kaffekopp said:
    No, you would not.

    I don't know about you guys but it's super easy to saturate a Gbit pipe with spinning rust doing backups, especially when they're high density (16TB in this case) and in RAID 10.

    They're talking about a 10Gbit link.

  • Kaffekopp Member, No Sales

    @CyberneticTitan said:

    @momkin said:
    And how so ?

    @Kaffekopp said:
    No, you would not.

    I don't know about you guys but it's super easy to saturate a Gbit pipe with spinning rust doing backups, especially when they're high density (16TB in this case) and in RAID 10.

    10 Gbit is around 1250 MB/s. How are you going to read or write at that speed with 4 HDDs? Not possible. Also, Hetzner does not allow you to add SSDs to the SX63, so 10 Gbit is worthless on that server.

    At best, maybe you can do 3 Gbit or thereabouts with those rust slabs running in RAID 10.
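
The arithmetic above can be sanity-checked with a quick back-of-envelope script. The per-disk throughput and the RAID 10 striping factors below are illustrative assumptions, not benchmarks of an SX63:

```python
# Rough throughput model for 4 HDDs in RAID 10 vs a 10 Gbit link.
# HDD_SEQ_MBS is an assumed sustained rate for a large 7200 rpm drive,
# not a measured SX63 figure.

HDD_SEQ_MBS = 200  # MB/s per disk, sequential (assumption)

def gbit_to_mbs(gbit: float) -> float:
    """Convert a line rate in Gbit/s to MB/s (decimal units)."""
    return gbit * 1000 / 8

# RAID 10 with 4 disks: reads can stripe across all 4 members,
# writes hit both mirrors, leaving 2 disks' worth of write bandwidth.
raid10_read_mbs = 4 * HDD_SEQ_MBS    # 800 MB/s ~= 6.4 Gbit
raid10_write_mbs = 2 * HDD_SEQ_MBS   # 400 MB/s ~= 3.2 Gbit

assert raid10_read_mbs > gbit_to_mbs(1)    # easily saturates 1 Gbit
assert raid10_read_mbs < gbit_to_mbs(10)   # but cannot fill 10 Gbit
print(raid10_read_mbs * 8 / 1000, raid10_write_mbs * 8 / 1000)  # 6.4 3.2
```

On the write side this lands right around the "3 Gbit or thereabouts" estimate; the read-side striping factor is optimistic and depends on the RAID layout.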

  • @Detruire said:
    They're talking about a 10Gbit link.

    Maybe it was a typo, but Advin was talking about over 1 Gbit.

    @Kaffekopp said:
    10gbit is around 1200MBPS. How Are you going to read or write that speed to 4 hdds. Not possible. Also hetzner does not allow you to add ssds to sx63, so 10gbit is worthless on that server.

    At best, maybe you can do 3gbit or around there with those rust slabs running inn raid-10

    Sure, but at Hetzner it's 1 Gbit, dual 1 Gbit, or 10 Gbit.

    I would disagree that 10 Gbit is worthless on an HDD-only server. It depends on your use case.

  • Kaffekopp Member, No Sales

    @CyberneticTitan said:

    @Detruire said:
    They're talking about a 10Gbit link.

    Maybe it was a typo but Advin was talking over 1 Gbit.

    @Kaffekopp said:
    10gbit is around 1200MBPS. How Are you going to read or write that speed to 4 hdds. Not possible. Also hetzner does not allow you to add ssds to sx63, so 10gbit is worthless on that server.

    At best, maybe you can do 3gbit or around there with those rust slabs running inn raid-10

    Sure, but at Hetzner it's 1 Gbit, dual 1 Gbit, or 10 Gbit.

    I would disagree saying 10 Gbit is worthless on a HDD only server. Depends on your use case.

    Your speed will be limited to your disk speed anyway. How is 10 Gbit useful when you will never be able to read or write data at that speed?

  • Falzo Member

    @Kaffekopp said:

    Your speed will be limited to your disk speed anyway. How is 10 Gbit useful when you will never be able to read or write data at that speed?

    There could be use cases with a lot of operations going on that don't involve accessing files at all, just using cache or even happening completely in memory ;-)

    So if you consider more than a constant stream, I'd say it's highly likely to burst to 10 Gbit easily with certain workloads.


  • graphic Member

    As far as I know, if you want to connect two servers physically, they have to be in the same DC or rack.
    Hetzner will move your server for you if needed, but they charge for it.


  • Try another provider; don't go with Hetzner. We had a server from them to keep incremental backups for a private server. After 6 months of using the server, we got this notice from them, even though the server is completely private and we keep only some backups on it:

    Thank you for having chosen Hetzner Online GmbH as your webhosting
    partner. We appreciate the trust you have placed in us.

    Unfortunately, there are a number of issues with your account. This
    includes, but is not limited to, abuse complaints for your servers, and
    poor reputation of your IPs.

    As a result of this, your account K1XXXXXXX and all services you have
    with us are going to be cancelled.

  • aj_potc Member

    @Kaffekopp said:

    Your speed will be limited to your disk speed anyway. How is 10 Gbit useful when you will never be able to read or write data at that speed?

    As others have mentioned, a 1 Gbit (~125 MB/s) link is extremely easy to saturate with RAID 10 storage, and 10 Gbit is usually the next available increment for network speed.

    My systems have no trouble with sequential reading or writing at 300 MB/s, and that's on MD RAID, not using the newest drives available. So I can easily understand demand for 10 Gbit network ports.
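
Putting numbers on that point: the 300 MB/s figure is from the post above, and the rest is simple unit conversion against the two port sizes on offer.

```python
# Compare a ~300 MB/s RAID array (figure quoted above) against
# Hetzner's 1 Gbit and 10 Gbit port options.

ARRAY_MBS = 300  # sustained sequential rate from the post above

def port_mbs(gbit: float) -> float:
    """Raw capacity of a network port in MB/s (decimal units)."""
    return gbit * 1000 / 8

assert ARRAY_MBS > port_mbs(1)    # 125 MB/s: the array overruns 1 Gbit
assert ARRAY_MBS < port_mbs(10)   # 1250 MB/s: but cannot fill 10 Gbit

print(round(ARRAY_MBS / port_mbs(1), 1))   # 2.4x: 1 Gbit is the bottleneck
print(round(ARRAY_MBS / port_mbs(10), 2))  # 0.24: 10 Gbit has headroom
```

The array overruns a 1 Gbit port more than twofold but fills only about a quarter of a 10 Gbit one, which is exactly the "next logical increment" argument.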

  • William Member, Provider
    edited April 28

    @Kaffekopp said: You wont be able to do full 10gbit anyway due to spinning rust disks in your sx63

    Ah, the expert here has never heard of RAM or run any storage, I see.
    Please shut the fuck up if you have no idea.

    Older YouTube caches happily do 2x10G from "spinning rust" with a few SSDs as cache and 64 GB RAM.

    A single HDD will burst sequential reads (AND writes) above 1 Gbit.

    A RAID 1 allows double the sequential read rate, thus 2 Gbit+.

    A RAM disk on DDR4 allows 30-45 GByte/s.

    A GPU on PCIe x16 with GDDR5 will do 300 GByte/s.

    A GPU on PCIe x16 with HBM (Fury, Quadro, VII) will do 900 GByte/s (HBM2: 1500 GByte/s).

  • Zerpy Member

    @William said:

    A GPU on PCIe x16 with GDDR5 will do 300Gbyte/s

    A GPU on PCIe x16 with HBM (Fury, Quadro, VII) will do 900Gbyte/s (HBM2: 1500Gbyte/s)

    An x16 PCIe Gen 4 link peaks out at 32 gigabytes/s, not 300. Even PCIe Gen 6 will "only" be 128 gigabytes/s.

    I'm not sure how you'd push more than 32 gigabytes each way through the GPU over the motherboard when the maximum link speed of PCIe doesn't allow it.

    Still more than 10 Gbps, obviously ^_^
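
These ceilings can be reproduced from the PCIe per-lane signaling rates. A rough sketch (the Gen 6 efficiency factor is simplified; FLIT-mode overhead is ignored):

```python
# Approximate one-direction bandwidth of an x16 PCIe link from the
# per-lane transfer rate. Gen 3-5 use 128b/130b encoding; Gen 6 is
# treated as ~100% efficient here, which ignores FLIT overhead.

ENC_128B_130B = 128 / 130

def x16_gbs(gt_per_s: float, efficiency: float = ENC_128B_130B) -> float:
    """x16 link bandwidth in GB/s; gt_per_s is GT/s per lane (1 bit/transfer)."""
    return gt_per_s * efficiency * 16 / 8

print(round(x16_gbs(16), 1))    # Gen 4: 31.5, i.e. "peaks out at 32"
print(round(x16_gbs(32), 1))    # Gen 5: 63.0
print(round(x16_gbs(64, 1.0)))  # Gen 6: 128
```

The 300+ GB/s figures are GDDR5/HBM memory bandwidth on the card itself; across the PCIe slot, these link ceilings are the limit, which is the point being made here.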

  • William Member, Provider
    edited April 28

    @Zerpy said: 16x PCIe gen 4 peaks out at 32 gigabyte/s, not 300. Even PCIe gen 6 will "only" be 128 gigabyte/s.

    Generally right, BUT you forget that PCIe switches do not cross the PCH/CPU, and that NVLink also exists.

  • Kaffekopp Member, No Sales
    edited April 28

    @William said:

    @Kaffekopp said: You wont be able to do full 10gbit anyway due to spinning rust disks in your sx63

    ah, the expert here has never heard of RAM or ran any storage, i see.
    Please shut the fuck up if you have no idea.

    Older Youtube caches happily do 2x10G from "spinning rust" with few SSDs as cache and 64GB RAM.

    A single HDD will burst seq read (AND WRITE) above 1Gbit.

    A RAID1 allows double the seq read, thus 2Gbit+

    A Ramdisk on DDR4 allows 30-45Gbyte/s

    A GPU on PCIe x16 with GDDR5 will do 300Gbyte/s

    A GPU on PCIe x16 with HBM (Fury, Quadro, VII) will do 900Gbyte/s (HBM2: 1500Gbyte/s)

    The topic is about an SX63, and you cannot add disks to that server. I know about all the things you mention, and I still stand by what I said. You can do 3, maybe 4 Gbit on the HDDs depending on how you set them up. Of course you can run a RAM disk, but for large transfers you will be bottlenecked anyway.

    Please read the topic before you launch an attitude like that at people trying to help.

  • William Member, Provider
    edited April 28

    @Kaffekopp said: I know about all the things you mention, and i still stand by what i Said

    No, you don't. You are the guy who said a single HDD cannot do a Gbit. I have proven you wrong, as have others.

  • William Member, Provider

    @Kaffekopp said: Ofc you can run a ramdisk, but for large transfers you Will be bottlenecked anyway.

    64 GB RAM in an SX63 (and don't tell me how Hetzner works; my ex and I were the largest SX121 customers for years): 4 GB for the OS, 60 GB for a RAM disk as ZFS read cache.

    Bottleneck is where?

  • Kaffekopp Member, No Sales

    @William said:

    @Kaffekopp said: I know about all the things you mention, and i still stand by what i Said

    No, you don't. You are the guy that said a single HDD cannot do Gbit. I have proven you wrong as have others.

    I never said a single HDD cannot do 1 Gbit. I said 4 HDDs can't do full 10 Gbit. Get yourself some coffee and relax the attitude.

  • Kaffekopp Member, No Sales
    edited April 28

    @William said:

    @Kaffekopp said: Ofc you can run a ramdisk, but for large transfers you Will be bottlenecked anyway.

    64GB RAM in a SX63 (and don't tell me, how Hetzner works, i and my ex where the largest SX121 customers for years) - 4GB for OS, 60GB for ramdisk as ZFS read cache.

    Bottleneck is where?

    The bottleneck comes once your cache is full. Also, he wants to read at 10 Gbit, not write. Cache will help, but once the cache is drained you are again limited to the HDDs. Cache is also only useful for reads if the data is already there.

    So IF the maximum 60 GB of data you want to pull is already loaded in cache, yes, you can do 10 Gbit or more. But when you are running 4x16 TB disks, I assume you will use more than 60 GB. So again, limited to the HDDs.

    I don't care who you are or what servers you run. You cannot do 10 Gbit reads on 4 HDDs if the transfer is larger than the cache and the data is not already in it.
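
The cache argument above can be modeled directly: serve the cached head of a transfer at line rate and the remainder at the array's rate, then look at the average. The 500 MB/s disk figure is an assumption for illustration:

```python
# Average throughput of a large read when the first cache_gb come from a
# RAM cache at 10 Gbit line rate and the rest from the HDD array.
# DISK_MBS is an assumed sustained array rate, not a measurement.

CACHE_MBS = 1250   # 10 Gbit line rate in MB/s
DISK_MBS = 500     # assumed HDD array rate (assumption)

def avg_mbs(total_gb: float, cache_gb: float) -> float:
    """Average MB/s over a total_gb transfer with cache_gb cached up front."""
    cached_mb = min(total_gb, cache_gb) * 1000
    disk_mb = max(total_gb - cache_gb, 0) * 1000
    seconds = cached_mb / CACHE_MBS + disk_mb / DISK_MBS
    return (cached_mb + disk_mb) / seconds

print(avg_mbs(60, 60))           # 1250.0: fits in cache, full line rate
print(round(avg_mbs(1000, 60)))  # 519: a 1 TB pull barely notices the cache
```

Which captures both sides of the thread: bursts up to the cache size run at 10 Gbit, while sustained transfers much larger than 60 GB settle near the disks' own rate.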

  • William Member, Provider

    @Kaffekopp said: No, you would not.

    Here you state that a RAID 10 will not even do 1 Gbit.

    Get your memory right.

  • Kaffekopp Member, No Sales
    edited April 28

    @William said:

    @Kaffekopp said: No, you would not.

    Here you state a RAID10 will not even do 1Gbit.

    Get your memory right.

    I was replying to "over 1 Gbit". Yes, you can do over 1 Gbit here, but not 10 Gbit, at least not for large transfers or if the data is not already in cache. Now please stop the attitude, Mr. Smartypants.

  • William Member, Provider

    @Kaffekopp said: Bottleneck comes once ur cache is full. Also he wants to read at 10gbit, not write. Cache will help, but once cache is transferred you Are again limited to the hdds. Cache is also only useful for reads if data is already there.

    You obviously fill the cache while it is being read. We did WAY more than 10G with a single HDD (2 TB, 7200 rpm enterprise WD) and 2 SSDs (128 GB SATA 6 Gbit, OCZ) before: 26 Gbit on a 40G port with FDC/Cogent in Zlin.

    Most use cases (in mine it is streaming to N end users) do not require random reads and are very predictable.

    Unlike you, I was and am running such services.

  • Kaffekopp Member, No Sales
    edited April 28

    @William said:

    @Kaffekopp said: Bottleneck comes once ur cache is full. Also he wants to read at 10gbit, not write. Cache will help, but once cache is transferred you Are again limited to the hdds. Cache is also only useful for reads if data is already there.

    You obviously fill the cache while it is read. We did WAY more than 10G with a single HDD (2TB, 7200rpm enterprise WD) and 2 SSDs (128GB SATA 6Gbit, OCZ) before. 26Gbit on a 40G port with FDC/Cogent in Zlin.

    Most use cases, in mine it is streaming to N endusers, do not require random read and are very predictable.

    Unlike you i was and am running such services.

    Not going to argue with you any more; it's not worth the time to argue with a brick wall. You also have no idea what I run or what experience I have, so please stop. You are not the top of the world.

    You cannot do full 10 Gbit as the OP asked for on this server, at least not for large transfers. You can do it for a short while, as long as the data you pull first is already in cache.

    I'm done. Not dealing with Mr. Imbetterthanyoubecauseirunsxservers anymore.

  • Tr33n Member
    edited April 28

    @Kaffekopp

    Of course it is possible to reach 10 Gbit network utilization, even with only one HDD. It just depends on what the server is doing. What if he uses ramfs? What if he doesn't need the disk I/O at all? This is purely dependent on the application.

    Don't make any statements if you don't even know what the guy wants to do with the server. Maybe he wants the 10 Gbit uplink to UDP flood his own servers, where he clearly doesn't depend on disk I/O. You have no clue.

    Sorry, your stupidity upsets me.

  • Kaffekopp Member, No Sales

    @Tr33n said:
    @Kaffekopp

    Of course it is possible to reach 10 Gbit network utilization, even with only one HDD. It just depends on what the server is doing. What if he use RAMFS? What if he don't need the disk I/O at all. This is purely dependent on the application.

    Don't make any statements if you don't even know what the guy wants to do with the server. Maybe he wants the 10 Gbit uplink to UDP flood his own servers, where he clearly doesn't depend on the disk I/O. You have no clue.

    Sorry, your stupidity upsets me.

    I'm out. If you read what I say, I'm trying to say that large file transfers will not sustain 10 Gbit over a longer time if the data is not in cache. If the purpose of a 4x16 TB server is not serving files, then you can use RAM or whatever, so you don't depend on the disks.

    People here can't read. I'm sorry.

  • Tr33n Member
    edited April 28

    You are the one who started the nonsensical discussion without knowing any facts. I quote:

    @Kaffekopp said: You wont be able to do full 10gbit anyway due to spinning rust disks in your sx63

    Here you clearly said that he can't reach 10 Gbit network traffic with his server. And that is wrong.

    You should have said that if he wants to read/write data from the disks, he can't take full advantage of the 10 Gbit, but even that is only true to a limited extent, as recently used files may be stored in the cache.

    Well, even with the "spinning rust disks" he could reach 10 Gbit, e.g. by using the RAM as a caching device. Of course, this should only be done if the data is unimportant, but you can't rule such setups out if you don't know exactly what the guy wants to do.

    You just talked shit and won't even acknowledge it. Shame on you.

  • aj_potc Member

    @Kaffekopp said:

    People here can't read. I'm sorry.

    Yes we can, and that's why your comment triggered so many of us.

    You claimed that there's no need for a 10 Gbit port because the storage on these systems can't saturate it. Even if that were true, it's a silly statement.

    It doesn't matter whether a 10 Gbit transfer rate can be achieved over a long period. The point is that 1 Gbit is not nearly enough, and 10 Gbit is the next logical upgrade.
