
Hetzner 10 Gigabit

AdvinAdvin Member, Patron Provider

Hello,

I'm working on getting an AX101 server in Finland and I currently have an SX63 server.

What I want to do is to be able to push full 10 Gbps from SX63 over to AX101 without being charged for the bandwidth. Both servers are in the same datacenter. Is this possible?

I saw that for both servers I can get a 10G NIC for an extra 8 euro per month. This is fine. I can also get a 10 Gigabit switch for an extra 43 euro per month. This is also fine.

So, if I were to get a 10G NIC for both servers and a 10 Gigabit switch with Hetzner, am I able to push 10 Gigabit from SX63 to AX101 without paying for extra bandwidth?

Also, I'm a little confused on the "LAN connection 10G" upgrade. What is it for? Is it necessary? Can I just use a "LAN connection 10G" instead of a switch?

https://docs.hetzner.com/robot/dedicated-server/general-information/root-server-hardware/

Thanks!
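(For whoever tries this setup: once both 10G NICs are in place, the usual way to confirm the link actually sustains 10 Gbit is an iperf3 run between the two servers. A sketch; the address below is a placeholder for the AX101's IP:)

```shell
# On the receiving server (e.g. the AX101), start iperf3 in server mode:
iperf3 -s

# On the sending server (the SX63), run a 30-second test with 4 parallel
# streams; 203.0.113.10 is a placeholder for the AX101's address:
iperf3 -c 203.0.113.10 -P 4 -t 30
```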


Comments

  • Let's ping @Hetzner_OL

    The LAN option appears to be a connection between servers in the same rack: https://docs.hetzner.com/robot/dedicated-server/faq/faq/

    In the same page it says:

    How can I access my other servers at Hetzner?
    The servers can communicate with each other via their public IP addresses. Traffic is internal and is thus free.

    Best to send a ticket to get confirmation in writing.

  • AdvinAdvin Member, Patron Provider

    @CyberneticTitan said:
    Let's ping @Hetzner_OL

    The LAN option appears to be a connection between servers in the same rack: https://docs.hetzner.com/robot/dedicated-server/faq/faq/

    In the same page it says:

    How can I access my other servers at Hetzner?
    The servers can communicate with each other via their public IP addresses. Traffic is internal and is thus free.

    Best to send a ticket to get confirmation in writing.

    Do you know if the switch has to be in the same rack? I'd assume so. It might be hard to get an SX63 and an AX101 in the same rack, since both have very unpredictable deployment times.

  • You won't be able to do full 10 Gbit anyway due to the spinning rust disks in your SX63.

  • You can set up a vSwitch between multiple servers too, which will allow you to create a private VLAN with internal IP addresses. This means you can get around any issues with firewalls too. I do this when I upgrade servers with them, to sync data between the two servers before I drop the old server.

  • LeviLevi Member

    @KaraokeStu said:
    You can set up a vSwitch between multiple servers too, which will allow you to create a private VLAN with internal IP addresses. This means you can get around any issues with firewalls too. I do this when I upgrade servers with them, to sync data between the two servers before I drop the old server.

    Always complement your suggestion with a proper URL. https://docs.hetzner.com/robot/dedicated-server/network/vswitch/
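(Per those docs, the vSwitch shows up as a tagged VLAN on the server's uplink, with VLAN IDs in the 4000-4091 range and a required MTU of 1400. A minimal sketch on Linux, assuming the NIC is named enp0s31f6 and the vSwitch was created with VLAN ID 4000; the subnet is a placeholder:)

```shell
# Create the VLAN interface on top of the physical NIC
ip link add link enp0s31f6 name enp0s31f6.4000 type vlan id 4000
# Hetzner's vSwitch requires an MTU of 1400 or lower
ip link set enp0s31f6.4000 mtu 1400
ip link set enp0s31f6.4000 up
# Assign a private address; 192.0.2.1/24 is a placeholder subnet
ip addr add 192.0.2.1/24 dev enp0s31f6.4000
```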

  • AdvinAdvin Member, Patron Provider
    edited April 2021

    @Kaffekopp said:
    You won't be able to do full 10 Gbit anyway due to the spinning rust disks in your SX63.

    Perhaps. Maybe if I were running RAID 10 I might be able to push over 1 Gbps.

  • @Advin said:

    @Kaffekopp said:
    You won't be able to do full 10 Gbit anyway due to the spinning rust disks in your SX63.

    Perhaps. Maybe if I were running RAID 10 I might be able to push over 1 Gbps.

    And how so?

  • @Advin said:

    @Kaffekopp said:
    You won't be able to do full 10 Gbit anyway due to the spinning rust disks in your SX63.

    Perhaps. Maybe if I were running RAID 10 I might be able to push over 1 Gbps.

    No, you would not.

  • pierrepierre Member
    edited April 2021

    A buddy of mine has some Hetzner colos (don't do it, terrible service) and has an SX server for backups. He doesn't pay for local bandwidth; it might just be because they're in the same data center or something, but I doubt you'll get charged.

    It's mostly a question of how long it takes to transfer the data, which might be an issue. I'd open a ticket and simply ask them directly.

    But regarding 10 Gig transfers, it should theoretically work, as you are getting 10 Gig to the internet, unless they do something different on their local network.
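(For a sense of scale on the transfer time: a rough calculation, using 16 TB as an example volume only because it matches the SX63's per-drive size:)

```python
# Rough transfer-time estimate for a bulk copy at a sustained link speed.
def transfer_hours(tb: float, gbit_per_s: float) -> float:
    bits = tb * 1e12 * 8                  # terabytes -> bits
    return bits / (gbit_per_s * 1e9) / 3600

print(round(transfer_hours(16, 1), 1))    # 35.6 -> roughly 1.5 days at 1 Gbit/s
print(round(transfer_hours(16, 10), 1))   # 3.6 hours at 10 Gbit/s
```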

  • @momkin said:
    And how so?

    @Kaffekopp said:
    No, you would not.

    I don't know about you guys, but it's super easy to saturate a Gbit pipe with spinning rust doing backups, especially when the drives are high density (16 TB in this case) and in RAID 10.

  • @CyberneticTitan said:

    @momkin said:
    And how so?

    @Kaffekopp said:
    No, you would not.

    I don't know about you guys, but it's super easy to saturate a Gbit pipe with spinning rust doing backups, especially when the drives are high density (16 TB in this case) and in RAID 10.

    They're talking about a 10Gbit link.

  • @CyberneticTitan said:

    @momkin said:
    And how so?

    @Kaffekopp said:
    No, you would not.

    I don't know about you guys, but it's super easy to saturate a Gbit pipe with spinning rust doing backups, especially when the drives are high density (16 TB in this case) and in RAID 10.

    10 Gbit is around 1250 MB/s. How are you going to read or write at that speed to 4 HDDs? Not possible. Also, Hetzner does not allow you to add SSDs to the SX63, so 10 Gbit is worthless on that server.

    At best, maybe you can do 3 Gbit or thereabouts with those rust slabs running in RAID 10.
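(The arithmetic behind the disagreement is easy to lay out. This sketch assumes ~250 MB/s sequential per drive, which is a guess for modern high-density HDDs, not a measured figure:)

```python
# Back-of-the-envelope: can 4 HDDs in RAID 10 feed a 10 Gbit link?
link_mb_s = 10 * 1000 / 8            # 10 Gbit/s = 1250 MB/s
hdd_mb_s = 250                       # assumed sequential rate per drive

raid10_read_mb_s = 4 * hdd_mb_s      # reads can be striped across all 4 drives
raid10_write_mb_s = 2 * hdd_mb_s     # writes land on both mirror halves

print(link_mb_s)                     # 1250.0
print(raid10_read_mb_s * 8 / 1000)   # 8.0 -> ~8 Gbit/s sequential read ceiling
print(raid10_write_mb_s * 8 / 1000)  # 4.0 -> ~4 Gbit/s sequential write ceiling
```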

  • @Detruire said:
    They're talking about a 10Gbit link.

    Maybe it was a typo, but Advin was talking about over 1 Gbit.

    @Kaffekopp said:
    10 Gbit is around 1250 MB/s. How are you going to read or write at that speed to 4 HDDs? Not possible. Also, Hetzner does not allow you to add SSDs to the SX63, so 10 Gbit is worthless on that server.

    At best, maybe you can do 3 Gbit or thereabouts with those rust slabs running in RAID 10.

    Sure, but at Hetzner it's 1 Gbit, dual 1 Gbit, or 10 Gbit.

    I would disagree that 10 Gbit is worthless on an HDD-only server. It depends on your use case.

  • @CyberneticTitan said:

    @Detruire said:
    They're talking about a 10Gbit link.

    Maybe it was a typo, but Advin was talking about over 1 Gbit.

    @Kaffekopp said:
    10 Gbit is around 1250 MB/s. How are you going to read or write at that speed to 4 HDDs? Not possible. Also, Hetzner does not allow you to add SSDs to the SX63, so 10 Gbit is worthless on that server.

    At best, maybe you can do 3 Gbit or thereabouts with those rust slabs running in RAID 10.

    Sure, but at Hetzner it's 1 Gbit, dual 1 Gbit, or 10 Gbit.

    I would disagree that 10 Gbit is worthless on an HDD-only server. It depends on your use case.

    Your speed will be limited to your disk speed anyway. How is 10 Gbit useful when you will never be able to read or write data at that speed?

  • FalzoFalzo Member

    @Kaffekopp said:

    Your speed will be limited to your disk speed anyway. How is 10 Gbit useful when you will never be able to read or write data at that speed?

    there could be use cases with a lot of operations going on, apart from accessing files: just using cache, or even happening completely in memory ;-)

    so if you consider more than just a constant stream, I'd say certain workloads can easily burst to 10 Gbit.

  • As far as I know, if you want to connect two servers physically, they have to be in the same DC or rack.
    Hetzner will move your server for you if needed, but they charge for it.

  • CristianDCristianD Member, Host Rep

    Try another provider; don't go with Hetzner. We had a server from them to keep incremental backups for a private server. After 6 months of using it we got this notice from them, even though the server is completely private and we keep only some backups on it:

    Thank you for having chosen Hetzner Online GmbH as your webhosting
    partner. We appreciate the trust you have placed in us.

    Unfortunately, there are a number of issues with your account. This
    includes, but is not limited to, abuse complaints for your servers, and
    poor reputation of your IPs.

    As a result of this, your account K1XXXXXXX and all services you have
    with us are going to be cancelled.

  • @Kaffekopp said:

    Your speed will be limited to your disk speed anyway. How is 10 Gbit useful when you will never be able to read or write data at that speed?

    As others have mentioned, a 1 Gbit (~125 MB/s) link is extremely easy to saturate with RAID 10 storage, and 10 Gbit is usually the next available increment for network speed.

    My systems have no trouble with sequential reading or writing at 300 MB/sec -- and that's on MD RAID and not using the newest drives available. So I can easily understand demand for 10Gbit network ports.

  • WilliamWilliam Member
    edited April 2021

    @Kaffekopp said: You won't be able to do full 10 Gbit anyway due to the spinning rust disks in your SX63.

    ah, the expert here has never heard of RAM or run any storage, I see.
    Please shut the fuck up if you have no idea.

    Older Youtube caches happily do 2x10G from "spinning rust" with a few SSDs as cache and 64GB RAM.

    A single HDD will burst seq read (AND WRITE) above 1 Gbit.

    A RAID 1 allows double the seq read, thus 2 Gbit+.

    A ramdisk on DDR4 allows 30-45 GB/s.

    A GPU on PCIe x16 with GDDR5 will do 300 GB/s.

    A GPU on PCIe x16 with HBM (Fury, Quadro, VII) will do 900 GB/s (HBM2: 1500 GB/s).

  • ZerpyZerpy Member

    @William said:

    A GPU on PCIe x16 with GDDR5 will do 300Gbyte/s

    A GPU on PCIe x16 with HBM (Fury, Quadro, VII) will do 900Gbyte/s (HBM2: 1500Gbyte/s)

    16x PCIe gen 4 tops out at 32 gigabytes/s, not 300. Even PCIe gen 6 will "only" be 128 gigabytes/s.

    I'm not sure how you'd push more than 32 gigabytes each way on the GPU through the motherboard when the maximum link speed of PCIe doesn't allow it.

    Still more than 10 Gbps, obviously ^_^
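(Those ceilings can be sanity-checked from the per-lane signalling rates: gen 4 runs 16 GT/s per lane with 128b/130b encoding; the gen 6 line below assumes encoding overhead is negligible, which is a simplification:)

```python
# Theoretical one-direction bandwidth of a PCIe x16 link.
def pcie_x16_gb_s(gt_per_s: float, encoding: float) -> float:
    # GT/s * payload fraction = usable Gbit/s per lane; x16 lanes, /8 -> GB/s
    return gt_per_s * encoding * 16 / 8

print(round(pcie_x16_gb_s(16.0, 128 / 130), 1))  # 31.5 -> gen 4, ~32 GB/s
print(pcie_x16_gb_s(64.0, 1.0))                  # 128.0 -> gen 6
```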

  • WilliamWilliam Member
    edited April 2021

    @Zerpy said: 16x PCIe gen 4 tops out at 32 gigabytes/s, not 300. Even PCIe gen 6 will "only" be 128 gigabytes/s.

    Generally right, BUT you forget that PCIe switches do not cross the PCH/CPU, and that NVLink also exists.

  • KaffekoppKaffekopp Barred
    edited April 2021

    @William said:

    @Kaffekopp said: You won't be able to do full 10 Gbit anyway due to the spinning rust disks in your SX63.

    ah, the expert here has never heard of RAM or run any storage, I see.
    Please shut the fuck up if you have no idea.

    Older Youtube caches happily do 2x10G from "spinning rust" with a few SSDs as cache and 64GB RAM.

    A single HDD will burst seq read (AND WRITE) above 1 Gbit.

    A RAID 1 allows double the seq read, thus 2 Gbit+.

    A ramdisk on DDR4 allows 30-45 GB/s.

    A GPU on PCIe x16 with GDDR5 will do 300 GB/s.

    A GPU on PCIe x16 with HBM (Fury, Quadro, VII) will do 900 GB/s (HBM2: 1500 GB/s).

    The topic is about an SX63, and you cannot add disks to that server. I know about all the things you mention, and I still stand by what I said. You can do 3, maybe 4 Gbit on the HDDs depending on how you set them up. Of course you can run a ramdisk, but for large transfers you will be bottlenecked anyway.

    Please read the topic before you launch an attitude like that on people trying to help.

  • WilliamWilliam Member
    edited April 2021

    @Kaffekopp said: I know about all the things you mention, and I still stand by what I said

    No, you don't. You are the guy who said a single HDD cannot do 1 Gbit. I have proven you wrong, as have others.

  • @Kaffekopp said: Of course you can run a ramdisk, but for large transfers you will be bottlenecked anyway.

    64 GB RAM in an SX63 (and don't tell me how Hetzner works; my ex and I were the largest SX121 customers for years) - 4 GB for the OS, 60 GB for a ramdisk as ZFS read cache.

    Bottleneck is where?
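(As an aside: on a stock ZFS-on-Linux setup, the usual way to dedicate that much RAM to read caching is capping the ARC rather than building a literal ramdisk. A sketch, using the OpenZFS module parameter; treat the paths as assumptions about a standard ZFS install:)

```shell
# Cap the ZFS ARC at 60 GiB (value in bytes); persists across reboots
echo "options zfs zfs_arc_max=$((60 * 1024 ** 3))" > /etc/modprobe.d/zfs.conf
# Apply immediately without a reboot
echo $((60 * 1024 ** 3)) > /sys/module/zfs/parameters/zfs_arc_max
```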

  • @William said:

    @Kaffekopp said: I know about all the things you mention, and I still stand by what I said

    No, you don't. You are the guy who said a single HDD cannot do 1 Gbit. I have proven you wrong, as have others.

    I never said a single HDD cannot do 1 Gbit. I said 4 HDDs can't do full 10 Gbit. Get yourself some coffee and relax the attitude.

  • KaffekoppKaffekopp Barred
    edited April 2021

    @William said:

    @Kaffekopp said: Of course you can run a ramdisk, but for large transfers you will be bottlenecked anyway.

    64 GB RAM in an SX63 (and don't tell me how Hetzner works; my ex and I were the largest SX121 customers for years) - 4 GB for the OS, 60 GB for a ramdisk as ZFS read cache.

    Bottleneck is where?

    The bottleneck comes once your cache is full. Also, he wants to read at 10 Gbit, not write. Cache will help, but once the cache is transferred you are again limited to the HDDs. Cache is also only useful for reads if the data is already there.

    So IF the maximum 60 GB of data you want to pull is already loaded in cache, yes, you can do 10 Gbit or more. But when you are running 4x16 TB disks, I assume you will use more than 60 GB. So again, limited to the HDDs.

    I don't care who you are or what servers you run. You cannot do 10 Gbit reads on 4 HDDs if the transfer is larger than the cache, even if some of the data is already in it.

  • @Kaffekopp said: No, you would not.

    Here you state a RAID 10 will not even do 1 Gbit.

    Get your memory right.

  • KaffekoppKaffekopp Barred
    edited April 2021

    @William said:

    @Kaffekopp said: No, you would not.

    Here you state a RAID 10 will not even do 1 Gbit.

    Get your memory right.

    I quoted "over 1 Gbit". Yes, you can do over 1 Gbit here, but not 10 Gbit, at least not for large transfers or if the data is not already in cache. Now please stop the attitude, Mr Smartypants.

  • @Kaffekopp said: The bottleneck comes once your cache is full. Also, he wants to read at 10 Gbit, not write. Cache will help, but once the cache is transferred you are again limited to the HDDs. Cache is also only useful for reads if the data is already there.

    You obviously fill the cache while it is read. We did WAY more than 10G with a single HDD (2TB, 7200rpm enterprise WD) and 2 SSDs (128GB SATA 6Gbit, OCZ) before: 26 Gbit on a 40G port with FDC/Cogent in Zlin.

    Most use cases (in mine it is streaming to N end users) do not require random reads and are very predictable.

    Unlike you, I was and am running such services.

  • KaffekoppKaffekopp Barred
    edited April 2021

    @William said:

    @Kaffekopp said: The bottleneck comes once your cache is full. Also, he wants to read at 10 Gbit, not write. Cache will help, but once the cache is transferred you are again limited to the HDDs. Cache is also only useful for reads if the data is already there.

    You obviously fill the cache while it is read. We did WAY more than 10G with a single HDD (2TB, 7200rpm enterprise WD) and 2 SSDs (128GB SATA 6Gbit, OCZ) before: 26 Gbit on a 40G port with FDC/Cogent in Zlin.

    Most use cases (in mine it is streaming to N end users) do not require random reads and are very predictable.

    Unlike you, I was and am running such services.

    Not going to argue with you any more; it's not worth the time to argue with a brick wall. You also have no idea what I run or what experience I have, so please stop. You are not the top of the world.

    You cannot do full 10 Gbit as the OP asked for on this server, at least not for large transfers. You can do it for a short while, as long as the data you pull first is already in cache.

    I'm done. Not dealing with Mr imbetterthanyoubecauseirunsxservers anymore.
