New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
@Kaffekopp
Of course it is possible to reach 10 Gbit network utilization, even with only one HDD. It just depends on what the server is doing. What if he uses a RAMFS? What if he doesn't need the disk I/O at all? This is purely dependent on the application.
Don't make any statements if you don't even know what the guy wants to do with the server. Maybe he wants the 10 Gbit uplink to UDP flood his own servers, where he clearly doesn't depend on the disk I/O. You have no clue.
Sorry, your stupidity upsets me.
I'm out. If you read what I say, I'm trying to say that large file transfers will not sustain 10 Gbit over a longer time if the data is not in cache. If the purpose of a 4x16TB server is not serving files, then you can use RAM or whatever so you don't depend on the disks.
People here can't read. I'm sorry.
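For rough numbers, here's a back-of-the-envelope sketch. The ~250 MB/s sequential rate per disk is an assumed typical figure for a 7200 rpm drive, not something measured on this box:

```shell
# back-of-the-envelope: can 4 striped HDDs fill a 10 Gbit pipe?
# per_disk_mbs is an assumed typical sequential rate, not a measurement
awk 'BEGIN {
  disks = 4; per_disk_mbs = 250
  agg_gbit = disks * per_disk_mbs * 8 / 1000
  printf "aggregate: %.1f Gbit/s vs 10 Gbit/s link\n", agg_gbit
}'
# prints: aggregate: 8.0 Gbit/s vs 10 Gbit/s link
```

So even under ideal sequential conditions, four spinning disks land just short of line rate, and real-world random I/O would be far below that.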
You are the one who started the nonsensical discussion without knowing any facts. I quote:
Here you clearly said that he can't reach 10 Gbit network traffic with his server. And that is wrong.
You should have said that if he wants to read/write data from the disks, he can't take full advantage of the 10 Gbit, but even that is only true to a limited extent, as recently used files may be stored in the cache.
Well, even with the "spinning rust disks" he could reach 10 Gbit, e.g. by using the RAM as a caching device. Of course, this should only be done if the data is unimportant, but you can't rule such setups out if you don't know exactly what the guy wants to do.
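A minimal sketch of that idea on Linux, staging a file in RAM-backed tmpfs so reads bypass the disks entirely (the path and file size here are illustrative):

```shell
# /dev/shm is a tmpfs mount present on most Linux systems; no root needed
# create a 64 MiB test file in RAM (illustrative size; real data must fit in RAM)
dd if=/dev/zero of=/dev/shm/testfile bs=1M count=64 status=none
ls -lh /dev/shm/testfile     # lives in RAM, not on the spinning disks
rm /dev/shm/testfile         # frees the RAM again
```

Anything served from such a path never touches the HDDs, which is why caching setups like this can saturate the NIC regardless of disk speed.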
You just talked shit and won't even acknowledge it. Shame on you.
Yes we can, and that's why your comment triggered so many of us.
You claimed that there's no need for a 10Gbit port because the storage on these systems can't saturate it. Even if that were true, it's a silly statement.
It doesn't matter whether a 10Gbit transfer rate can be achieved over a long period. The point is that 1Gbit is not nearly enough, and 10Gbit is the next logical upgrade.
sorry... do they need to be on the same rack?
I had 10+ servers with Hetzner, all of them connected to a 10G switch and able to push the full 10 Gbit among themselves, and traffic between them is free.
But you have to ensure all of them are in the same rack, else they will charge you for the transfer. Also, if you want to add more servers in the future, you can reserve a rack spot with them.
I have the 10G Hetzner port upgrade on a 9900K server with NVMe, and it's capped to roughly 3 Gbit/s according to YABS and speedtest results.
So it's basically a scam, or an overloaded rack. I haven't bothered to open a ticket about it yet. I'm slowly migrating stuff to OVH right now anyway.
They billed me an extra 43€ just for moving the server to a 10G port rack.
Hetzner is not exactly known for having a stellar network. I mean it's good for what you pay, but if you're looking to max out the 10 gig public port, you definitely want to be with a different host. AFAIK, Hetzner does not physically limit the port or tremendously oversell their racks; they just lack the network peering.
Not sure where this notion comes from, Hetzner has a great network and I never had any issues maxing out the 10G port when I had a server with them.
I am not a technician myself, so I checked with one of the teammates from Networking. He wrote: "We don't cap network speeds. But we only guarantee the speeds within the internal network. So if the speed is not 10 Gbps to a target outside the Hetzner network, we can't do much. Please run a test against http://speed.hetzner.de/10GB.bin with multiple network threads. And if the speed is not satisfactory, you can open a ticket." If you have any related questions, please contact the Networking Team. --Katie
Speedtest won't give you an accurate result -- I'd suggest iperf3 or aria2.
E.g.
aria2c -x 8 https://speed.hetzner.de/10GB.bin
(1:252)# aria2c -x 8 https://speed.hetzner.de/10GB.bin
08/25 12:37:16 [NOTICE] Downloading 1 item(s)
[#dadd87 9.6GiB/9.7GiB(98%) CN:5 DL:534MiB]
08/25 12:37:35 [NOTICE] Download complete: /root/10GB.bin
Download Results:
gid |stat|avg speed |path/URI
======+====+===========+=======================================================
dadd87|OK | 523MiB/s|/root/10GB.bin
Status Legend:
(OK):download completed.
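To put that aria2 figure in context, here's a quick conversion of MiB/s into line-rate Gbit/s (534 is the average speed from the run above):

```shell
# MiB/s -> Gbit/s: multiply by 2^20 bytes, then by 8 bits, divide by 10^9
mibs=534
awk -v m="$mibs" 'BEGIN { printf "%.2f Gbit/s\n", m * 1048576 * 8 / 1e9 }'
# prints: 4.48 Gbit/s
```

So ~534 MiB/s works out to roughly 4.5 Gbit/s, i.e. less than half of the 10 Gbit port.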
In short, the rack is just overloaded as fuck and I'm lucky to get even half the port speed. Right now it's off-peak hours, but during evenings I'm lucky to reach 3 Gbit/s, and on weekends not even that.
Not worth paying 50€/m extra with VAT.
This is not about overload; it's about peering.
Locally it will push the full 10 Gbit pipe easily.
How is it peering if I'm using an internal speedtest source inside their internal network?
aria2c -x 8 https://speed.hetzner.de/10GB.bin
I ran this same internal speed test 2 weeks ago, on a Friday night, and got 280 MiB/s.
You are just full of shit, and this place is full of Hetzner fanboys that can't criticize anything lmao.
No wonder the provider devolves further each year. The latest being the new IP setup fees, with some mid-management dude asspulling the price chart.
My point is, it's not worth paying 50€ (VAT included) monthly for the upgrade if there is no guarantee for the bandwidth and they place you in an overloaded rack. You might get the full 10G, or you might get what I posted above.
Calm down and learn a little bit about how things work.
A fan? Definitely not. Each provider has pros and cons.
And you should stop throwing blame around.
The test bin file also has a limited speed, and you're not the only one using that link to test with.
So what is your point @CalmDown?
First you mention a peering issue, now you mention that the test server is overloaded?
@chocolateshirt The peering comment referred to the iperf test. That was my point.