New on LowEndTalk? Please Register and read our Community Rules.
Comments
@classical: Ehm, what are you referring to? If you do not want to read old threads, stay here for a few years and you'll probably know. I do not think it is about falling down and standing up in this case.
Well, did he scam anyone, steal, or kill??
Who claimed this?
The summary of this thread is that their network is screwed up (bandwidth below 10 Mbps), they have been trying to fix it for months without providing any real solution to existing customers, yet they are running promotions with 50% free credits. Some of us hoped they would rather prioritize fixing open issues instead of running promos. That's what I disliked.
@dfroe your problem should be resolved now.
Is your storage VPS KVM or LXC? I hear people say their new KVM storage VPS is much more stable.
It's KVM. We had a very, very bad experience with LXC, so everything is KVM now. And yes, it's much more stable. We have also changed the module, and the new one is not generating any problems.
Thanks for taking care of this. I didn't receive any update on the ticket, so I wasn't aware of any changes.
Network performance is a bit better now, but still far below average compared to last year, when everything was working fine.
With one TCP stream I still cannot push more than 10 Mbit/s. At least that doubled compared to 5 Mbit/s before, but it is still very, very low.
When increasing the number of TCP streams I can actually get 90 Mbit/s using 10 simultaneous TCP streams. But the speed of one single TCP stream is still limited to 10 Mbit/s.
Is this limit enforced by you or your upstream? 10 Mbit/s for one stream is IMHO quite low.
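For reference, this is roughly how I am testing (assuming iperf3 is installed; the server is one of the public iperf servers mentioned further down):

```shell
# One TCP stream vs. ten parallel streams (-P 10)
iperf3 -c ping.online.net -t 10          # single stream: ~10 Mbit/s here
iperf3 -c ping.online.net -t 10 -P 10    # ten streams: ~90 Mbit/s aggregate here
```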
From my experience this bad quality started when you switched all outbound traffic routing from Liberty Global to GTS. Do you see any chance of going back to Liberty Global, at least for a test?
Last year when everything was fine I had constant 94.4 Mbit/s on my Fast Ethernet link with one single TCP stream.
I am testing with iperf3 against ping.online.net and bouygues.iperf.fr, results are the same.
From other servers I can easily send several hundred Mbps towards them, with one single TCP stream.
Interesting. I will call GTS to see if there is any limitation there. Liberty Global works fine, I can confirm, but I still don't understand why GTS is such a mess. Still investigating this case, and I will install a dedi in your rack for testing purposes.
Didn't you say earlier you planned to move the current LXC instances to KVM? What about that?
If your uplink is multiple 10G links to each switch in a LAG and you use cheap switches, you'll find the hashing algorithm causes uneven distribution among the links, and therefore problems for connections tied to the unbalanced link. With the limited info available, this sounds like the cause.
We have tried this with some clients, and for some reason it was denied... so we finally decided to wait until the service expires. If you want, please open a ticket and I will be glad to migrate you. Also make a backup, because all your data will be lost.
Yes, @cociu is a mass murderer and he's stated it here.
He has the blood of thousands on his hands and won't stop killing until he's stopped.
It was KVM. Just frustrating that it dies for some reason.
That sounds like a latency or packet loss problem. See if you're getting double-digit pings just to get out of the network. If that is low, check for packet loss. The 0's in the iperf output are packet loss or a buffer issue.
Check if BBR is still enabled in sysctl.
Lastly, do a UDP test to see how fat the pipe is, that will show the maximum possible throughput regardless of latency or packet loss.
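Roughly what I mean, assuming a Linux VPS with iperf3 installed (the target server is just the one used earlier in the thread):

```shell
# 1. Latency out of the network
ping -c 10 ping.online.net

# 2. Is BBR (still) the active congestion control?
sysctl net.ipv4.tcp_congestion_control
sysctl net.ipv4.tcp_available_congestion_control

# 3. UDP test (-u) at a fixed target bandwidth (-b): shows raw pipe
#    capacity regardless of latency; watch the reported loss percentage
iperf3 -u -b 100M -c ping.online.net -t 10
```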
@TimboJones these are some good hints further narrowing down the root cause of this issue.
Using BBR actually achieves a similar result with one TCP stream as multiple simultaneous traditional CUBIC streams. With BBR I am surprised to get up to 80 Mbps, i.e. quite near the maximum of 94.4.
Personally I do not like BBR because of its unfairness (but that's a different story). Assuming there is some congestion, BBR will most likely greedily grab bandwidth from other streams.
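For anyone who wants to reproduce this: switching the congestion control to BBR on a reasonably recent kernel (4.9 or later, needs root) looks roughly like this:

```shell
# Load the BBR module and make it the default congestion control
modprobe tcp_bbr
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Verify it took effect
sysctl net.ipv4.tcp_congestion_control
```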
With iperf I can easily measure packet loss.
Remaining at 1 Mbps, everything looks fine:
As soon as I increase bandwidth, I see a slight but absolutely constant packet loss.
This packet loss will trigger a reduction of bandwidth in traditional TCP CC as seen in the 0's. Good point.
Usually packet loss should only occur when exceeding the available bandwidth, i.e. when there are more packets in the transmit queue than capacity on the link, right?
In this case the packet loss seems "normal" because I can increase bandwidth despite packet loss?!
When playing around with UDP bandwidth between 5 and 50 Mbps, the packet loss remains more or less constant between 0.05% and 0.1%.
Is it "normal" to see such a behaviour?
I'd guess it does not really feel like congestion, as the packet loss remains similar across the range between 5 and 50 Mbps. And when hammering the pipe with BBR "ignoring" the packet loss (not using it as a trigger to reduce throughput), I can push 80 Mbps.
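A back-of-the-envelope check supports this: the classic Mathis formula, rate ≈ (MSS/RTT) · 1.22/√p, predicts roughly the single-stream ceiling I am seeing, given the observed 0.05-0.1% loss and an RTT of about 30 ms (the RTT is my assumption, not measured here):

```shell
# Mathis estimate of loss-limited TCP throughput:
#   rate = (MSS/RTT) * 1.22/sqrt(p)
# with MSS = 1460 bytes, assumed RTT = 30 ms, loss p = 0.05% and 0.1%
awk 'BEGIN {
  mss = 1460 * 8      # MSS in bits
  rtt = 0.030         # assumed round-trip time in seconds
  for (i = 1; i <= 2; i++) {
    p = (i == 1) ? 0.0005 : 0.001
    printf "loss %.2f%% -> ~%.1f Mbit/s per stream\n", \
           p * 100, (mss / rtt) * 1.22 / sqrt(p) / 1e6
  }
}'
# loss 0.05% -> ~21.2 Mbit/s per stream
# loss 0.10% -> ~15.0 Mbit/s per stream
```

So ~15-21 Mbit/s predicted versus ~10 Mbit/s observed: the right order of magnitude, meaning the constant loss alone can plausibly explain the single-stream ceiling.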
I'd also say it's not an indicator for buffer bloat on some switch or router as buffering packets wouldn't lead to packet loss as seen in UDP tests, right?
I do not have any evidence, as I cannot influence the outbound routing between Liberty Global as it was before and GTS now. But to me it smells like the new GTS link introduced this slight, constant packet loss.
Maybe these hints can help @cociu with further troubleshooting and hopefully finally fixing these issues.
For a cheap seedbox, that $53/3yr deal can't be beat. Scp permaseed stuff over there at 20MB/s from hetzner box and just works most of the time.
For production stuff, you're dumb
Hi, can you check my ticket asking for credit? It's been more than a week now @cociu
No, it's not normal to have those packets lost. It doesn't look like an over-capacity issue, IMO. It looks similar to a project I worked on with 10G fiber with physical media errors. I wouldn't be surprised if the datacenter had bad fiber. Wasn't that what turned out to be the fault the last time they got past super bad network issues?
Just for the records and for the sake of transparency: Something seems to have changed today.
I have not yet received a reply to my tickets, so I cannot say whether they finally found and fixed the root cause of the messy network. Their outbound routing is still all GTS, but the network quality seems to be back to normal. Let's see if it remains like it is right now. SmokePing should verify this as well in a couple of days. I have been waiting quite a while to see such an output again.
Renewed my VPS with HS today to find their billing finally automatically detects the paypal payment and marks invoices paid now. Good job @cociu!
Really? That's also a big improvement. Over the last few years I always had to open a ticket after each PayPal payment to have it manually checked and the invoice marked as paid. The last time, I tried paying directly with credit card through their Mobilpay gateway, but this wasn't a good idea either. No confirmation at all for six hours (even from Mobilpay), and although I entered an amount in EUR, Mobilpay charged my credit card in RON, resulting in a slightly higher amount of EUR on my credit card statement.
So, things moving again into the right direction?
Oh, I'd bet a box of donuts it was due to @MikePT.
I have a storage VPS at HS. I have huge performance issues, the ticket has been open for almost two months, and they cannot fix it.
We do not have such old tickets... can you PM me the ticket number, please? If you reply in the same ticket without having received a response (normally within 24-48 hours), your ticket goes down in the list and is marked like a new one... Anyway, please PM me the ticket number.
I do get responses. But unfortunately they cannot fix the issue (they need to wait for Adrian or Marcus!?). I sent you the ticket number.
@cociu Any sweet yearly kvm promo, coming this April ?
It will be very difficult to make offers... We are full for the next month. But you never know. Thanks for your interest.
@cociu
I didn't get any response from you. I sent you a PM. The ticket is also still "open".
Yes, you will have one within a maximum of 24 hours. I am not a tech guy; I just saw the ticket and there were multiple replies. It's true there is no result until now, because there is a major issue on this node and it will take weeks to fix due to the huge TBs of data. Anyway, if you are OK with it, you can ask to change nodes and that will be almost instant.