
HostSolutions.ro - Get Free 50% Credit until to 20-Mar-2020


Comments

  • dfroe Member, Host Rep

    @classical: Ehm, you are referring to what? If you do not want to read old threads, stay here for a few years and you'll probably know. I do not think it is about falling down and standing up in this case.

  • @dfroe said:
    @classical: Ehm, you are referring to what? If you do not want to read old threads, stay here for a few years and you'll probably know. I do not think it is about falling down and standing up in this case.

    Well, did he scam anyone, steal, or kill?

  • dfroe Member, Host Rep

    @classical said: Well, did he scam anyone, steal, or kill?

    Who claimed this?

    The summary of this thread is that their network is screwed up (bandwidth below 10 Mbps), they have been trying to fix it for months without providing any real solution to existing customers, yet they are running promotions with 50% free credit. Some of us hoped they would prioritize fixing the open issues instead of running promos. That's what I disliked.

    Thanked by: TimboJones
  • cociu Member

    @dfroe your problem should be resolved now.

  • @kennsann said:
    I'd just like to complain as well about my Storage VPS with them; this has happened twice now.

    1. Probably a month ago, I needed to SSH into the server to add some things after it had been running for over two months. Suddenly I couldn't SSH into my server; I know I didn't make any mistakes with the password or whatever protection is on SSH. So I restarted it via the client control panel. Nothing happened, and now my web apps were no longer loading. Nothing was important enough to cry over, so I figured maybe something was corrupted somewhere, and reinstalled the OS. And the server died. Literally.

    I opened a ticket to get it fixed; what they did was not a manual reset of the server I was on, but they moved me to another node! So that probably means they could no longer do anything with the previous server I had.

    2. Fast forward to yesterday. I needed to do the same thing, add some stuff to the server, and went to SSH in; no luck again. Tried to restart in the client panel, still can't SSH into the server and my web apps are all down again. I haven't tried an OS reinstall yet, but I think I know where this is going. :/

    Overall, dirt cheap, but I guess you get what you pay for.

    Edit: Update, reinstalled the OS. It worked this time. But still frustrating to do this all over again. :/

    Is your storage VPS KVM or LXC? I hear people say their new KVM storage VPS is much more stable.

  • cociu Member

    Kiwi83 said: Is your storage VPS KVM or LXC? I hear people say their new KVM storage VPS is much more stable.

    It is KVM. We had a very, very bad experience with LXC, so everything is KVM now. And yes, it is much more stable. Also, we have changed the module and the new one is not generating any problems.

  • dfroe Member, Host Rep

    @cociu said: your problem should be resolved now.

    Thanks for taking care of this; I didn't receive any update on the ticket, so I wasn't aware of any changes.

    Network performance is a bit better now, but still far below average and far below how it was last year when everything was working fine.

    With one TCP stream I still cannot push more than 10 Mbit/s. At least that has doubled compared to the 5 Mbit/s before, but it is still very, very low.

    When increasing the number of TCP streams I can actually get 90 Mbit/s using 10 simultaneous TCP streams. But the speed of one TCP stream is still limited to 10 Mbit/s.

    Is this limit enforced by you or your upstream? 10 Mbit/s for one stream is IMHO quite low.

    From my experience this bad quality started when you switched routing all outbound traffic from Liberty Global to GTS. Do you maybe see any chance to go back to Liberty Global, at least for a test?

    Last year when everything was fine I had constant 94.4 Mbit/s on my Fast Ethernet link with one single TCP stream.

    I am testing with iperf3 against ping.online.net and bouygues.iperf.fr, results are the same.
    From other servers I can easily send several hundred Mbps towards them, with one single TCP stream.
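
    Roughly, the single-stream versus multi-stream runs look like the sketch below; the port expression is the same one I use against bouygues.iperf.fr later in this thread, and -P is iperf3's flag for parallel streams.

    # one TCP stream - tops out around 10 Mbit/s here
    iperf3 -4 -t 15 -p $((9200+(RANDOM%23))) -c bouygues.iperf.fr

    # ten parallel TCP streams (-P 10) - roughly 90 Mbit/s aggregate
    iperf3 -4 -t 15 -P 10 -p $((9200+(RANDOM%23))) -c bouygues.iperf.fr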

  • cociu Member

    dfroe said: Thanks for taking care of this; I didn't receive any update on the ticket, so I wasn't aware of any changes.

    Network performance is a bit better now, but still far below average and far below how it was last year when everything was working fine.

    With one TCP stream I still cannot push more than 10 Mbit/s. At least that has doubled compared to the 5 Mbit/s before, but it is still very, very low.

    When increasing the number of TCP streams I can actually get 90 Mbit/s using 10 simultaneous TCP streams. But the speed of one TCP stream is still limited to 10 Mbit/s.

    Is this limit enforced by you or your upstream? 10 Mbit/s for one stream is IMHO quite low.

    From my experience this bad quality started when you switched routing all outbound traffic from Liberty Global to GTS. Do you maybe see any chance to go back to Liberty Global, at least for a test?

    Last year when everything was fine I had constant 94.4 Mbit/s on my Fast Ethernet link with one single TCP stream.

    I am testing with iperf3 against ping.online.net and bouygues.iperf.fr, results are the same.

    From other servers I can easily send several hundred Mbps towards them, with one single TCP stream.

    Interesting, I will call GTS to see if there is any limitation there. Liberty Global works fine, I can confirm, but I still don't understand why GTS is a mess. Still investigating this case, and I will install a dedi in your rack for testing purposes.

  • @cociu said:

    Kiwi83 said: Is your storage VPS KVM or LXC? I hear people say their new KVM storage VPS is much more stable.

    It is KVM. We had a very, very bad experience with LXC, so everything is KVM now. And yes, it is much more stable. Also, we have changed the module and the new one is not generating any problems.

    Didn't you say earlier that you planned to move the existing LXC to KVM? What about that?

  • Clouvider Member, Patron Provider

    @cociu said:

    dfroe said: Thanks for taking care of this; I didn't receive any update on the ticket, so I wasn't aware of any changes.

    Network performance is a bit better now, but still far below average and far below how it was last year when everything was working fine.

    With one TCP stream I still cannot push more than 10 Mbit/s. At least that has doubled compared to the 5 Mbit/s before, but it is still very, very low.

    When increasing the number of TCP streams I can actually get 90 Mbit/s using 10 simultaneous TCP streams. But the speed of one TCP stream is still limited to 10 Mbit/s.

    Is this limit enforced by you or your upstream? 10 Mbit/s for one stream is IMHO quite low.

    From my experience this bad quality started when you switched routing all outbound traffic from Liberty Global to GTS. Do you maybe see any chance to go back to Liberty Global, at least for a test?

    Last year when everything was fine I had constant 94.4 Mbit/s on my Fast Ethernet link with one single TCP stream.

    I am testing with iperf3 against ping.online.net and bouygues.iperf.fr, results are the same.

    From other servers I can easily send several hundred Mbps towards them, with one single TCP stream.

    Interesting, I will call GTS to see if there is any limitation there. Liberty Global works fine, I can confirm, but I still don't understand why GTS is a mess. Still investigating this case, and I will install a dedi in your rack for testing purposes.

    If your uplink is multiple 10G links to each switch in a LAG and you use a cheap switch, you'll find the problem is the hashing algorithm causing uneven distribution among the links, and therefore problems with connections tied to the overloaded link - with the limited info, this sounds like the cause.
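
    If the aggregation happened to be a Linux bond on the sending side (purely an assumption for illustration; the actual switch/LAG setup is unknown, and the interface names below are examples), the hash policy and the per-member traffic split could be sanity-checked roughly like this:

    # transmit hash policy and member links of a hypothetical bond0
    grep -E 'Transmit Hash Policy|Slave Interface' /proc/net/bonding/bond0

    # per-member byte counters - a heavily skewed split points at the hashing algorithm
    ip -s link show eth0
    ip -s link show eth1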

  • cociu Member

    ben47955 said: Didn't you say earlier that you planned to move the existing LXC to KVM? What about that?

    We tried this with some clients and for some reason it was declined... so we finally decided to wait until the service expires. If you want, please open a ticket and I will be glad to migrate you. Also, make a backup because all your data will be lost.

  • raindog308 Administrator, Veteran

    classical said: Well, did he scam anyone, steal, or kill?

    Yes, @cociu is a mass murderer and he's stated it here.

    cociu said: Also, you have asked me many times what I am; I am Dracula.

    He has the blood of thousands on his hands and won't stop killing until he's stopped.

    Thanked by: skorous
  • @Kiwi83 said:

    @kennsann said:
    I'd just like to complain as well about my Storage VPS with them; this has happened twice now.

    1. Probably a month ago, I needed to SSH into the server to add some things after it had been running for over two months. Suddenly I couldn't SSH into my server; I know I didn't make any mistakes with the password or whatever protection is on SSH. So I restarted it via the client control panel. Nothing happened, and now my web apps were no longer loading. Nothing was important enough to cry over, so I figured maybe something was corrupted somewhere, and reinstalled the OS. And the server died. Literally.

    I opened a ticket to get it fixed; what they did was not a manual reset of the server I was on, but they moved me to another node! So that probably means they could no longer do anything with the previous server I had.

    2. Fast forward to yesterday. I needed to do the same thing, add some stuff to the server, and went to SSH in; no luck again. Tried to restart in the client panel, still can't SSH into the server and my web apps are all down again. I haven't tried an OS reinstall yet, but I think I know where this is going. :/

    Overall, dirt cheap, but I guess you get what you pay for.

    Edit: Update, reinstalled the OS. It worked this time. But still frustrating to do this all over again. :/

    Is your storage VPS KVM or LXC? I hear people say their new KVM storage VPS is much more stable.

    It was KVM. Just frustrating that it died for some reason. :/

  • TimboJones Member
    edited March 2020

    @dfroe said:

    @cociu said: your problem should be resolved now.

    Thanks for taking care of this; I didn't receive any update on the ticket, so I wasn't aware of any changes.

    Network performance is a bit better now, but still far below average and far below how it was last year when everything was working fine.

    With one TCP stream I still cannot push more than 10 Mbit/s. At least that has doubled compared to the 5 Mbit/s before, but it is still very, very low.

    When increasing the number of TCP streams I can actually get 90 Mbit/s using 10 simultaneous TCP streams. But the speed of one TCP stream is still limited to 10 Mbit/s.

    Is this limit enforced by you or your upstream? 10 Mbit/s for one stream is IMHO quite low.

    From my experience this bad quality started when you switched routing all outbound traffic from Liberty Global to GTS. Do you maybe see any chance to go back to Liberty Global, at least for a test?

    Last year when everything was fine I had constant 94.4 Mbit/s on my Fast Ethernet link with one single TCP stream.

    I am testing with iperf3 against ping.online.net and bouygues.iperf.fr, results are the same.
    From other servers I can easily send several hundred Mbps towards them, with one single TCP stream.

    That sounds like a latency or packet loss problem. See if you're getting double-digit pings just to get out of the network. If that is low, check for packet loss. The 0's in the iperf output indicate packet loss or a buffer issue.

    Check if BBR is still enabled in sysctl.

    Lastly, do a UDP test to see how fat the pipe is; that will show the maximum possible throughput regardless of latency or packet loss.
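
    Concretely, those three checks might look roughly like this (a sketch; the targets and the port expression are simply reused from earlier in the thread):

    # round-trip latency out of the network
    ping -c 10 ping.online.net

    # which congestion control is currently active (BBR or CUBIC)?
    sysctl net.ipv4.tcp_congestion_control

    # UDP test to gauge raw pipe capacity independent of loss-based congestion control
    iperf3 -4 -u -b 90M -t 15 -p $((9200+(RANDOM%23))) -c bouygues.iperf.fr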

    Thanked by: dfroe
  • dfroe Member, Host Rep

    @TimboJones these are some good hints for further narrowing down the root cause of this issue.

    Using BBR, a single TCP stream actually achieves a similar result to multiple simultaneous traditional CUBIC streams. With BBR I am surprised to get up to 80 Mbps, i.e. quite close to the maximum of 94.4.

    Personally I do not like BBR because of its unfairness (but that's a different story). Assuming there is some congestion, BBR will most likely greedily grab bandwidth from other streams.
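
    For reference, switching the congestion control for such a comparison is a one-liner (a sketch; it assumes a kernel with the tcp_bbr module available):

    # list available algorithms, switch to BBR for the test, then back to CUBIC
    sysctl net.ipv4.tcp_available_congestion_control
    sysctl net.ipv4.tcp_congestion_control=bbr
    sysctl net.ipv4.tcp_congestion_control=cubic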

    With iperf I can easily measure packet loss.

    Remaining at 1 Mbps, everything looks fine:

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-1.00   sec   113 KBytes   927 Kbits/sec  0.054 ms  0/80 (0%)
    [  5]   1.00-2.00   sec   122 KBytes   996 Kbits/sec  0.032 ms  0/86 (0%)
    [  5]   2.00-3.00   sec   122 KBytes   996 Kbits/sec  0.126 ms  0/86 (0%)
    [  5]   3.00-4.00   sec   123 KBytes  1.01 Mbits/sec  0.031 ms  0/87 (0%)
    [  5]   4.00-5.00   sec   122 KBytes   996 Kbits/sec  0.039 ms  0/86 (0%)
    [  5]   5.00-6.00   sec   122 KBytes   996 Kbits/sec  0.039 ms  0/86 (0%)
    [  5]   6.00-7.00   sec   123 KBytes  1.01 Mbits/sec  0.066 ms  0/87 (0%)
    [  5]   7.00-8.00   sec   122 KBytes   996 Kbits/sec  0.037 ms  0/86 (0%)
    [  5]   8.00-9.00   sec   122 KBytes   996 Kbits/sec  0.048 ms  0/86 (0%)
    [  5]   9.00-10.00  sec   123 KBytes  1.01 Mbits/sec  0.063 ms  0/87 (0%)
    [  5]  10.00-10.08  sec  9.90 KBytes   988 Kbits/sec  0.066 ms  0/7 (0%)
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-10.08  sec  0.00 Bytes  0.00 bits/sec  0.066 ms  0/864 (0%)
    -----------------------------------------------------------
    

    As soon as I increase bandwidth, I see a light but absolutely constant packet loss.

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-1.00   sec   559 KBytes  4.58 Mbits/sec  0.075 ms  1/396 (0.25%)
    [  5]   1.00-2.00   sec   611 KBytes  5.00 Mbits/sec  0.027 ms  0/432 (0%)
    [  5]   2.00-3.00   sec   609 KBytes  4.99 Mbits/sec  0.083 ms  1/432 (0.23%)
    [  5]   3.00-4.00   sec   609 KBytes  4.99 Mbits/sec  0.047 ms  0/431 (0%)
    [  5]   4.00-5.00   sec   609 KBytes  4.99 Mbits/sec  0.338 ms  1/432 (0.23%)
    [  5]   5.00-6.00   sec   611 KBytes  5.00 Mbits/sec  0.036 ms  0/432 (0%)
    [  5]   6.00-7.00   sec   609 KBytes  4.99 Mbits/sec  0.052 ms  0/431 (0%)
    [  5]   7.00-8.00   sec   611 KBytes  5.00 Mbits/sec  0.045 ms  0/432 (0%)
    [  5]   8.00-9.00   sec   608 KBytes  4.98 Mbits/sec  0.031 ms  1/431 (0.23%)
    [  5]   9.00-10.00  sec   609 KBytes  4.99 Mbits/sec  0.062 ms  1/432 (0.23%)
    [  5]  10.00-10.08  sec  49.5 KBytes  4.94 Mbits/sec  0.057 ms  0/35 (0%)
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-10.08  sec  0.00 Bytes  0.00 bits/sec  0.057 ms  5/4316 (0.12%)
    -----------------------------------------------------------
    

    This packet loss will trigger a reduction of bandwidth in traditional TCP CC as seen in the 0's. Good point.
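
    As a sanity check, the classic Mathis estimate for loss-based congestion control gives roughly the same ceiling (a rough sketch only; it assumes an RTT of about 40 ms to the French iperf servers, which I have not measured here, and Reno-style constants, while CUBIC behaves somewhat differently):

    rate ≈ 1.22 * MSS / (RTT * sqrt(p))
         ≈ 1.22 * (1460 * 8 bit) / (0.040 s * sqrt(0.002))  ≈  8 Mbit/s   (p = 0.2% loss)
         ≈ 1.22 * (1460 * 8 bit) / (0.040 s * sqrt(0.001))  ≈ 11 Mbit/s   (p = 0.1% loss)

    So a constant loss rate of around 0.1-0.2% is already enough to pin a single loss-based stream near 10 Mbit/s, while BBR, which does not use loss as its primary congestion signal, can push well past it.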

    Usually packet loss should only occur when exceeding the available bandwidth, i.e. when there are more packets in the transmit queue than capacity on the link, right?
    In this case the packet loss seems "normal" because I can increase bandwidth despite packet loss?!
    When playing around with UDP bandwidth between 5 and 50 Mbps, the packet loss remains more or less constant between 0.05% and 0.1%.

    Is it "normal" to see such behaviour?

    I'd guess it does not really feel like congestion, as the packet loss remains similar in the range between 5 and 50 Mbps. And when hammering the pipe with BBR "ignoring" the packet loss (not using it as a trigger to reduce throughput), I can push 80 Mbps.

    I'd also say it's not an indicator of buffer bloat on some switch or router, as buffering packets wouldn't lead to the packet loss seen in the UDP tests, right?

    I do not have any evidence, as I cannot influence the outbound routing between Liberty Global (as it was before) and GTS (now). But to me it smells like the new GTS link introduced this light, constant packet loss.
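
    One way to try to localize that loss per hop (a sketch; mtr would have to be installed, and ICMP rate limiting on intermediate routers can fake loss at individual hops, so only loss that persists all the way to the destination is meaningful):

    # 100-cycle per-hop loss report towards one of the iperf targets
    mtr --report --report-cycles 100 bouygues.iperf.fr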

    Maybe these hints can help @cociu with further troubleshooting and hopefully finally fixing these issues.

  • king8654 Member
    edited March 2020

    For a cheap seedbox, that $53/3yr deal can't be beat. I scp permaseed stuff over there at 20 MB/s from a Hetzner box and it just works most of the time.

    For production stuff, you're dumb.

  • @cociu said:

    BBTN said: Hi, when will payments made on 20/03/2020 be honored towards your promotional offer?

    If you have uploaded credit and not received the bonus, please open a ticket, because something is missing here. Thanks.

    Hi, can you check my ticket asking for the credit? It's been more than a week now @cociu

  • @dfroe said:
    @TimboJones these are some good hints for further narrowing down the root cause of this issue.

    Using BBR, a single TCP stream actually achieves a similar result to multiple simultaneous traditional CUBIC streams. With BBR I am surprised to get up to 80 Mbps, i.e. quite close to the maximum of 94.4.

    Personally I do not like BBR because of its unfairness (but that's a different story). Assuming there is some congestion, BBR will most likely greedily grab bandwidth from other streams.

    With iperf I can easily measure packet loss.

    Remaining at 1 Mbps, everything looks fine:

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-1.00   sec   113 KBytes   927 Kbits/sec  0.054 ms  0/80 (0%)
    [  5]   1.00-2.00   sec   122 KBytes   996 Kbits/sec  0.032 ms  0/86 (0%)
    [  5]   2.00-3.00   sec   122 KBytes   996 Kbits/sec  0.126 ms  0/86 (0%)
    [  5]   3.00-4.00   sec   123 KBytes  1.01 Mbits/sec  0.031 ms  0/87 (0%)
    [  5]   4.00-5.00   sec   122 KBytes   996 Kbits/sec  0.039 ms  0/86 (0%)
    [  5]   5.00-6.00   sec   122 KBytes   996 Kbits/sec  0.039 ms  0/86 (0%)
    [  5]   6.00-7.00   sec   123 KBytes  1.01 Mbits/sec  0.066 ms  0/87 (0%)
    [  5]   7.00-8.00   sec   122 KBytes   996 Kbits/sec  0.037 ms  0/86 (0%)
    [  5]   8.00-9.00   sec   122 KBytes   996 Kbits/sec  0.048 ms  0/86 (0%)
    [  5]   9.00-10.00  sec   123 KBytes  1.01 Mbits/sec  0.063 ms  0/87 (0%)
    [  5]  10.00-10.08  sec  9.90 KBytes   988 Kbits/sec  0.066 ms  0/7 (0%)
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-10.08  sec  0.00 Bytes  0.00 bits/sec  0.066 ms  0/864 (0%)
    -----------------------------------------------------------
    

    As soon as I increase bandwidth, I see a light but absolutely constant packet loss.

    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-1.00   sec   559 KBytes  4.58 Mbits/sec  0.075 ms  1/396 (0.25%)
    [  5]   1.00-2.00   sec   611 KBytes  5.00 Mbits/sec  0.027 ms  0/432 (0%)
    [  5]   2.00-3.00   sec   609 KBytes  4.99 Mbits/sec  0.083 ms  1/432 (0.23%)
    [  5]   3.00-4.00   sec   609 KBytes  4.99 Mbits/sec  0.047 ms  0/431 (0%)
    [  5]   4.00-5.00   sec   609 KBytes  4.99 Mbits/sec  0.338 ms  1/432 (0.23%)
    [  5]   5.00-6.00   sec   611 KBytes  5.00 Mbits/sec  0.036 ms  0/432 (0%)
    [  5]   6.00-7.00   sec   609 KBytes  4.99 Mbits/sec  0.052 ms  0/431 (0%)
    [  5]   7.00-8.00   sec   611 KBytes  5.00 Mbits/sec  0.045 ms  0/432 (0%)
    [  5]   8.00-9.00   sec   608 KBytes  4.98 Mbits/sec  0.031 ms  1/431 (0.23%)
    [  5]   9.00-10.00  sec   609 KBytes  4.99 Mbits/sec  0.062 ms  1/432 (0.23%)
    [  5]  10.00-10.08  sec  49.5 KBytes  4.94 Mbits/sec  0.057 ms  0/35 (0%)
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
    [  5]   0.00-10.08  sec  0.00 Bytes  0.00 bits/sec  0.057 ms  5/4316 (0.12%)
    -----------------------------------------------------------
    

    This packet loss will trigger a reduction of bandwidth in traditional TCP CC as seen in the 0's. Good point.

    Usually packet loss should only occur when exceeding the available bandwidth, i.e. when there are more packets in the transmit queue than capacity on the link, right?
    In this case the packet loss seems "normal" because I can increase bandwidth despite packet loss?!
    When playing around with UDP bandwidth between 5 and 50 Mbps, the packet loss remains more or less constant between 0.05% and 0.1%.

    Is it "normal" to see such behaviour?

    I'd guess it does not really feel like congestion, as the packet loss remains similar in the range between 5 and 50 Mbps. And when hammering the pipe with BBR "ignoring" the packet loss (not using it as a trigger to reduce throughput), I can push 80 Mbps.

    I'd also say it's not an indicator of buffer bloat on some switch or router, as buffering packets wouldn't lead to the packet loss seen in the UDP tests, right?

    I do not have any evidence, as I cannot influence the outbound routing between Liberty Global (as it was before) and GTS (now). But to me it smells like the new GTS link introduced this light, constant packet loss.

    Maybe these hints can help @cociu with further troubleshooting and hopefully finally fixing these issues.

    No, it's not normal to have those packets lost. It doesn't look like an over-capacity issue, IMO. It looks similar to a project I worked on with 10G fiber that had physical media errors. I wouldn't be surprised if the datacenter had bad fiber. Wasn't that what turned out to be the fault the last time they got past super bad network issues?

  • dfroe Member, Host Rep
    edited April 2020

    Just for the record and for the sake of transparency: something seems to have changed today.

    I have not yet received a reply to my tickets, so I cannot say whether they finally found and fixed the root cause of the messy network. Their outbound routing is still all GTS, but the network quality seems to be back to normal. Let's see if it stays the way it is right now. SmokePing should verify this as well over the next couple of days. I have been waiting quite a while to see output like this again. :)

    # sysctl net.ipv4.tcp_congestion_control=cubic
    net.ipv4.tcp_congestion_control = cubic
    # iperf3 -4 -t 15 -p $((9200+(RANDOM%23))) -c bouygues.iperf.fr
    Connecting to host bouygues.iperf.fr, port 9202
    [  5] local 89.35.XX.XX port 61092 connected to 89.84.1.222 port 9202
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  9.67 MBytes  81.1 Mbits/sec    0    857 KBytes
    [  5]   1.00-2.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   2.00-3.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   3.00-4.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   4.00-5.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   5.00-6.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   6.00-7.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   7.00-8.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   8.00-9.00   sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]   9.00-10.00  sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]  10.00-11.00  sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]  11.00-12.00  sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]  12.00-13.00  sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]  13.00-14.00  sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    [  5]  14.00-15.00  sec  11.2 MBytes  94.4 Mbits/sec    0    857 KBytes
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate         Retr
    [  5]   0.00-15.00  sec   167 MBytes  93.5 Mbits/sec    0             sender
    [  5]   0.00-15.07  sec   164 MBytes  91.1 Mbits/sec                  receiver
    iperf Done.
    
  • pike Veteran
    edited April 2020

    Renewed my VPS with HS today and found that their billing finally detects the PayPal payment automatically and marks invoices as paid now. Good job @cociu!

    Thanked by: dfroe, skorous
  • dfroe Member, Host Rep
    edited April 2020

    @pike said: [...] found that their billing finally detects the PayPal payment automatically and marks invoices as paid now.

    Really? That's also a big improvement. During the last few years I always had to open a ticket after each PayPal payment to have it manually checked and the invoice marked as paid. The last time, I tried paying directly with a credit card through their Mobilpay gateway, but this wasn't a good idea either. No confirmation at all for six hours (not even from Mobilpay), and although I entered an amount in EUR, Mobilpay charged my credit card in RON, resulting in a slightly higher EUR amount on my credit card statement.

    So, things are moving in the right direction again? :)

  • @pike said:
    Renewed my VPS with HS today and found that their billing finally detects the PayPal payment automatically and marks invoices as paid now. Good job @cociu!

    Oh, I'd bet a box of donuts it was due to @MikePT.

    Thanked by: raindog308, MikePT
  • Sven Member

    I have a storage VPS at HS. I have huge performance issues, and the ticket has been open for almost 2 months and they cannot fix it.

  • cociu Member
    edited April 2020

    Sven said: I have a storage VPS at HS. I have huge performance issues, and the ticket has been open for almost 2 months and they cannot fix it.

    We do not have such old tickets... Can you PM me the ticket number, please? If you reply to the same ticket without receiving a response (normally within 24-48 hours), your ticket goes down the list and is marked as a new one... Anyway, please PM me the ticket number.

  • Sven Member
    edited April 2020

    @cociu said:

    Sven said: I have a storage VPS at HS. I have huge performance issues, and the ticket has been open for almost 2 months and they cannot fix it.

    We do not have such old tickets... Can you PM me the ticket number, please? If you reply to the same ticket without receiving a response (normally within 24-48 hours), your ticket goes down the list and is marked as a new one... Anyway, please PM me the ticket number.

    I do get responses. But unfortunately they cannot fix the issue (they need to wait for Adrian or Marcus!?). I'll send you the ticket number.

    @cociu Any sweet yearly KVM promo coming this April?

  • cociu Member
    edited April 2020

    rahulks said: @cociu Any sweet yearly KVM promo coming this April?

    It will be very difficult to make offers... We are full for the next month. But you never know. Thanks for the interest.

  • Sven Member
    edited April 2020

    @cociu said:

    rahulks said: @cociu Any sweet yearly KVM promo coming this April?

    It will be very difficult to make offers... We are full for the next month. But you never know. Thanks for the interest.

    @cociu
    I didn't get any response from you. I sent you a PM. The ticket is also still "open".

  • cociu Member

    I didn't get any response from you. I sent you a PM. The ticket is also still "open".

    Yes, you will have it within a maximum of 24 hours. I am not a tech guy; I just saw the ticket and there were multiple replies. It is true there is no result so far, because there is a major issue on this node and it will take weeks to fix due to the huge amount of TB of data. Anyway, if you are OK with it, you can ask to change node and it will be almost instant.
