Virtualizor bug or kernel bug

AlbaHost Member, Host Rep
edited May 2015 in General

Hello,

We are having trouble with newly created VPSes. At first we thought the problem was with IP routing on our node in Albania, but after we contacted Virtualizor support they stated that it is an ISP problem or a (Xen) kernel bug that would be solved in the next release of Virtualizor (today's release, 2.7.3). After that release we are still facing the same issue, and we have now reproduced it on our node in France (OVH): the same high ping and slow speed as on our Albanian node.
Is anyone else who uses Virtualizor facing this issue?

Proof from the OVH node:

Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from OVH SAS (176.31.59.87)...
Selecting best server based on latency...
Hosted by MEDIACTIVE NETWORK (Paris) [1.59 km]: 2519.075 ms
Testing download speed........................................
Download: 11.74 Mbit/s
Testing upload speed..................................................
Upload: 3.07 Mbit/s
Share results: https://www.speedtest.net/result/4378587271.png

Proof from the Albanian node:

Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Testing from Keminet Ltd. (31.171.155.5)...
Selecting best server based on latency...
Hosted by Shadownet (Ura Vajgurore) [33.62 km]: 1722.631 ms
Testing download speed........................................
Download: 10.17 Mbit/s
Testing upload speed..................................................
Upload: 2.87 Mbit/s
Share results: https://www.speedtest.net/result/4378603855.png

Cheers

Comments

  • AnthonySmith Member, Patron Provider

    which network driver are you using, how much ram does the vps have, which xen bug did they say it is and how different is the result directly from the node?

  • AlbaHost Member, Host Rep
    edited May 2015

    @AnthonySmith said:
    which network driver are you using, how much ram does the vps have, which xen bug did they say it is and how different is the result directly from the node?

    We tried both Realtek and e1000: same results, no difference. The VPSes both have 2 GB RAM, and they did not say which Xen bug it is. We have also downgraded the kernel, with no change.
    Tested directly from the node, here are the results:

    Albanian node:

    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from Keminet Ltd. (31.171.155.*)...
    Selecting best server based on latency...
    Hosted by TRING-COMMUNICATIONS (Tirana) [39.67 km]: 3.009 ms
    Testing download speed........................................
    Download: 49.12 Mbit/s
    Testing upload speed..................................................
    Upload: 49.93 Mbit/s
    Share results: https://www.speedtest.net/result/4379023188.png

    And here from the OVH node:

    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from OVH SAS (37.187.161.1**)...
    Selecting best server based on latency...
    Hosted by FreeMobile (Paris) [1.59 km]: 13.732 ms
    Testing download speed........................................
    Download: 817.24 Mbit/s
    Testing upload speed..................................................
    Upload: 186.09 Mbit/s
    Share results: https://www.speedtest.net/result/4378999904.png

    It's strange: if you reinstall with CentOS 5.x you get great results on both nodes, but if you go with a newer OS like CentOS 6.x or Ubuntu 14.x you get high ping and damn slow speed...

  • AnthonySmith Member, Patron Provider

    I assume it is Xen PV then, are you using pygrub?
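
    If it is PV, the guest should be using the Xen frontend driver rather than an emulated Realtek/e1000 NIC; a quick check from inside the VPS (assuming the interface is eth0):

    ethtool -i eth0

    (the driver field should name the Xen frontend, vif/xen-netfront, not r8169 or e1000)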

    Please give me the output of the following from the domU:

    cat /proc/sys/net/core/rmem_default
    cat /proc/sys/net/core/rmem_max
    cat /proc/sys/net/core/wmem_default
    cat /proc/sys/net/core/wmem_max
    cat /proc/sys/net/ipv4/tcp_sack
    cat /proc/sys/net/ipv4/tcp_window_scaling
    
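    (If it is easier, the same values can be read in one go:)

    sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max net.ipv4.tcp_sack net.ipv4.tcp_window_scaling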

    However, as I type this I notice you seem to have MASSIVE latency on the guest/domU tests compared to the host/dom0, which is obviously the real problem.

    Please ping the dom0 from the domU and then ping the gateway from the domU; it may be an ARP issue, a duplicate MAC address, or a badly set up bridge.
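
    For example (the IPs below are placeholders for your actual addresses):

    # from the domU:
    ping -c 4 <dom0-ip>
    ping -c 4 <gateway-ip>
    ip neigh show     # any duplicate or incomplete ARP entries?
    # from the dom0:
    brctl show        # is the guest vif on the bridge you expect?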

    The problem is not the speed, that is just the symptom; the real problem is the +/- 2000 ms latency.

    From your domU

    AlbaHost said: Hosted by MEDIACTIVE NETWORK (Paris) [1.59 km]: 2519.075 ms

    AlbaHost said: Hosted by Shadownet (Ura Vajgurore) [33.62 km]: 1722.631 ms

    From your dom0

    AlbaHost said: Hosted by TRING-COMMUNICATIONS (Tirana) [39.67 km]: 3.009 ms

    AlbaHost said: Hosted by FreeMobile (Paris) [1.59 km]: 13.732 ms

  • AnthonySmith Member, Patron Provider

    You really need to use the same servers for both tests though, or the results are pretty pointless.
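
    If you are using the usual speedtest-cli python script you can pin the server so both tests hit the same one (the server ID below is just an example):

    speedtest-cli --list | head      # pick a server ID
    speedtest-cli --server 1234      # then use that same ID on both the node and the VPS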

  • AlbaHost Member, Host Rep

    @AnthonySmith said:
    I assume it is Xen PV then, are you using pygrub?

    Please give me the output of the following from the domU:

    > cat /proc/sys/net/core/rmem_default
    > cat /proc/sys/net/core/rmem_max
    > cat /proc/sys/net/core/wmem_default
    > cat /proc/sys/net/core/wmem_max
    > cat /proc/sys/net/ipv4/tcp_sack
    > cat /proc/sys/net/ipv4/tcp_window_scaling
    > 

    However, as I type this I notice you seem to have MASSIVE latency on the guest/domU tests compared to the host/dom0, which is obviously the real problem.

    Please ping the dom0 from the domU and then ping the gateway from the domU; it may be an ARP issue, a duplicate MAC address, or a badly set up bridge.

    The problem is not the speed, that is just the symptom; the real problem is the +/- 2000 ms latency.

    From your domU

    Here is the output from the Albanian VPS:

    [root@testvps ~]# cat /proc/sys/net/core/rmem_default
    229376
    [root@testvps ~]# cat /proc/sys/net/core/rmem_max
    229376
    [root@testvps ~]# cat /proc/sys/net/core/wmem_default
    229376
    [root@testvps ~]# cat /proc/sys/net/core/wmem_max
    229376
    [root@testvps ~]# cat /proc/sys/net/ipv4/tcp_sack
    1
    [root@testvps ~]# cat /proc/sys/net/ipv4/tcp_window_scaling
    1

    And here from the OVH VPS:

    [root@testvps2 ~]# cat /proc/sys/net/core/rmem_default
    229376
    [root@testvps2 ~]# cat /proc/sys/net/core/rmem_max
    229376
    [root@testvps2 ~]# cat /proc/sys/net/core/wmem_default
    229376
    [root@testvps2 ~]# cat /proc/sys/net/core/wmem_max
    229376
    [root@testvps2 ~]# cat /proc/sys/net/ipv4/tcp_sack
    1
    [root@testvps2 ~]# cat /proc/sys/net/ipv4/tcp_window_scaling
    1

    dom0 from domU OVH:

    64 bytes from ns335***.ip-37-187-161.eu (37.187.161.***): icmp_seq=1 ttl=63 time=0.210 ms
    64 bytes from ns335***.ip-37-187-161.eu (37.187.161.***): icmp_seq=2 ttl=63 time=0.181 ms
    64 bytes from ns335***.ip-37-187-161.eu (37.187.161.***): icmp_seq=3 ttl=63 time=0.179 ms
    64 bytes from ns335***.ip-37-187-161.eu (37.187.161.***): icmp_seq=4 ttl=63 time=0.183 ms

    domU gateway:

    PING 37.187.161.254 (37.187.161.254) 56(84) bytes of data.
    64 bytes from 37.187.161.254: icmp_seq=1 ttl=255 time=0.601 ms
    64 bytes from 37.187.161.254: icmp_seq=2 ttl=255 time=1.97 ms
    64 bytes from 37.187.161.254: icmp_seq=3 ttl=255 time=0.623 ms
    64 bytes from 37.187.161.254: icmp_seq=4 ttl=255 time=0.640 ms
    64 bytes from 37.187.161.254: icmp_seq=5 ttl=255 time=0.622 ms
    64 bytes from 37.187.161.254: icmp_seq=6 ttl=255 time=0.576 ms

    dom0 from domU Albanian vps:

    PING 31.171.155.** (31.171.155.**) 56(84) bytes of data.
    64 bytes from 31.171.155.**: icmp_seq=1 ttl=64 time=0.983 ms
    64 bytes from 31.171.155.**: icmp_seq=2 ttl=64 time=0.110 ms
    64 bytes from 31.171.155.**: icmp_seq=3 ttl=64 time=0.085 ms
    64 bytes from 31.171.155.**: icmp_seq=4 ttl=64 time=0.058 ms

    domU gateway:

    PING 31.171.155.* (31.171.155.*) 56(84) bytes of data.
    64 bytes from 31.171.155.*: icmp_seq=1 ttl=64 time=0.201 ms
    64 bytes from 31.171.155.*: icmp_seq=2 ttl=64 time=0.201 ms
    64 bytes from 31.171.155.*: icmp_seq=3 ttl=64 time=0.203 ms
    64 bytes from 31.171.155.*: icmp_seq=4 ttl=64 time=0.157 ms
    64 bytes from 31.171.155.*: icmp_seq=5 ttl=64 time=0.289 ms
    64 bytes from 31.171.155.*: icmp_seq=6 ttl=64 time=0.178 ms

  • AnthonySmith Member, Patron Provider

    Ok so nothing to be alarmed about there.

    The latency is obviously not happening on the server; can you run an mtr from the VPS to 8.8.8.8?

    mtr -s 50 --report 8.8.8.8

    Also, have you tried other sources, not just speedtest.net?
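
    For example, a plain wget of a well-known test file:

    wget -O /dev/null http://cachefly.cachefly.net/100mb.test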

  • AlbaHost Member, Host Rep

    @AnthonySmith said:
    Ok so nothing to be alarmed about there.

    The latency is obviously not happening on the server; can you run an mtr from the VPS to 8.8.8.8?

    mtr -s 50 --report 8.8.8.8

    Also, have you tried other sources, not just speedtest.net?

    OVH:

    HOST: testvps2                    Loss%   Snt   Last   Avg  Best  Wrst StDev
      1. 37.187.161.253               0.0%    10    0.6   1.1   0.6   2.1   0.6
      2. sbg-g2-a9.fr.eu              0.0%    10    1.2   0.9   0.6   1.2   0.2
      3. gsw-g1-a9.fr.eu              0.0%    10   11.1  11.3  11.0  12.5   0.4
      4. ???                         100.0    10    0.0   0.0   0.0   0.0   0.0
      5. ???                         100.0    10    0.0   0.0   0.0   0.0   0.0
      6. 209.85.250.208               0.0%    10   11.6  11.6  11.5  11.7   0.1
      7. google-public-dns-a.google.c 0.0%    10   11.4  11.5  11.4  11.5   0.0

    Albania:

    HOST: testvps                     Loss%   Snt   Last   Avg  Best  Wrst StDev
      1. 31.171.155.1                 0.0%    10    0.2   0.2   0.1   0.2   0.0
      2. 185.18.40.137                0.0%    10    5.6   6.5   0.4  10.6   3.4
      3. 213.163.120.9                0.0%    10    4.9   5.2   4.9   7.0   0.6
      4. r1fra2.core.init7.net        0.0%    10   36.1  39.5  36.1  49.0   5.6
      5. de-cix10.net.google.com      0.0%    10   37.0  36.7  36.4  37.0   0.2
      6. 216.239.47.241               0.0%    10   37.2  37.2  37.1  37.3   0.1
      7. 209.85.246.189               0.0%    10   37.9  37.9  37.8  38.0   0.0
      8. google-public-dns-a.google.c 0.0%    10   37.3  37.3  37.2  37.5   0.1

    I don't think the problem is with the speedtest source; I tried speedof.me but had the same problem...

  • AnthonySmith Member, Patron Provider
    edited May 2015

    Well, I don't know; it seems you only get the high-latency problem from those two locations when using the speedtest script. Sadly you used two different servers when testing the host node, so the results are pretty useless.

    Do you get the same speed issues if you just wget the test file from softlayer Amsterdam?

    could you also run an mtr to paris.speedtest.mediactive-network.net
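
    i.e.

    mtr -s 50 --report paris.speedtest.mediactive-network.net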

    Either way, this is not a Virtualizor bug, and I don't see how this could be a kernel bug. Are you using pygrub with Xen?

  • AlbaHost Member, Host Rep

    @AnthonySmith said:
    Well, I don't know; it seems you only get the high-latency problem from those two locations when using the speedtest script. Sadly you used two different servers when testing the host node, so the results are pretty useless.

    Do you get the same speed issues if you just wget the test file from softlayer Amsterdam?

    could you also run an mtr to paris.speedtest.mediactive-network.net

    Either way, this is not a Virtualizor bug, and I don't see how this could be a kernel bug. Are you using pygrub with Xen?

    Yes, we use pygrub.

  • AnthonySmith Member, Patron Provider
    edited May 2015

    AlbaHost said: Yes, we use pygrub.

    Then it's not a kernel bug; you are just using a native kernel in the VPS.
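
    (With pygrub the domU config points at the bootloader rather than at a fixed kernel; roughly, with illustrative paths:)

    bootloader = "/usr/bin/pygrub"
    # instead of pinned kernel/ramdisk lines such as:
    # kernel  = "/boot/vmlinuz-xen"
    # ramdisk = "/boot/initrd-xen"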

    Please check the speed tests again from the same location as you used to test the host node, and run an mtr to paris.speedtest.mediactive-network.net from the VPS, as well as:

    wget http://speedtest.ams01.softlayer.com/downloads/test100.zip

    from the VPS.

    The only other possibilities I can think of are a bridge config issue (did you install libvirt by any chance?), or that you are trying to use a routed setup which the DC perhaps does not support.

    Beyond this I don't really know what to suggest; without direct access it is hard to guess, as you have not given any troubleshooting steps you have gone through yourself apart from asking Virtualizor and then providing speedtest results from two completely different servers.

  • AlbaHost Member, Host Rep

    @AnthonySmith said:
    Beyond this I don't really know what to suggest; without direct access it is hard to guess, as you have not given any troubleshooting steps you have gone through yourself apart from asking Virtualizor and then providing speedtest results from two completely different servers.

    Thank you for your help. We installed Virtualizor, which installs all the Xen components by itself, so nothing was installed manually or separately. Regarding the wget test file:

    OVH:

    2015-05-22 21:17:21 (72.1 MB/s) - test100.zip saved [104874307/104874307]

    Albania:

    2015-05-22 17:18:36 (4.10 MB/s) - test100.zip saved [104874307/104874307]

    We never had problems with this earlier, at least not on the OVH node, which we have had for more than 2-3 years...

  • AnthonySmith Member, Patron Provider

    Yes, it may install Xen for you; however, Virtualizor do not maintain Xen, nor are they responsible for it.

    Anyway, you seem to have responded to only one small part of my whole suggestion. I am trying to help you, but you are making it hard to do so when you only provide 1/10th of the requested information.

  • AlbaHost Member, Host Rep
    edited May 2015

    @AnthonySmith said:
    Yes, it may install Xen for you; however, Virtualizor do not maintain Xen, nor are they responsible for it.

    Anyway, you seem to have responded to only one small part of my whole suggestion. I am trying to help you, but you are making it hard to do so when you only provide 1/10th of the requested information.

    Apologies, here are the mtr results:

    OVH:

    HOST: testvps2                    Loss%   Snt   Last   Avg  Best  Wrst StDev
      1. 37.187.161.253               0.0%    10    2.1   1.0   0.7   2.1   0.5
      2. sbg-g2-a9.fr.eu              0.0%    10    0.7   0.8   0.6   1.0   0.1
      3. gsw-g1-a9.fr.eu              0.0%    10   11.2  11.4  11.1  12.5   0.6
      4. ???                         100.0    10    0.0   0.0   0.0   0.0   0.0
      5. mediactive-network.franceix. 0.0%    10   10.7  10.6  10.5  10.8   0.1
      6. cdnf02.cdn.mediactive-networ 0.0%    10   10.5  10.5  10.4  10.6   0.0

    Albania:

    HOST: testvps                     Loss%   Snt   Last   Avg  Best  Wrst StDev
      1. 31.171.155.1                 0.0%    10    0.2   0.2   0.1   0.2   0.0
      2. 185.18.40.137                0.0%    10    0.4   1.1   0.3   3.6   1.2
      3. 213.163.120.9                0.0%    10    5.1   5.6   5.0   9.9   1.5
      4. r1fra2.core.init7.net        0.0%    10   36.1  39.6  36.1  48.4   3.7
      5. r1ams2.core.init7.net        0.0%    10   51.2  53.2  46.8  57.3   3.1
      6. neotelecoms.eunetworks.nl-ix 0.0%    10   47.1  47.0  47.0  47.1   0.0
      7. ae1.tcr1.tc2.ams.core.as8218 0.0%    10   47.0  53.0  46.9  86.7  13.5
      8. ae0.tcr2.rb.par.core.as8218. 0.0%    10   61.4  61.8  61.3  65.4   1.2
      9. et-8-0-0.tcr2.th2.par.core.a 0.0%    10   53.7  53.7  53.6  53.9   0.1
     10. mediactive-gw5.tcr2.th2.par. 0.0%    10   53.6  53.6  53.5  53.7   0.0
     11. cdnf02.cdn.mediactive-networ 0.0%    10   53.5  53.5  53.5  53.6   0.0

    SoftLayer Amsterdam:

    OVH:

    2015-05-22 21:37:23 (92.8 MB/s) - /dev/null saved [524288000/524288000]

    Albania:

    2015-05-22 17:39:32 (5.42 MB/s) - /dev/null saved [524288000/524288000]

  • AnthonySmith Member, Patron Provider

    And the rest please.

  • AnthonySmith Member, Patron Provider
    edited May 2015

    also open /etc/sysctl.conf

    add:

    net.core.rmem_max=16777216
    net.core.wmem_max=16777216
    net.ipv4.tcp_rmem=4096 87380 16777216
    net.ipv4.tcp_wmem=4096 65536 16777216
    

    and run: sysctl -p

    then try the speed test again.
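
    You can confirm the new values took effect with:

    sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem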

  • AlbaHost Member, Host Rep

    @AnthonySmith said:
    also open /etc/sysctl.conf

    add:

    > net.core.rmem_max=16777216
    > net.core.wmem_max=16777216
    > net.ipv4.tcp_rmem=4096 87380 16777216
    > net.ipv4.tcp_wmem=4096 65536 16777216
    > 

    and run: sysctl -p

    then try the speed test again.

    Thank you, same results as before...

  • AnthonySmith Member, Patron Provider

    bridged or routed setup?

    Try installing an httpd on the VPS and then wget the test file from the VPS onto the host node; that should give an indication as to the source of the issue.
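
    A rough sketch of that test (the VPS IP 10.0.0.2 is a placeholder, python 2 assumed):

    # on the VPS: create a 100 MB test file and serve it over HTTP
    dd if=/dev/zero of=test100.bin bs=1M count=100
    python -m SimpleHTTPServer 8080
    # on the host node: pull it straight from the VPS
    wget -O /dev/null http://10.0.0.2:8080/test100.bin

    That keeps the transfer on the node-to-guest path, so it isolates the virtual network from the upstream network.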

  • Shoaib_A Member
    edited May 2015

    @AnthonySmith said:
    bridged or routed setup?

    His nodes are at OVH so it must be a routed setup.

  • clamhost Member

    @fametel said:

    You still use a bridged setup for Xen at OVH; the same for HVM and PV.

  • Shoaib_A Member
    edited May 2015

    @clamhost said:
    You still use a bridged setup for Xen at OVH; the same for HVM and PV.

    I think you have to use a routed configuration for Xen on OVH with Virtualizor, which is what the OP uses as well.

    http://www.virtualizor.com/wiki/Setup_OVH
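
    For reference, the difference shows up in the guest's vif line; roughly (MAC and IP are placeholders):

    # bridged: the guest interface is attached to a bridge in dom0
    vif = [ 'mac=00:16:3e:xx:xx:xx, bridge=xenbr0' ]
    # routed: dom0 routes for the guest instead (vif-route needs the ip= parameter)
    vif = [ 'mac=00:16:3e:xx:xx:xx, script=vif-route, ip=37.187.x.x' ]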

  • AlbaHost Member, Host Rep

    @fametel said:

    That's correct.
