
Crazy Deals on Storage, SSD and hosting with SPICY GIVEAWAY This Black Friday from ServaRICA


Comments

  • Falzo Member
    edited November 2020

    @sportingdan said:

    I went ahead and opened a ticket about it and they have been very forthcoming, offering to reprovision on another node or refund. they also expect things to get better once most people have migrated and/or they can find and handle more abusive neighbours.

    because I also like that location and would rather keep it, I am also giving the 'benefit of the doubt', as you call it, and letting them reprovision.
    the new box might be on one of the upcoming stacks or whatever, so more likely not filled yet. however, the numbers are much better and constant, esp. from fio but also during transfer.

    it looks like they limit a single network connection to a max of ~50Mbit, as I achieve a constant ~6MB/s which is definitely not the filesystem, because a second (or third) transfer in parallel easily reaches the same ~6MB/s.

    I think that is a smart move to calm the situation down and balance performance across users. something I can easily live with if it stays that way - my use case is pushing in backups anyway ;-)
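
    (a quick way to check whether such a cap is per-connection rather than per-VM - just a sketch, assuming iperf3 on both ends; IPERF_HOST is a placeholder for any reachable iperf3 server:)

        # single TCP stream - if this tops out around ~50 Mbit/s...
        iperf3 -c IPERF_HOST -t 10
        # ...while four parallel streams (-P 4) each still reach roughly the
        # same rate, the limit is per-connection, not on the whole VM
        iperf3 -c IPERF_HOST -t 10 -P 4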

    Thanked by 1sportingdan
  • TimboJones Member
    edited November 2020

    @Falzo said:
    it looks like they limit a single network connection to a max of ~50Mbit

    No. This is across the country, topping out at gigabit rates. Oh wait, did you select the 100Mbps unlimited option instead of the gigabit one (4TB, then throttled to 10Mbps)?

       Speedtest by Ookla
    
         Server: Shaw Communications - Vancouver, BC (id = 4243)
            ISP: Rica Web Services
        Latency:    68.75 ms   (0.07 ms jitter)
       Download:   972.54 Mbps (data used: 1.2 GB)                               
         Upload:   973.01 Mbps (data used: 1.1 GB)                               
    Packet Loss: Not available.
     Result URL: https://www.speedtest.net/result/c/7338244f-2e49-4033-9148-1934475e1052
    [root@mon tmp]# speedtest --accept-license -s 3049
    
       Speedtest by Ookla
    
         Server: TELUS - Vancouver, BC (id = 3049)
            ISP: Rica Web Services
        Latency:    64.95 ms   (0.15 ms jitter)
       Download:   978.85 Mbps (data used: 1.5 GB)                               
         Upload:  1001.31 Mbps (data used: 1.3 GB)                               
    Packet Loss: Not available.
     Result URL: https://www.speedtest.net/result/c/d4f38ffc-6cca-475e-96b1-3ec6fb9f8c94
    [root@mon tmp]# speedtest --accept-license -s 10395
    
       Speedtest by Ookla
    
         Server: Speedtest.net - Seattle, WA (id = 10395)
            ISP: Rica Web Services
        Latency:    64.73 ms   (0.09 ms jitter)
       Download:   977.92 Mbps (data used: 1.4 GB)                               
         Upload:   976.23 Mbps (data used: 1.1 GB)                               
    Packet Loss:     0.0%
     Result URL: https://www.speedtest.net/result/c/7546425e-0164-476a-9c1a-9399de651c35
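
    (for anyone reproducing this: the Ookla CLI can list nearby servers first and then pin one by id, as above - a sketch, assuming the official speedtest CLI:)

        # list the closest servers, then test against a specific one by id
        speedtest --accept-license -L
        speedtest --accept-license -s 10395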
    
  • @sportingdan said:
    when and where did you see me asking for SSD speeds?

    Because you were being unreasonable. These benchmarks are like a slap to SSDs and a punch to HDDs on a SAN.

  • FalzoFalzo Member
    edited November 2020

    @TimboJones said:

    @Falzo said:
    it looks like they limit a single network connection to a max of ~50Mbit

    No. This is across the country, topping out at gigabit rates. Oh wait, did you select the 100Mbps unlimited option instead of the gigabit one (4TB, then throttled to 10Mbps)?

    hmm, I am on the 4TB @ Gbps plan and could easily reach the Gbps with iperf in parallel as well (checked before I wrote the post above).
    now that you mention it, I tried some nearby targets with iperf in single-connection mode as well, and you are right, it easily goes up for those too...

    still, for real-world transfers I mounted a storage box via sshfs, and when copying actual data (512MB borg backup chunks) from it I achieve the mentioned ~6 MB/s per thread I have running...
    I did not want to overdo it, but I at least tried and could easily scale to 4x 6MB/s by running transfers in parallel. so there simply has to be some artificial limit per process somewhere. if it's not the network, maybe it's per write thread to the SAN or whatever 🤷‍♂️

    again, that's NO complaint from my side; rather, I think it's exactly the right approach to try and balance usage - it's a shared server after all.
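
    (for reference, a minimal sketch of that kind of parallel-transfer test - hostname, paths and chunk names are hypothetical; assumes sshfs and pv are installed:)

        sshfs user@storagebox:/backups /mnt/remote
        # copy four chunks in parallel; pv prints each stream's rate,
        # so a per-process cap shows up as ~6 MB/s on every line
        for f in chunk1 chunk2 chunk3 chunk4; do
            pv /mnt/remote/$f > /tmp/$f &
        done
        wait
        fusermount -u /mnt/remote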

    Thanked by 1TimboJones
  • even though I have renewed last year's plan,

    I wish @servarica_hani had offered the mouse plan this year too.

    many ppl would have benefited from it while saving IPv4s

    Thanked by 1Shot2
  • @Falzo said:

    @TimboJones said:

    @Falzo said:
    it looks like they limit a single network connection to a max of ~50Mbit

    No. This is across the country, topping out at gigabit rates. Oh wait, did you select the 100Mbps unlimited option instead of the gigabit one (4TB, then throttled to 10Mbps)?

    hmm, I am on the 4TB @ Gbps plan and could easily reach the Gbps with iperf in parallel as well (checked before I wrote the post above).
    now that you mention it, I tried some nearby targets with iperf in single-connection mode as well, and you are right, it easily goes up for those too...

    still, for real-world transfers I mounted a storage box via sshfs, and when copying actual data (512MB borg backup chunks) from it I achieve the mentioned ~6 MB/s per thread I have running...
    I did not want to overdo it, but I at least tried and could easily scale to 4x 6MB/s by running transfers in parallel. so there simply has to be some artificial limit per process somewhere. if it's not the network, maybe it's per write thread to the SAN or whatever 🤷‍♂️

    again, that's NO complaint from my side; rather, I think it's exactly the right approach to try and balance usage - it's a shared server after all.

    Meanwhile, I tried to install sshfs

    choco@POLAR:~$ time (sudo apt install sshfs -y)
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      fuse
    The following NEW packages will be installed:
      fuse sshfs
    0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
    Need to get 118 kB of archives.
    After this operation, 268 kB of additional disk space will be used.
    Get:1 http://ftp.ca.debian.org/debian buster/main amd64 fuse amd64 2.9.9-1+deb10u1 [72.3 kB]
    Get:2 http://ftp.ca.debian.org/debian buster/main amd64 sshfs amd64 2.10+repack-2 [45.4 kB]
    Fetched 118 kB in 0s (1,390 kB/s) 
    Selecting previously unselected package fuse.
    (Reading database ... 61114 files and directories currently installed.)
    Preparing to unpack .../fuse_2.9.9-1+deb10u1_amd64.deb ...
    Unpacking fuse (2.9.9-1+deb10u1) ...
    Selecting previously unselected package sshfs.
    Preparing to unpack .../sshfs_2.10+repack-2_amd64.deb ...
    Unpacking sshfs (2.10+repack-2) ...
    Setting up fuse (2.9.9-1+deb10u1) ...
    update-initramfs: deferring update (trigger activated)
    Setting up sshfs (2.10+repack-2) ...
    Processing triggers for man-db (2.8.5-2) ...
    Processing triggers for initramfs-tools (0.133+deb10u1) ...
    update-initramfs: Generating /boot/initrd.img-4.19.0-12-amd64
    
    real    4m0.333s
    user    0m14.784s
    sys     0m21.595s
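
    (to separate slow disk from slow mirrors in a case like this, a quick direct-I/O check - a sketch; oflag=direct/iflag=direct bypass the guest page cache:)

        # sequential write, bypassing the guest page cache
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 oflag=direct status=progress
        # read it back, also uncached in the guest
        dd if=/tmp/ddtest of=/dev/null bs=1M iflag=direct status=progress
        rm /tmp/ddtest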
    
  • @chocolateshirt said:

        real    4m0.333s
        user    0m14.784s
        sys     0m21.595s

    that's how it looked for me before as well. fio numbers had to be considered 'optimistic', but I think that was due to ZFS caching layers (see the fio sketch below).

    maybe open a ticket and ask to be reprovisioned on another node? possibly it's just that one node and its zpool acting up or whatever.

    I guess ideally @servarica_hani puts all the torrenters and streamers/re-encoders on one node and the real storage users on another 🤷‍♂️😂
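
    (for what it's worth, a fio invocation that is less flattered by caching - a sketch: --direct=1 skips the guest page cache, and a --size larger than the VM's RAM plus --end_fsync=1 keeps host-side ZFS ARC/write caching from inflating the numbers as much:)

        fio --name=seqwrite --rw=write --bs=1M --size=4G \
            --direct=1 --numjobs=1 --end_fsync=1 --filename=/tmp/fiotest
        rm /tmp/fiotest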

  • my VPS was re-provisioned on another node, and the difference from the previous one is like night and day.

  • edoarudo5 Member
    edited November 2020

    So far, my best purchase this BF. Great support (wasn't expecting this to be better than the other premium provider I bought from) and the VM works fine.

  • ahh why did i miss this :(

    Wow. You guys broke ZFS.

    Maybe it would have been best if staff had promised a 7-day deploy window, so that they could roll out VMs once every 30-60 minutes.

    Simple solution to a human-vice problem (benchmarking).

  • servarica_hani Member, Patron Provider

    Sorry for the delay in answering here.
    I was working fully on the new rack.

    As for the performance: what happened is a storm of users filling and testing their VMs to the max, which we have never seen before.

    The difference between this time and last time is that:
    1- We got, in less than 12 hours, the same number of users we usually get in the 2 to 4 weeks after posting an offer.

    2- All past offers had only a 100mbps network, which made filling the VPS slower and helped in the first few days.

    To fix the issue we are currently moving some users to different nodes and we are suspending extreme abusers.

    As I said, the performance should be back in a few days, and just in case we are extending the refund window by another week, so you have 2 weeks to ask for a refund if you don't like the performance.

    To fix this issue we have a plan for future offers, but unfortunately it will not work for the restock in less than 2 weeks (too late to implement it).

  • @servarica_hani thanks for the update :)

  • @servarica_hani said:

    To fix the issue we are currently moving some users to different nodes and we are suspending extreme abusers.

    What is an extreme abuser in this case? If a person starts using the service, uses that unlimited 100mbps, and fills the disk, are they considered an abuser?

  • servarica_hani Member, Patron Provider

    @default said:

    @servarica_hani said:

    To fix the issue we are currently moving some users to different nodes and we are suspending extreme abusers.

    What is an extreme abuser in this case? If a person starts using the service, uses that unlimited 100mbps, and fills the disk, are they considered an abuser?

    No.
    Currently the only case we consider extreme is people who torrent non-stop and do other P2P file sharing.

    Thanked by 1user123
  • user123 Member
    edited November 2020

    @servarica_hani said:

    @default said:

    @servarica_hani said:

    To fix the issue we are currently moving some users to different nodes and we are suspending extreme abusers.

    What is an extreme abuser in this case? If a person starts using the service, uses that unlimited 100mbps, and fills the disk, are they considered an abuser?

    No.
    Currently the only case we consider extreme is people who torrent non-stop and do other P2P file sharing.

    What if they're downloading really good pr0n? /s

    ETA: and are willing to share it lol

    Thanked by 1default
  • Installed DA on mine; using it as a dedicated FTP server for my automated backups and hosting a parking page for a single domain name. 40+ MB/s transferring between servers exceeds my requirements by a lot.

  • @edoarudo5 said:
    Installed DA on mine; using it as a dedicated FTP server for my automated backups and hosting a parking page for a single domain name. 40+ MB/s transferring between servers exceeds my requirements by a lot.

    Why is @edoarudo5 banned?

  • @timelapse said:

    @edoarudo5 said:
    Installed DA on mine; using it as a dedicated FTP server for my automated backups and hosting a parking page for a single domain name. 40+ MB/s transferring between servers exceeds my requirements by a lot.

    Why is @edoarudo5 banned?

    I'm not banned.

    Thanked by 2ilke Chronic
  • @timelapse said:

    @edoarudo5 said:
    Installed DA on mine; using it as a dedicated FTP server for my automated backups and hosting a parking page for a single domain name. 40+ MB/s transferring between servers exceeds my requirements by a lot.

    Why is @edoarudo5 banned?

    he is not, yet. just childish.

  • I hope I don't get banned just for changing my avatar.

  • @edoarudo5 said:

    @timelapse said:

    @edoarudo5 said:
    Installed DA on mine; using it as a dedicated FTP server for my automated backups and hosting a parking page for a single domain name. 40+ MB/s transferring between servers exceeds my requirements by a lot.

    Why is @edoarudo5 banned?

    I'm not banned.

    Haha damn you got me

    Thanked by 1vimalware
  • Mahfuz_SS_EHL Host Rep, Veteran

    Maybe I was lucky; I faced neither the IO issue nor the network one. IO remained at 130-140 MB/s & network at 850+ Mbps.

  • ever thought about your neighbours, who wish to use their portion of it as well?

    Thanked by 1skorous
  • io seems much better than yesterday.

    Thanked by 1Falzo
  • @cece said:
    io seems much better than yesterday.

    can confirm, it's increased and stable - very happy!

  • I have not yet resumed my rclone transfer...

  • Read/write speeds are better since yesterday. Will probably wait 1-2 more days before using it, hopefully after everything settles down 😊

  • @edoarudo5 said:
    I hope I don't get banned just for changing my avatar.

    Maybe a warning. It will make people think you're a dick, in case you care.

    Thanked by 1Falzo
  • Got in a little late. I put my name in the form for new stock. I assume we will still get it at the BF price?
