Quick review of the FlowVPS BF 2019 VPS

Christmas Day finally came today: the FlowVPS super BF 2019 preorder special has been provisioned. For those not aware or in need of a reminder, here's the deal:

4 vCPU
4GB RAM
15GB NVMe - Primary
100GB SSD - Secondary (Not bootable)
1.5TB Data
1 IPv4

Some extremely lucky people got this for $15 AUD/qtr, with various discounted tiers below the regular $45 AUD/qtr price (or $120 AUD/year), which you can still get here: https://billing.flowvps.com/cart.php?a=add&pid=28&billingcycle=annually

$10 AUD per month for those specs is insane (that's about $7 USD), and considering it's in Melbourne, Australia, on an extremely nice network and in a location that suits Australians, this is a steal. Based on those facts alone, this is already exothermic potassium in my books.

On to the initial benchmark:

---------------------------------------------------------------------------
 Region: Global  https://bench.monster v.1.4.9 2019-12-24 
 Usage : curl -LsO bench.monster/speedtest.sh; bash speedtest.sh -Global


---------------------------------------------------------------------------
 OS           : Debian GNU/Linux 9 (64 Bit)
 Virt/Kernel  : KVM / 4.9.0-6-amd64
 CPU Model    : QEMU Virtual CPU version 2.5+
 CPU Cores    : 4 @ 2599.998 MHz x86_64 16384 KB Cache
 CPU Flags    : AES-NI Disabled & VM-x/AMD-V Disabled
 Load Average : 0.14, 0.05, 0.01
 Total Space  : 113G (1020M ~1% used)
 Total RAM    : 3955 MB (46 MB + 304 MB Buff in use)
 Total SWAP   : 255 MB (0 MB in use)
 Uptime       : 0 days 0:13
---------------------------------------------------------------------------
 ASN & ISP    : AS136557, Host Universal Pty Ltd
 Organization : FlowVPS
 Location     : Melbourne, Australia / AU
 Region       : Victoria
---------------------------------------------------------------------------

 Performing Geekbench v4 CPU Benchmark test. Please wait...
 ## Geekbench v4 CPU Benchmark:

  Single Core : 2375  (GOOD)
   Multi Core : 8623

 ## IO Test

 CPU Speed:
    bzip2     :  79.5 MB/s
   sha256     : 150 MB/s
   md5sum     : 397 MB/s

 RAM Speed:
   Avg. write : 1464.4 MB/s
   Avg. read  : 3857.1 MB/s

 Disk Speed:
   1st run    : 445 MB/s
   2nd run    : 436 MB/s
   3rd run    : 368 MB/s
   -----------------------
   Average    : 416.3 MB/s

 ## Global Speedtest

 Location                       Upload           Download         Ping   
---------------------------------------------------------------------------
 Speedtest.net                  909.58 Mbit/s    936.92 Mbit/s    1.764 ms
 USA, New York (AT&T)           22.32 Mbit/s     16.03 Mbit/s    211.567 ms
 USA, Chicago (Windstream)      30.53 Mbit/s     99.00 Mbit/s    235.874 ms
 USA, Dallas (Frontier)         47.90 Mbit/s     87.48 Mbit/s    234.247 ms
 USA, Miami (Frontier)          67.58 Mbit/s     113.34 Mbit/s   209.820 ms
 USA, Los Angeles (Spectrum)    67.37 Mbit/s     134.28 Mbit/s   228.332 ms
 UK, London (Community Fibre)   11.81 Mbit/s     69.93 Mbit/s    288.074 ms
 France, Lyon (SFR)             18.50 Mbit/s     45.81 Mbit/s    298.372 ms
 Germany, Berlin (DNS:NET)      19.98 Mbit/s     49.83 Mbit/s    293.129 ms
 Spain, Madrid (MasMovil)       14.36 Mbit/s     49.04 Mbit/s    288.049 ms
 Italy, Rome (Unidata)          18.68 Mbit/s     49.62 Mbit/s    275.746 ms
 Russia, Moscow (MTS)           17.13 Mbit/s     42.69 Mbit/s    330.809 ms
 Israel, Haifa (013Netvision)   15.31 Mbit/s     35.96 Mbit/s    352.838 ms
 India, New Delhi (GIGATEL)     111.37 Mbit/s    300.88 Mbit/s   162.172 ms
 Singapore (FirstMedia)         180.25 Mbit/s    164.40 Mbit/s    82.545 ms
 Japan, Tsukuba (SoftEther)     22.34 Mbit/s     64.39 Mbit/s    244.249 ms
 Australia, Sydney (Yes Optus)  683.76 Mbit/s    797.53 Mbit/s    12.575 ms
 RSA, Randburg (Cool Ideas)     8.16 Mbit/s      12.11 Mbit/s    451.473 ms
 Brazil, Sao Paulo (Criare)     12.64 Mbit/s     28.84 Mbit/s    328.557 ms
---------------------------------------------------------------------------

 Finished in : 12 min 10 sec
 Timestamp   : 2019-12-31 07:01:08 GMT
 Saved in    : /root/speedtest.log

 Share results:
 - http://www.speedtest.net/result/8902506887.png
 - https://browser.geekbench.com/v4/cpu/15089465
 - https://clbin.com/RkoAD

Sweeeeeet! I love the network. Anyone in Australia understands how shitty overseas connections can be, but this is a very decent indicator of the best speeds possible to the tested locations. Singapore in particular is excellent; I think there must be some very premium routing to get that through Perth instead of the long way around.

If you're in SGP, this is an amazing place to locate your servers.

I'm going to use this as a small Plex VPS, hooked up to an unlimited Google Drive account, with rclone cache set up on the secondary drive. The goal is a fine-tuned Plex instance that my family can use, while being mindful of the shared resources (CPU and network).

The first thing you'll need to do after logging into your VPS is set up the secondary disk. Use your favourite tool for that; I just used fdisk and mkfs.ext4 plus an entry in fstab. If that went over your head, let me know and I'll follow up in this thread with a fuller step-by-step.
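
In the meantime, here's a minimal sketch of what I mean. It assumes the secondary disk shows up as /dev/vdb (it does on mine) and that you want it mounted at /vdb; check lsblk first, because partitioning the wrong disk is destructive.

lsblk                             # confirm which device is the 100GB SSD
fdisk /dev/vdb                    # n (new partition), accept the defaults, then w (write)
mkfs.ext4 /dev/vdb1               # create the filesystem
mkdir /vdb                        # mount point
echo '/dev/vdb1 /vdb ext4 defaults,nofail 0 2' >> /etc/fstab
mount -a                          # mounts everything listed in fstab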

I do plan to document my usage of this VPS, since it's quite unique in its location, network, premium provider and insane price, and hopefully someone can benefit from my experiences.

If anyone can find 4 vCPU/4GB RAM/115GB of SSD and 1.5TB of data in Melbourne for $10 AUD, I'll eat my shorts. Even $20 would be pushing it. A bargain at twice the price.

I suggest anyone who missed this, or who is savvy enough to recognize a good deal, buy this while you still can. The provider is premium, the network is premium, the offer is premium and the price is insanely cheap. Worst case, if I give up on Plex, I'll still keep it idling forever ;)


Comments

  • I too use FlowVPS and have grown to be friends with trewq over the past year or so. The support you receive is personal; he cares if something is wrong and will try to fix it for you.

    If you need a good AU VPS at a good price, this is your place.

  • @dahartigan said:
    Christmas Day finally came today: the FlowVPS super BF 2019 preorder special has been provisioned. For those not aware or in need of a reminder, here's the deal:

    4 vCPU
    4GB RAM
    15GB NVMe - Primary
    100GB SSD - Secondary (Not bootable)
    1.5TB Data
    1 IPv4
    

    Some extremely lucky people got this for $15 AUD/qtr

    Yup, I'm so lucky.

    @dahartigan said:

    $10 AUD per month for those specs is insane (that's about $7 USD), and considering it's in Melbourne, Australia, on an extremely nice network and in a location that suits Australians, this is a steal. Based on those facts alone, this is already exothermic potassium in my books.
    The provider is premium, the network is premium, the offer is premium and the price is insanely cheap.

    Absolutely true!


  • @trewq is sooooo friendly, his support is top notch! I submitted a ticket asking for his recommendation and set it to low priority because I know it's the New Year holiday and everyone is busy atm. BUT, I got my ticket solved in a few minutes!

    VPS is running smoothly, network is prem, support is top notch!

    I will set up a Plex server on this VPS (never used Plex before). @dahartigan do you have any ideas on using a OneDrive 1TB account with Plex, since I don't have unlimited Gdrive? :D


  • @sonic said:
    @trewq is sooooo friendly, his support is top notch! I submitted a ticket asking for his recommendation and set it to low priority because I know it's the New Year holiday and everyone is busy atm. BUT, I got my ticket solved in a few minutes!

    VPS is running smoothly, network is prem, support is top notch!

    I will set up a Plex server on this VPS (never used Plex before). @dahartigan do you have any ideas on using a OneDrive 1TB account with Plex, since I don't have unlimited Gdrive? :D

    If rclone supports OneDrive then it should work :-) It basically mounts cloud storage as a directory, for example /mnt/onedrive, and then you point Plex to that "folder".
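
    To sketch it, assuming a remote already configured in rclone and named "onedrive" (the remote name and mount point here are just examples):

    rclone config                          # interactive: add a new remote of the onedrive type
    mkdir -p /mnt/onedrive                 # local mount point
    rclone mount onedrive: /mnt/onedrive --daemon --vfs-cache-mode writes

    The --vfs-cache-mode writes flag caches writes locally first, which keeps apps that expect normal file semantics happy.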

    100% agreed with everyone about the support; he really does work hard to deliver great customer service with a personal touch.


  • Wow! Nice network for Asia.

  • @dahartigan said:

    @sonic said:
    @trewq is sooooo friendly, his support is top notch! I submitted a ticket asking for his recommendation and set it to low priority because I know it's the New Year holiday and everyone is busy atm. BUT, I got my ticket solved in a few minutes!

    VPS is running smoothly, network is prem, support is top notch!

    I will set up a Plex server on this VPS (never used Plex before). @dahartigan do you have any ideas on using a OneDrive 1TB account with Plex, since I don't have unlimited Gdrive? :D

    If rclone supports OneDrive then it should work :-) It basically mounts cloud storage as a directory, for example /mnt/onedrive, and then you point Plex to that "folder".

    100% agreed with everyone about the support; he really does work hard to deliver great customer service with a personal touch.

    It looks like it's supported: https://rclone.org/onedrive/
    Do you have a guide to set up and optimize a Plex server, or is it just apt install and mount?


  • That was quick! Planning to set up my VPS tonight.

  • @sonic said:

    @dahartigan said:

    @sonic said:
    @trewq is sooooo friendly, his support is top notch! I submitted a ticket asking for his recommendation and set it to low priority because I know it's the New Year holiday and everyone is busy atm. BUT, I got my ticket solved in a few minutes!

    VPS is running smoothly, network is prem, support is top notch!

    I will set up a Plex server on this VPS (never used Plex before). @dahartigan do you have any ideas on using a OneDrive 1TB account with Plex, since I don't have unlimited Gdrive? :D

    If rclone supports OneDrive then it should work :-) It basically mounts cloud storage as a directory, for example /mnt/onedrive, and then you point Plex to that "folder".

    100% agreed with everyone about the support; he really does work hard to deliver great customer service with a personal touch.

    It looks like it's supported: https://rclone.org/onedrive/
    Do you have a guide to set up and optimize a Plex server, or is it just apt install and mount?

    My advice would be to install rclone first, then set up OneDrive with it. Once you're comfortable with that, install Plex, and after that combine them. Baby steps :-)

    I don't have a guide ready to go, but it's something I will write up and post here when I get the chance.

    As for optimizing Plex, the biggest impact comes from transcoding: if your clients can direct play, you can in most cases force it. If transcoding is needed (which is normal), limiting the source files to lower bitrates/qualities can help; after that it's fine-tuning.

    If you can get rclone set up and install Plex, the rest will come as needed. I honestly hope that helps you :-)
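
    For the Plex install itself, it's roughly this on Debian (using Plex's own repo; double-check their docs in case the key or repo line has changed):

    curl https://downloads.plex.tv/plex-keys/PlexSign.key | apt-key add -
    echo deb https://downloads.plex.tv/repo/deb public main > /etc/apt/sources.list.d/plexmediaserver.list
    apt update && apt install -y plexmediaserver

    Once it's running, the first-time setup is at http://your-server-ip:32400/web.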


  • Jord Moderator, Provider

    @trewq knows how to deploy some BANGING HARDWARE. I will one day get one of his prem VPSes. But he is PREM.


  • trewq Administrator, Moderator, Provider

    Thanks @dahartigan and everyone else for your kind words. Happy New Year!

  • Here's another bench with the CPU flags passed through. It also reveals that the processor is an E5-2630 v2, which is actually a really decent processor. A little birdie tells me the node has 2 of these in it ;)

    ---------------------------------------------------------------------------
     OS           : Debian GNU/Linux 9 (64 Bit)
     Virt/Kernel  : KVM / 4.9.0-6-amd64
     CPU Model    : Intel(R) Xeon(R) CPU E5-2630 v2 @ 2.60GHz
     CPU Cores    : 4 @ 2599.998 MHz x86_64 16384 KB Cache
     CPU Flags    : AES-NI Enabled & VM-x/AMD-V Disabled
     Load Average : 0.04, 0.02, 0.00
     Total Space  : 113G (1021M ~1% used)
     Total RAM    : 3955 MB (48 MB + 102 MB Buff in use)
     Total SWAP   : 255 MB (0 MB in use)
     Uptime       : 0 days 0:1
    ---------------------------------------------------------------------------
     ASN & ISP    : AS136557, Host Universal Pty Ltd
     Organization : FlowVPS
     Location     : Melbourne, Australia / AU
     Region       : Victoria
    ---------------------------------------------------------------------------
    
     ## Geekbench v4 CPU Benchmark:
    
      Single Core : 2643  (GOOD)
       Multi Core : 9348
    
     ## IO Test
    
     CPU Speed:
        bzip2     :  81.9 MB/s
       sha256     : 158 MB/s
       md5sum     : 379 MB/s
    
     RAM Speed:
       Avg. write : 1718.3 MB/s
       Avg. read  : 4096.0 MB/s
    
     Disk Speed:
       1st run    : 491 MB/s
       2nd run    : 471 MB/s
       3rd run    : 528 MB/s
       -----------------------
       Average    : 496.7 MB/s
    
     ## Global Speedtest
    
     Location                       Upload           Download         Ping   
    ---------------------------------------------------------------------------
     Speedtest.net                  927.05 Mbit/s    938.09 Mbit/s    1.366 ms
     USA, New York (AT&T)           20.74 Mbit/s     17.50 Mbit/s    211.451 ms
     USA, Chicago (Windstream)      44.46 Mbit/s     74.93 Mbit/s    235.227 ms
     USA, Dallas (Frontier)         61.44 Mbit/s     119.89 Mbit/s   234.241 ms
     USA, Miami (Frontier)          67.49 Mbit/s     144.93 Mbit/s   209.714 ms
     USA, Los Angeles (Spectrum)    66.60 Mbit/s     134.65 Mbit/s   228.318 ms
     UK, London (Community Fibre)   20.59 Mbit/s     71.97 Mbit/s    288.050 ms
     France, Lyon (SFR)             18.50 Mbit/s     44.95 Mbit/s    297.711 ms
     Germany, Berlin (DNS:NET)      17.62 Mbit/s     53.26 Mbit/s    293.224 ms
     Spain, Madrid (MasMovil)       15.09 Mbit/s     46.51 Mbit/s    288.005 ms
     Italy, Rome (Unidata)          18.75 Mbit/s     53.88 Mbit/s    275.713 ms
     Russia, Moscow (MTS)           18.34 Mbit/s     28.35 Mbit/s    330.843 ms
     Israel, Haifa (013Netvision)   14.48 Mbit/s     41.08 Mbit/s    356.295 ms
     India, New Delhi (GIGATEL)     109.67 Mbit/s    324.86 Mbit/s   161.912 ms
     Singapore (FirstMedia)         177.99 Mbit/s    172.28 Mbit/s    82.123 ms
     Japan, Tsukuba (SoftEther)     9.80 Mbit/s      57.45 Mbit/s    222.817 ms
     Australia, Sydney (Yes Optus)  683.12 Mbit/s    814.43 Mbit/s    12.367 ms
     RSA, Randburg (Cool Ideas)     11.70 Mbit/s     15.15 Mbit/s    451.207 ms
     Brazil, Sao Paulo (Criare)     17.48 Mbit/s     49.11 Mbit/s    328.582 ms
    ---------------------------------------------------------------------------
    
     Finished in : 11 min 47 sec
     Timestamp   : 2020-01-01 01:45:18 GMT
     Saved in    : /root/speedtest.log
    
     Share results:
     - http://www.speedtest.net/result/8904543778.png
     - https://browser.geekbench.com/v4/cpu/15091639
     - https://clbin.com/ZDcXb
    

    :)


  • I've transferred my Plex over to see how it performs. So far I'm seeing impressive results :-) The connection to Google Drive is actually faster than what I can get in LA from a different provider, which is a delightful surprise.

    Lately I've been on an AMD kick, but these E5 processors pack some serious potassium and are very strong on the plow.

    The network impresses me a lot more than I was expecting, but that makes sense because trewq runs his own network and fine-tunes his peering proactively. Probably why Gdrive is so fast...

    If the deal is still active and you don't have one, I urge you to snap one up before the regret kicks in :-)


  • @dahartigan said:
    I've transferred my Plex over to see how it performs. So far I'm seeing impressive results :-) The connection to Google Drive is actually faster than what I can get in LA from a different provider, which is a delightful surprise.

    Lately I've been on an AMD kick, but these E5 processors pack some serious potassium and are very strong on the plow.

    The network impresses me a lot more than I was expecting, but that makes sense because trewq runs his own network and fine-tunes his peering proactively. Probably why Gdrive is so fast...

    If the deal is still active and you don't have one, I urge you to snap one up before the regret kicks in :-)

    Looks like a premium APAC location.


  • Waiting for 15 deal


  • sonic Member

    @cybertech said:
    Waiting for 15 deal

    Don't wait! It won't come back!


  • @sonic said:

    @cybertech said:
    Waiting for 15 deal

    Don't wait! It won't come back!

    Ok then I won't wait


  • @cybertech said:
    Waiting for 15 deal

    I'd love to see it come back, but I think we'll probably see flying cars and IPv6 in CC before that happens :)

    @poisson said:

    @dahartigan said:
    I've transferred my Plex over to see how it performs. So far I'm seeing impressive results :-) The connection to Google Drive is actually faster than what I can get in LA from a different provider, which is a delightful surprise.

    Lately I've been on an AMD kick, but these E5 processors pack some serious potassium and are very strong on the plow.

    The network impresses me a lot more than I was expecting, but that makes sense because trewq runs his own network and fine-tunes his peering proactively. Probably why Gdrive is so fast...

    If the deal is still active and you don't have one, I urge you to snap one up before the regret kicks in :-)

    Looks like a premium APAC location.

    Super premium actually. I think most people outside of Australia wouldn't have heard of Melbourne, as it's not as famous as Sydney, but its location makes it really well connected to all of Australia and the APAC region in general.

    @cybertech said:

    @sonic said:

    @cybertech said:
    Waiting for 15 deal

    Don't wait! It won't come back!

    Ok then I won't wait

    If you want a good connection to SGP on a powerful server, get this before it's too late. I found that the $45/qtr quarterly billing link is still valid if you don't want to go yearly: https://billing.flowvps.com/cart.php?a=add&pid=28&billingcycle=quarterly


  • vyas11 Member
    edited January 3

    @dahartigan said:

    @cybertech said:
    Waiting for 15 deal

    I'd love to see it come back, but I think we'll probably see flying cars and IPv6 in CC before that happens :)

    @poisson said:

    @dahartigan said:
    I've transferred my Plex over to see how it performs. So far I'm seeing impressive results :-) The connection to Google Drive is actually faster than what I can get in LA from a different provider, which is a delightful surprise.

    Lately I've been on an AMD kick, but these E5 processors pack some serious potassium and are very strong on the plow.

    The network impresses me a lot more than I was expecting, but that makes sense because trewq runs his own network and fine-tunes his peering proactively. Probably why Gdrive is so fast...

    If the deal is still active and you don't have one, I urge you to snap one up before the regret kicks in :-)

    Looks like a premium APAC location.

    Super premium actually. I think most people outside of Australia wouldn't have heard of Melbourne, as it's not as famous as Sydney, but its location makes it really well connected to all of Australia and the APAC region in general.

    @cybertech said:

    @sonic said:

    @cybertech said:
    Waiting for 15 deal

    Don't wait! It won't come back!

    Ok then I won't wait

    If you want a good connection to SGP on a powerful server, get this before it's too late. I found that the $45/qtr quarterly billing link is still valid if you don't want to go yearly: https://billing.flowvps.com/cart.php?a=add&pid=28&billingcycle=quarterly

    That's about 31.5 US dollars per quarter; you may want to check with @trewq about the 10 percent GST that gets added. If that must be paid even by non-Australian folks, it will push the price to nearly 12 US dollars a month. Still a good deal for the specs!
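
    (Rough arithmetic behind that, assuming an exchange rate of about 0.70 USD per AUD: 45 AUD/qtr × 1.10 GST ≈ 49.5 AUD/qtr ≈ 34.7 USD/qtr, or about 11.6 USD a month.)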

  • sonic Member

    This powerful box is so smooth.


  • bdl Member

    I also love mine - it gives me maj0r b0nar

  • Daniel15 Member
    edited January 4

    Can you please post a fio benchmark? I tried FlowVPS around a year ago and had an issue with very slow disk I/O - my nightly Borg backups were totally killing performance, and Debian package installs/updates were very very slow. dd didn't really show the issue as sequential reads/writes were okay-ish, but fio random read/writes did show it. I ended up moving to a different provider.

    Seems like it might be fixed on newer nodes, given your positive results. In that case I might try FlowVPS again.

  • @Daniel15 said:
    Can you please post a fio benchmark? I tried FlowVPS around a year ago and had an issue with very slow disk I/O - my nightly Borg backups were totally killing performance, and Debian package installs/updates were very very slow. dd didn't really show the issue as sequential reads/writes were okay-ish, but fio random read/writes did show it. I ended up moving to a different provider.

    Seems like it might be fixed on newer nodes, given your positive results. In that case I might try FlowVPS again.

    Will do, give me a little bit to organise that and I'll reply back here with the results.

    I definitely haven't noticed any sort of performance issues with the storage, so it's possible that it was either isolated or something that's no longer an issue.

    @seriesn said:
    With FlowVps, you go with the flow!!


  • dahartigan Member
    edited January 4

    @Daniel15 The first result below is from the NVMe drive and the second is from the secondary SSD.

    First run on NVMe

    # ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
    test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    test: Laying out IO file(s) (1 file(s) / 4096MB)
    Jobs: 1 (f=1): [m] [100.0% done] [190.2M/65910K /s] [48.7K/16.5K iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=23140: Sat Jan  4 11:29:36 2020
      read : io=3071.2MB, bw=207675KB/s, iops=51918 , runt= 15143msec
      write: io=1024.1MB, bw=69305KB/s, iops=17326 , runt= 15143msec
      cpu          : usr=18.14%, sys=78.08%, ctx=2552, majf=0, minf=4
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=786206/w=262370/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=3071.2MB, aggrb=207675KB/s, minb=207675KB/s, maxb=207675KB/s, mint=15143msec, maxt=15143msec
      WRITE: io=1024.1MB, aggrb=69304KB/s, minb=69304KB/s, maxb=69304KB/s, mint=15143msec, maxt=15143msec
    
    Disk stats (read/write):
      vda: ios=776662/259125, merge=0/25, ticks=191532/42552, in_queue=210176, util=97.80%
    

    Second run on SSD:

    # ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/vdb/test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
    test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    test: Laying out IO file(s) (1 file(s) / 4096MB)
    Jobs: 1 (f=1): [m] [100.0% done] [155.1M/53077K /s] [39.1K/13.3K iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=3673: Sat Jan  4 11:32:04 2020
      read : io=3073.8MB, bw=158103KB/s, iops=39525 , runt= 19908msec
      write: io=1022.3MB, bw=52581KB/s, iops=13145 , runt= 19908msec
      cpu          : usr=16.02%, sys=66.19%, ctx=19559, majf=0, minf=4
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=786878/w=261698/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=3073.8MB, aggrb=158102KB/s, minb=158102KB/s, maxb=158102KB/s, mint=19908msec, maxt=19908msec
      WRITE: io=1022.3MB, aggrb=52581KB/s, minb=52581KB/s, maxb=52581KB/s, mint=19908msec, maxt=19908msec
    
    Disk stats (read/write):
      vdb: ios=785179/261248, merge=0/60, ticks=582468/152592, in_queue=669316, util=96.49%
    

    Looks pretty performant to me. Perhaps I missed something, though, so if there's a particular fio command you'd like to see results for, let me know.


  • vyas11 Member
    edited January 4

    I PM'd @Daniel15 with my results - NVMe only - keep forgetting the SSD. :-)

    Edit: Posting the results
    fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=0 --size=256M --numjobs=8 --runtime=30 --group_reporting
    randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
    ...
    fio-3.12
    Starting 8 processes
    randwrite: Laying out IO file (1 file / 256MiB)
    randwrite: Laying out IO file (1 file / 256MiB)
    randwrite: Laying out IO file (1 file / 256MiB)
    randwrite: Laying out IO file (1 file / 256MiB)
    randwrite: Laying out IO file (1 file / 256MiB)
    randwrite: Laying out IO file (1 file / 256MiB)
    randwrite: Laying out IO file (1 file / 256MiB)
    randwrite: Laying out IO file (1 file / 256MiB)
    Jobs: 7 (f=7): [w(2),_(1),w(5)][88.9%][w=113MiB/s][w=28.0k IOPS][eta 00m:03s]
    randwrite: (groupid=0, jobs=8): err= 0: pid=818: Fri Jan 3 20:22:01 2020
    write: IOPS=21.5k, BW=83.9MiB/s (87.9MB/s)(2048MiB/24417msec); 0 zone resets
    slat (usec): min=10, max=205236, avg=347.00, stdev=4295.80
    clat (usec): min=3, max=40040, avg= 6.79, stdev=162.34
    lat (usec): min=15, max=205246, avg=357.08, stdev=4300.63
    clat percentiles (usec):
    | 1.00th=[ 4], 5.00th=[ 4], 10.00th=[ 4], 20.00th=[ 4],
    | 30.00th=[ 5], 40.00th=[ 5], 50.00th=[ 5], 60.00th=[ 5],
    | 70.00th=[ 5], 80.00th=[ 5], 90.00th=[ 6], 95.00th=[ 7],
    | 99.00th=[ 16], 99.50th=[ 23], 99.90th=[ 89], 99.95th=[ 188],
    | 99.99th=[11994]
    bw ( KiB/s): min= 2240, max=85660, per=12.27%, avg=10535.51, stdev=12365.05, samples=383
    iops : min= 560, max=21415, avg=2633.84, stdev=3091.27, samples=383
    lat (usec) : 4=25.25%, 10=72.61%, 20=1.58%, 50=0.38%, 100=0.09%
    lat (usec) : 250=0.05%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
    cpu : usr=2.90%, sys=7.92%, ctx=9484, majf=0, minf=83
    IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    issued rwts: total=0,524288,0,0 short=0,0,0,0 dropped=0,0,0,0
    latency : target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
    WRITE: bw=83.9MiB/s (87.9MB/s), 83.9MiB/s-83.9MiB/s (87.9MB/s-87.9MB/s), io=2048MiB (2147MB), run=24417-24417msec

    Disk stats (read/write):
    vda: ios=0/140000, merge=0/64648, ticks=0/98908, in_queue=99004, util=86.50%

  • dahartigan Member
    edited January 4

    @vyas11 said:
    I PM'd @Daniel15 with my results - NVMe only - keep forgetting the SSD. :-)

    Would you mind sharing them here too? I also wonder if we both just smashed the drives at the same time? lol. I only did the extra fio test on the SSD because I felt it would give a more complete picture of the situation.

    Having SSD as the secondary storage is actually premium; usually it's HDD (sometimes local, otherwise network storage).

    EDIT: I'm going to throw this out there: if anyone reading this has one of these deals but isn't using the SSD because you're not sure how to set it up, please let me know (you can PM if you like) and I'll post a step-by-step here on setting it up.


  • @dahartigan can you run two more fio tests with the following flags instead:

    --bs=64k
    --bs=256k

    Just to see how the drives perform when stress tested.


  • @poisson said:
    @dahartigan can you run two more fio tests with the following flags instead:

    --bs=64k
    --bs=256k

    Just to see how the drives perform when stress tested.

    Sure thing sir.

    NVMe 64k

    # ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 --bs=64k
    test: (g=0): rw=randrw, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    Jobs: 1 (f=1): [m] [100.0% done] [431.7M/146.9M /s] [6906 /2349  iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=7042: Sat Jan  4 11:44:06 2020
      read : io=3072.0MB, bw=439900KB/s, iops=6873 , runt=  7151msec
      write: io=1024.0MB, bw=146633KB/s, iops=2291 , runt=  7151msec
      cpu          : usr=5.96%, sys=26.80%, ctx=8238, majf=0, minf=4
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=49152/w=16384/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=3072.0MB, aggrb=439900KB/s, minb=439900KB/s, maxb=439900KB/s, mint=7151msec, maxt=7151msec
      WRITE: io=1024.0MB, aggrb=146633KB/s, minb=146633KB/s, maxb=146633KB/s, mint=7151msec, maxt=7151msec
    
    Disk stats (read/write):
      vda: ios=47411/15835, merge=0/21, ticks=210484/196036, in_queue=353540, util=97.26%
    

    NVMe 256k

    # ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 --bs=256k
    test: (g=0): rw=randrw, bs=256K-256K/256K-256K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    Jobs: 1 (f=1): [m] [100.0% done] [607.9M/199.4M /s] [2431 /797  iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=13552: Sat Jan  4 11:45:09 2020
      read : io=3088.8MB, bw=621391KB/s, iops=2427 , runt=  5090msec
      write: io=1007.3MB, bw=202637KB/s, iops=791 , runt=  5090msec
      cpu          : usr=5.13%, sys=16.37%, ctx=5847, majf=0, minf=5
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=12355/w=4029/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=3088.8MB, aggrb=621390KB/s, minb=621390KB/s, maxb=621390KB/s, mint=5090msec, maxt=5090msec
      WRITE: io=1007.3MB, aggrb=202637KB/s, minb=202637KB/s, maxb=202637KB/s, mint=5090msec, maxt=5090msec
    
    Disk stats (read/write):
      vda: ios=11840/3861, merge=0/19, ticks=88932/213268, in_queue=284320, util=96.70%
    

    Want SSD too?


  • SSD 64k

    # ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/vdb/test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 --bs=64k
    test: (g=0): rw=randrw, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    Jobs: 1 (f=1): [m] [100.0% done] [196.6M/64825K /s] [3143 /1012  iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=26492: Sat Jan  4 11:47:43 2020
      read : io=3073.9MB, bw=160994KB/s, iops=2515 , runt= 19551msec
      write: io=1022.2MB, bw=53538KB/s, iops=836 , runt= 19551msec
      cpu          : usr=2.37%, sys=11.92%, ctx=17632, majf=0, minf=4
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=49181/w=16355/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=3073.9MB, aggrb=160993KB/s, minb=160993KB/s, maxb=160993KB/s, mint=19551msec, maxt=19551msec
      WRITE: io=1022.2MB, aggrb=53537KB/s, minb=53537KB/s, maxb=53537KB/s, mint=19551msec, maxt=19551msec
    
    Disk stats (read/write):
      vdb: ios=48309/16334, merge=0/96, ticks=1098540/102644, in_queue=1036348, util=98.28%
    

    SSD 256k

    # ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/vdb/test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 --bs=256k
    test: (g=0): rw=randrw, bs=256K-256K/256K-256K, ioengine=libaio, iodepth=64
    fio-2.0.9
    Starting 1 process
    Jobs: 1 (f=1): [m] [100.0% done] [274.9M/99598K /s] [1099 /389  iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=31559: Sat Jan  4 11:48:34 2020
      read : io=3055.3MB, bw=252447KB/s, iops=986 , runt= 12393msec
      write: io=1040.8MB, bw=85994KB/s, iops=335 , runt= 12393msec
      cpu          : usr=1.61%, sys=5.60%, ctx=3587, majf=0, minf=5
      IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
         issued    : total=r=12221/w=4163/d=0, short=r=0/w=0/d=0
    
    Run status group 0 (all jobs):
       READ: io=3055.3MB, aggrb=252447KB/s, minb=252447KB/s, maxb=252447KB/s, mint=12393msec, maxt=12393msec
      WRITE: io=1040.8MB, aggrb=85994KB/s, minb=85994KB/s, maxb=85994KB/s, mint=12393msec, maxt=12393msec
    
    Disk stats (read/write):
      vdb: ios=12048/4184, merge=0/36, ticks=682804/85988, in_queue=684604, util=96.89%
    


  • vyas11 Member
    edited January 4

    @dahartigan said:

    Would you mind sharing them here too? I also wonder if we both just smashed the drives at the same time? lol. I only did the extra fio test on the SSD because I felt it would give a more complete picture of the situation.

    I have updated my above post with the results.

    Below is the test using the same parameters you used (64K). This is NVMe:

    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 --bs=64k
    test: (g=0): rw=randrw, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=libaio, iodepth=64

    fio-3.12
    Starting 1 process
    test: Laying out IO file (1 file / 4096MiB)
    ^Cbs: 1 (f=1): [m(1)][27.0%][r=3008KiB/s][r=47 IOPS][eta 13m:26s]
    fio: terminating on signal 2
    Jobs: 1 (f=1): [m(1)][27.0%][r=769KiB/s][r=12 IOPS][eta 13m:28s]
    test: (groupid=0, jobs=1): err= 0: pid=988: Fri Jan 3 20:52:28 2020
    read: IOPS=44, BW=2851KiB/s (2919kB/s)(832MiB/298721msec)
    bw ( KiB/s): min= 125, max=419712, per=100.00%, avg=5701.31, stdev=36947.02, samples=298
    iops : min= 1, max= 6558, avg=88.52, stdev=577.36, samples=298
    write: IOPS=14, BW=944KiB/s (967kB/s)(275MiB/298721msec); 0 zone resets
    bw ( KiB/s): min= 127, max=142818, per=100.00%, avg=7846.41, stdev=24391.01, samples=71
    iops : min= 1, max= 2231, avg=122.11, stdev=381.17, samples=71
    cpu : usr=0.17%, sys=0.29%, ctx=3719, majf=0, minf=8
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
    issued rwts: total=13307,4407,0,0 short=0,0,0,0 dropped=0,0,0,0
    latency : target=0, window=0, percentile=100.00%, depth=64

    Run status group 0 (all jobs):
    READ: bw=2851KiB/s (2919kB/s), 2851KiB/s-2851KiB/s (2919kB/s-2919kB/s), io=832MiB (872MB), run=298721-298721msec
    WRITE: bw=944KiB/s (967kB/s), 944KiB/s-944KiB/s (967kB/s-967kB/s), io=275MiB (289MB), run=298721-298721msec

    Disk stats (read/write):
    vda: ios=13415/4479, merge=0/57, ticks=10303152/8055552, in_queue=18390320, util=100.00%

    Also, I have posted my BM results (YABS, bench.sh, nench.sh, Speedtest) on underworldstartup. Will update it later.

  • The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.
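
    (For anyone lining these numbers up: IOPS × block size = throughput, so the 64k NVMe run above at ~6.9k read IOPS works out to ~430 MB/s, matching the reported bandwidth, while ~52k IOPS at 4k is only ~208 MB/s. It's the large-block runs that expose the throughput ceiling, which is why they're the telling comparison.)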


  • @poisson said:
    The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.

    Hmm, I wonder what could be causing that? For all of the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, apt updates/installs/etc all fly, but I don't try to hammer the disks, so perhaps for some workloads this wouldn't be suitable, but for what I use it for it's insanely perfect.


  • @dahartigan said:

    @poisson said:
    The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.

    Hmm, I wonder what could be causing that? For all of the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, apt updates/installs/etc all fly, but I don't try to hammer the disks, so perhaps for some workloads this wouldn't be suitable, but for what I use it for it's insanely perfect.

    I must qualify that benches are benches, but there are some situations where it may be a concern. For example, if you have a busy website that does a lot of reading and writing to a database per second, these numbers could impact your site's performance negatively. For personal use, you probably won't notice a thing.

    It could also be that the numbers reflect a full node that is being heavily used, and in that case, the numbers are probably acceptable. However, in such a case, I worry about the NVMe drive wearing out much more quickly than expected because there are obviously some people hammering the disks. I just hope people are not doing, ahem, Linux ISOs on the disks.


  • @poisson said:

    @dahartigan said:

    @poisson said:
    The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.

    Hmm, I wonder what could be causing that? For all of the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, apt updates/installs/etc all fly, but I don't try to hammer the disks, so perhaps for some workloads this wouldn't be suitable, but for what I use it for it's insanely perfect.

    I must qualify that benches are benches, but there are some situations where it may be a concern. For example, if you have a busy website that does a lot of reading and writing to a database per second, these numbers could impact your site's performance negatively. For personal use, you probably won't notice a thing.

    It could also be that the numbers reflect a full node that is being heavily used, and in that case, the numbers are probably acceptable. However, in such a case, I worry about the NVMe drive wearing out much more quickly than expected because there are obviously some people hammering the disks. I just hope people are not doing, ahem, Linux ISOs on the disks.

    I really hope nobody's crazy enough to be torrenting on these, it's almost certainly not going to end well if that's happening lol

    You're right though, when the drive was stress tested it buckled under pressure, but seems fine otherwise. I don't notice anything wrong as such, everything is fast and the problems mentioned by Daniel, specifically the apt updates taking forever, just don't exist for me. Maybe if I really hammer the drive I could find a problem, but if I'm hammering the drive, I feel like I'm the problem. Same goes for CPU, network, etc imo :-)

    It's kinda like trying to redline my car whenever I drive, something is bound to shake loose.


  • @dahartigan said:

    @poisson said:

    @dahartigan said:

    @poisson said:
    The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.

    Hmm, I wonder what could be causing that? For all of the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, apt updates/installs/etc all fly, but I don't try to hammer the disks, so perhaps for some workloads this wouldn't be suitable, but for what I use it for it's insanely perfect.

    I must qualify that benches are benches, but there are some situations where it may be a concern. For example, if you have a busy website that does a lot of reading and writing to a database per second, these numbers could impact your site's performance negatively. For personal use, you probably won't notice a thing.

    It could also be that the numbers reflect a full node that is being heavily used, and in that case, the numbers are probably acceptable. However, in such a case, I worry about the NVMe drive wearing out much more quickly than expected because there are obviously some people hammering the disks. I just hope people are not doing, ahem, Linux ISOs on the disks.

    I really hope nobody's crazy enough to be torrenting on these, it's almost certainly not going to end well if that's happening lol

    You're right though, when the drive was stress tested it buckled under pressure, but seems fine otherwise. I don't notice anything wrong as such, everything is fast and the problems mentioned by Daniel, specifically the apt updates taking forever, just don't exist for me. Maybe if I really hammer the drive I could find a problem, but if I'm hammering the drive, I feel like I'm the problem. Same goes for CPU, network, etc imo :-)

    It's kinda like trying to redline my car whenever I drive, something is bound to shake loose.

    Heh... I have repeatedly stress tested a couple of other servers and they were fine. Not to say that Flow is bad, because I think the disk performance is actually very good for most real-world cases, but there are better choices for NVMe performance if that is a mission-critical factor. Plus, there are a ton of other factors to consider, e.g. support, location, etc., which Flow passes with flying colours.


  • @poisson said:

    @dahartigan said:

    @poisson said:

    @dahartigan said:

    @poisson said:
    The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.

    Hmm, I wonder what could be causing that? For all of the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, apt updates/installs/etc all fly, but I don't try to hammer the disks, so perhaps for some workloads this wouldn't be suitable, but for what I use it for it's insanely perfect.

    I must qualify that benches are benches, but there are some situations where it may be a concern. For example, if you have a busy website that does a lot of reading and writing to a database per second, these numbers could impact your site's performance negatively. For personal use, you probably won't notice a thing.

    It could also be that the numbers reflect a full node that is being heavily used, and in that case, the numbers are probably acceptable. However, in such a case, I worry about the NVMe drive wearing out much more quickly than expected because there are obviously some people hammering the disks. I just hope people are not doing, ahem, Linux ISOs on the disks.

    I really hope nobody's crazy enough to be torrenting on these, it's almost certainly not going to end well if that's happening lol

    You're right though, when the drive was stress tested it buckled under pressure, but seems fine otherwise. I don't notice anything wrong as such, everything is fast and the problems mentioned by Daniel, specifically the apt updates taking forever, just don't exist for me. Maybe if I really hammer the drive I could find a problem, but if I'm hammering the drive, I feel like I'm the problem. Same goes for CPU, network, etc imo :-)

    It's kinda like trying to redline my car whenever I drive, something is bound to shake loose.

    Heh... I have repeatedly stress tested a couple of other servers and they were fine. Not to say that Flow is bad, because I think the disk performance is actually very good for most real-world cases, but there are better choices for NVMe performance if that is a mission-critical factor. Plus, there are a ton of other factors to consider, e.g. support, location, etc., which Flow passes with flying colours.

    Well put sir, you certainly are the man when it comes to the certainty of statistical probabilities :-) Your research methodology is something I would never feel qualified to question! :-)

    Channeling my inner @uptime..

    Maybe sir @trewq of qwert can share a few words on the NVMe situation; it's not impossible that he's aware of this, or that it's some known limitation (or unknown, if that's how the potassium leans) that could explain or at least contribute to the pondering of some potassium enthusiasts gathered in their place of worship. Margarine hat.


  • trewq Administrator, Moderator, Provider

    @dahartigan said:

    @poisson said:

    @dahartigan said:

    @poisson said:

    @dahartigan said:

    @poisson said:
    The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.

    Hmm, I wonder what could be causing that? For all of the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, apt updates/installs/etc all fly, but I don't try to hammer the disks, so perhaps for some workloads this wouldn't be suitable, but for what I use it for it's insanely perfect.

    I must qualify that benches are benches, but there are some situations where it may be a concern. For example, if you have a busy website that does a lot of reading and writing to a database per second, these numbers could impact your site's performance negatively. For personal use, you probably won't notice a thing.

    It could also be that the numbers reflect a full node that is being heavily used, and in that case, the numbers are probably acceptable. However, in such a case, I worry about the NVMe drive wearing out much more quickly than expected because there are obviously some people hammering the disks. I just hope people are not doing, ahem, Linux ISOs on the disks.

    I really hope nobody's crazy enough to be torrenting on these, it's almost certainly not going to end well if that's happening lol

    You're right though, when the drive was stress tested it buckled under pressure, but seems fine otherwise. I don't notice anything wrong as such, everything is fast and the problems mentioned by Daniel, specifically the apt updates taking forever, just don't exist for me. Maybe if I really hammer the drive I could find a problem, but if I'm hammering the drive, I feel like I'm the problem. Same goes for CPU, network, etc imo :-)

    It's kinda like trying to redline my car whenever I drive, something is bound to shake loose.

    Heh... I have repeatedly stress tested a couple of other servers and they were fine. Not to say that Flow is bad, because I think the disk performance is actually very good for most real-world cases, but there are better choices for NVMe performance if that is a mission-critical factor. Plus, there are a ton of other factors to consider, e.g. support, location, etc., which Flow passes with flying colours.

    Well put sir, you certainly are the man when it comes to the certainty of statistical probabilities :-) Your research methodology is something I would never feel qualified to question! :-)

    Channeling my inner @uptime..

    Maybe sir @trewq of qwert can share a few words on the NVMe situation; it's not impossible that he's aware of this, or that it's some known limitation (or unknown, if that's how the potassium leans) that could explain or at least contribute to the pondering of some potassium enthusiasts gathered in their place of worship. Margarine hat.

    I did extensive testing with the NVMe drives before putting them in production and compared them to other providers in the space. Unfortunately I couldn't match the results from other providers while still keeping a level of redundancy I am satisfied with, but I think the current levels are more than acceptable for a production system.

    The issue @Daniel15 is talking about was due to pushing the E3 nodes too hard, and it took me longer than I'd like to admit to find it. As is sometimes the case, bugs that don't show up in testing do in production. It's mostly sorted now, and we're moving to higher-density nodes so issues like this won't happen again.

    Thank you everyone who's enjoying this offer, means a lot!

  • @trewq said:

    @dahartigan said:

    @poisson said:

    @dahartigan said:

    @poisson said:

    @dahartigan said:

    @poisson said:
    The NVMe unfortunately shows issues at 64k and 256k block sizes. The iops are dropping way too fast. Other solid providers I see have their NVMe iops maintained around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.

    The SSD is normal, on par with a top grade consumer SSD like my own Samsung Evo 850.

    Hmm, I wonder what could be causing that? For all of the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, apt updates/installs/etc all fly, but I don't try to hammer the disks, so perhaps for some workloads this wouldn't be suitable, but for what I use it for it's insanely perfect.

    I must qualify that benches are benches, but there are some situations where it may be a concern. For example, if you have a busy website that does a lot of reading and writing to a database per second, these numbers could impact your site's performance negatively. For personal use, you probably won't notice a thing.

    It could also be that the numbers reflect a full node that is being heavily used, and in that case, the numbers are probably acceptable. However, in such a case, I worry about the NVMe drive wearing out much more quickly than expected because there are obviously some people hammering the disks. I just hope people are not doing, ahem, Linux ISOs on the disks.

    I really hope nobody's crazy enough to be torrenting on these, it's almost certainly not going to end well if that's happening lol

    You're right though, when the drive was stress tested it buckled under pressure, but seems fine otherwise. I don't notice anything wrong as such, everything is fast and the problems mentioned by Daniel, specifically the apt updates taking forever, just don't exist for me. Maybe if I really hammer the drive I could find a problem, but if I'm hammering the drive, I feel like I'm the problem. Same goes for CPU, network, etc imo :-)

    It's kinda like trying to redline my car whenever I drive, something is bound to shake loose.

    Heh.. I have stress tested repeatedly a couple of other servers and they were fine. Not to say that Flow is bad because I think the disk performance is actually very good for most real world cases, but there are better choices for NVMe performance if that is a mission critical factor. Plus, there are a ton of other factors to consider, e.g. support, location etc, which Flow passes with flying colours.

    Well put sir, you certainly are the man when it comes to the certainty of statistical probabilities :-) Your research methodology is something I would never feel qualified to question! :-)

    Channeling my inner @uptime..

    Maybe sir @trewq of qwert can share a few words on the nvme situation, it's not impossible that he's aware of this or is some known limitation (or unknown if that's how the potassium leans) that could explain or at least contribute to the pondering of some potassium enthusiasts gathered in their place of worship. Margarine hat.

    I did extensive testing with the NVMe drives before putting them in production and compared them to other providers in the space. Unfortunately I couldn't match the results from other providers while still keeping a level of redundancy I am satisfied with, but I think the current levels are more than acceptable for a production system.

    The issue @Daniel15 is talking about was due to pushing the E3 nodes too hard, and it took me longer than I'd like to admit to find it. As is sometimes the case, bugs that don't show up in testing do show up in production. It's mostly sorted now, and we're moving to higher density nodes so issues like this won't happen again.

    Thank you to everyone who's enjoying this offer, it means a lot!

    Thanks for the explanation and thanks again for the offer, it's purring along beautifully for me and it's far from idling lol :-)



  • Daniel15 Member

    poisson said: For example, if you have a busy website that does lots of reading and writing to a database per second.

    In my case I have a site that can spike to a few hundred requests per second on peak days of the year. The backend handles it fine (I've load tested to ~4k RPS on one server), but DB reads (in cases where the data isn't cached yet) and writing access/error logs to disk can slow things down if disk I/O is slow. NVMe drives help a lot there :)

    Anyways, it sounds like it's not an issue any more (thanks for confirming @trewq), which is great! I might try FlowVPS again once my current Australian VPS plan expires.
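
    For anyone wanting to run a similar load test, a tool like wrk does the job. A minimal sketch; the URL, thread count, and connection count are placeholders, not Daniel15's actual setup:

        # 4 threads, 100 open connections, 30-second run.
        wrk -t4 -c100 -d30s https://example.com/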

  • Just noticed us lucky preorderers got a bonus month, as per the email.

    You beauty.

  • Jord Moderator, Provider

    That's because @trewq is prem.



  • @sonic said:

    @trewq is sooooo friendly, his support is top notch! I submitted a ticket to ask for his recommendation and set it to low priority because I know it's the New Year holiday and everyone is busy atm. BUT, I got my ticket solved in a few minutes!

    VPS is running smoothly, network is prem, support is top notch!

    I will set up a Plex server on this VPS (never used Plex before). @dahartigan do you have any idea how to use a OneDrive 1TB account with Plex, since I don't have unlimited Gdrive? :D

    @dahartigan said:

    If rclone supports OneDrive then it should work :-) It basically mounts cloud storage as a directory, for example /mnt/onedrive, and then you point Plex to that "folder" (there's a sketch of this below).

    100% agreed with everyone about the support, he really does work hard to deliver great customer service with a personal touch.

    @sonic said:

    It looks like it's supported: https://rclone.org/onedrive/
    Do you have a guide to set up and optimize a Plex server, or is it just apt install and mount?

    @dahartigan said:

    My advice would be to install rclone first, then set up OneDrive with it. Once you're comfortable with that, install Plex, and after that combine them. Baby steps :-) (See the install sketch below.)

    I don't have a guide ready to go, but it's something I will write up and post here when I get the chance.

    As for optimizing Plex, the biggest impact comes from transcoding: if your clients can direct play, you can in most cases force it. If transcoding is needed (which is normal), then limiting the source files to lower bitrates/qualities can help; after that it's fine tuning.

    If you can get rclone set up and Plex installed, the rest will come as needed. I honestly hope that helps you :-)

    Would love the guide :smile:
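
    For reference until that guide materialises, here is a minimal rclone + OneDrive sketch. The remote name "onedrive" and the mount point are illustrative assumptions, not anyone's actual setup:

        # One-time interactive setup: create a remote named "onedrive".
        rclone config

        # Mount the remote so Plex can browse it like a local folder.
        mkdir -p /mnt/onedrive
        rclone mount onedrive: /mnt/onedrive --daemon --vfs-cache-mode writes

    --vfs-cache-mode writes buffers uploads on local disk, which tends to behave better with Plex than a raw mount; once it's up, point a Plex library at /mnt/onedrive.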
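
    And a sketch of the "install Plex" step on Debian/Ubuntu using Plex's official package repository; commands from memory, so treat them as a starting point rather than gospel:

        # Add Plex's signing key and repo, then install the server.
        curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add -
        echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list
        sudo apt update && sudo apt install plexmediaserver

    The web UI then lives at http://your-server-ip:32400/web, which is where you add the rclone mount as a library.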


  • @sagarvai said:

    Would love the guide :smile:

    Which OS are you using, and are you using Google Drive?


  • @dahartigan said:

    Which OS are you using, and are you using Google Drive?

    Ubuntu, and yes, I'm using Google Drive.


  • @sagarvai said:

    Ubuntu, and yes, I'm using Google Drive.

    Awesome. Have you got Plex installed yet?


  • @dahartigan said:

    Awesome. Have you got Plex installed yet?

    Yes I have, do you maybe want to continue this in PM?


  • @sagarvai said:

    Yes I have, do you maybe want to continue this in PM?

    Install rclone (https://rclone.org/drive/) and connect it to your Google Drive, then add that mount as a library in Plex; there's a sketch below.

    PM me if you prefer.
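
    Since you're on Google Drive rather than OneDrive, here is the Drive variant of the earlier sketch; the remote name "gdrive" and the flags are illustrative choices for streaming, not a blessed config:

        # One-time setup: create a remote named "gdrive" (rclone walks you through the OAuth flow).
        rclone config

        # Read-only, streaming-friendly mount for Plex.
        mkdir -p /mnt/gdrive
        rclone mount gdrive: /mnt/gdrive --daemon --read-only \
            --vfs-cache-mode minimal --buffer-size 64M

    Then add /mnt/gdrive as a library folder in Plex.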

