VirMach Ryzen Alpha/Beta Test Results and Comments

FrankZ Veteran
edited October 2021 in Providers

Please post your VirMach Ryzen Alpha and Beta test results and comments here instead of in VirMach offer threads.


Comments

  • @VirMach said: This might actually be part of the reason the networking tends to be poor when it's going from Phoenix to somewhere further away.

    DNS request times from various locations to the server in PHX.

    Location              Max    Min     Ave   Current  Up Time
    Sydney, AU           0.164  0.153   0.154   0.154   100.00%
    San Jose, CA         0.029  0.017   0.019   0.017   100.00%
    Los Angeles, CA      0.020  0.010   0.012   0.012   100.00% 
    Dallas, TX           0.038  0.031   0.031   0.031   100.00%
    Houston, TX          0.027  0.026   0.026   0.027   100.00%     
    Atlanta, GA          0.103  0.042   0.045   0.043   100.00%
    Chicago, IL          0.050  0.048   0.048   0.049   100.00%
    Secaucus, NJ         0.057  0.055   0.056   0.056   100.00% 
    Norway               0.162  0.151   0.156   0.156   100.00%
    Dronten, NL          0.153  0.143   0.146   0.148   100.00%     
    Milan, IT            0.163  0.145   0.155   0.146   100.00% 
    Madrid, SP           0.135  0.124   0.129   0.132   100.00%
    

    For comparison

    DNS request times from various locations to my best LAX server

    Location              Max    Min     Ave   Current  Up Time
    Sydney, AU          0.153   0.145   0.145   0.145   100.00% 
    San Jose, CA        0.015   0.008   0.009   0.009   100.00% 
    Los Angeles, CA     0.002   0.001   0.001   0.001   100.00% 
    Dallas, TX          0.045   0.035   0.036   0.037   100.00%
    Houston, TX         0.043   0.035   0.037   0.038   100.00%
    Atlanta, GA         0.054   0.046   0.047   0.048   100.00%
    Chicago, IL         0.048   0.039   0.042   0.042   100.00%
    Secaucus, NJ        0.080   0.067   0.070   0.071   100.00%
    Norway              0.170   0.157   0.161   0.158   100.00%
    Dronten, NL         0.141   0.133   0.134   0.133   100.00%
    Milan, IT           0.160   0.145   0.150   0.153   100.00%
    Madrid, SP          0.316   0.139   0.151   0.149   100.00%
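
    For anyone wanting to collect similar numbers, a rough sketch of the idea (dig assumed available; SERVER_IP and example.com are placeholders for the tested server and a record it answers for):

    while true; do
        t=$(dig @SERVER_IP example.com +tries=1 +time=5 | awk '/Query time/ {print $4}')
        echo "$(date -u +%FT%TZ) ${t:-timeout} ms"    # timestamped lookup time in ms
        sleep 300
    done >> dns-times.log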
    
  • FAT32 Administrator, Deal Compiler Extraordinaire

    YABS!

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2021-06-05                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Wed Sep 29 05:11:39 EDT 2021
    
    Basic System Information:
    ---------------------------------
    Processor  : AMD Ryzen 9 3900X 12-Core Processor
    CPU cores  : 3 @ 3799.998 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 3.9 GiB
    Swap       : 256.0 MiB
    Disk       : 98.2 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 348.19 MB/s  (87.0k) | 581.49 MB/s   (9.0k)
    Write      | 349.11 MB/s  (87.2k) | 584.55 MB/s   (9.1k)
    Total      | 697.30 MB/s (174.3k) | 1.16 GB/s    (18.2k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 688.30 MB/s   (1.3k) | 722.00 MB/s    (705)
    Write      | 724.87 MB/s   (1.4k) | 770.09 MB/s    (752)
    Total      | 1.41 GB/s     (2.7k) | 1.49 GB/s     (1.4k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | 774 Mbits/sec   | 173 Mbits/sec
    Online.net      | Paris, FR (10G)           | 832 Mbits/sec   | 193 Mbits/sec
    WorldStream     | The Netherlands (10G)     | 782 Mbits/sec   | 174 Mbits/sec
    Biznet          | Jakarta, Indonesia (1G)   | busy            | busy
    Clouvider       | NYC, NY, US (10G)         | 877 Mbits/sec   | 525 Mbits/sec
    Velocity Online | Tallahassee, FL, US (10G) | 898 Mbits/sec   | 582 Mbits/sec
    Clouvider       | Los Angeles, CA, US (10G) | 940 Mbits/sec   | 941 Mbits/sec
    Iveloz Telecom  | Sao Paulo, BR (2G)        | 733 Mbits/sec   | 193 Mbits/sec
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value
                    |
    Single Core     | 1368
    Multi Core      | 3643
    Full Test       | https://browser.geekbench.com/v5/cpu/10136897
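
    For anyone who wants to run the same suite: YABS is a single script, invoked per its GitHub README as

    curl -sL yabs.sh | bash

    (the README also documents flags for skipping individual tests).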
    
  • This is my CPU & RAM benchmark scoring program, run on VirMach and on two other providers using the same processor and a single core.
    The memory access speed at VirMach really stands out. (A rough sysbench stand-in is sketched after the numbers.)

    VirMach Ryzen 9 3900X
    CPU: 297516
    RAM: 688280
    -----------------------------------

    Other provider A Ryzen 9 3900X
    CPU: 241249
    RAM: 237997
    -----------------------------------

    Other provider B Ryzen 9 3900X
    CPU: 233819
    RAM: 228040
    -----------------------------------
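
    Since the scoring program above isn't shown, here is a rough public stand-in that yields comparable single-core CPU and memory-throughput numbers (a sketch; sysbench assumed installed, and its scores are on a different scale):

    sysbench cpu --threads=1 --time=10 run                     # single-core CPU events/sec
    sysbench memory --threads=1 --memory-total-size=10G run    # RAM read/write throughput (MiB/sec)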

    Thanked by 1FAT32
  • FAT32 Administrator, Deal Compiler Extraordinaire
    edited October 2021

    vpsbench 2.2.0*

    This tool is made by our Resident Benchmarker. Credits to jsg

    Note: Version 2.2.0 predates the I/O improvements and is potentially outdated, but 2.2.0 is the only version I have access to.

    Machine: amd64, Arch.: x86_64, Model: AMD Ryzen 9 3900X 12-Core Processor
    OS, version: Linux 3.16.0, Mem.: 3.893 GB
    CPU - Cores: 3, Family/Model/Stepping: 23/113/0
    Cache: 64K/64K L1d/L1i, 512K L2, 16M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh mmx fxsr sse sse2 sse3 pclmulqdq ssse3 fma cx16 sse4_1
              sse4_2 popcnt aes xsave osxsave avx f16c rdrnd hypervisor
    Ext. Flags: syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm lahf_lm cmp_legacy
              cr8_legacy lzcnt sse4a misalignsse 3dnowprefetch osvw perfctr_core
    
    [T] 2021-10-01T11:39:43Z
    ----- Processor and Memory -----
    [PM-SC] 471.02 MB/s (testSize: 2.0 GB)
    [PM-MA] 922.94 MB/s (testSize: 2.0 GB)
    [PM-MB] 1016.04 MB/s    (testSize: 6.0 GB)
    ----- Disk -----
    [D] Total size per test = 2048.00 MB, Mode: Buf'd
    [D] Wr Seq:  2508.21 MB/s
    [D] Wr Rnd:  8170.66 MB/s
    [D] Rd Seq:  9796.84 MB/s
    [D] Rd Rnd: 11089.45 MB/s
    [D] Total size per test = 2048.00 MB, Mode: Sync
    [D] Wr Seq:   800.26 MB/s
    [D] Wr Rnd:  1112.96 MB/s
    [D] Rd Seq:  9741.20 MB/s
    [D] Rd Rnd: 10984.18 MB/s
    ----- Network -----
    [N] speedtest.lon02.softlayer.com   UK LON:, P: 139.4 ms WP: 139.4 ms, DL:   90.60 Mb/s
    [N] mirror.nl.leaseweb.net          NL AMS:, P: 135.0 ms WP: 135.0 ms, DL:   87.17 Mb/s
    [N] speedtest.c1.mel1.dediserve.com AU MEL:, P: 134.7 ms WP: 180.3 ms, DL:   62.75 Mb/s
    [N] speedtest.che01.softlayer.com   IN CHN:, P: 168.0 ms WP: 253.4 ms, DL:   47.29 Mb/s
    [N] mirror.sg.leaseweb.net          SG SGP:, P: 223.7 ms WP: 223.7 ms, DL:   49.17 Mb/s
    [N] fra.lg.core-backbone.com        DE FRA:, P:  88.9 ms WP: 149.5 ms, DL:   77.89 Mb/s
    [N] speedtest.mil01.softlayer.com   IT MIL:, P: 172.8 ms WP: 173.3 ms, DL:   62.31 Mb/s
    [N] speedtest.par01.softlayer.com   FR PAR:, P: 160.5 ms WP: 160.8 ms, DL:   73.39 Mb/s
    [N] speedtest.hostkey.ru            RU MOS:, P: 182.6 ms WP: 182.6 ms, DL:   63.97 Mb/s
    [N] speedtest.sao01.softlayer.com   BR SAO:, P: 111.7 ms WP: 167.5 ms, DL:   69.46 Mb/s
    [N] speedtest.dal05.softlayer.com   US DAL:, P:  32.9 ms WP:  32.9 ms, DL:  342.60 Mb/s
    [N] speedtest.sjc01.softlayer.com   US SJC:, P:  18.9 ms WP:  19.7 ms, DL:  490.04 Mb/s
    [N] lax.download.datapacket.com     US LAX:, P:  10.9 ms WP:  11.8 ms, DL:  822.75 Mb/s
    [N] mirror.wdc1.us.leaseweb.net     US WDC:, P:  61.4 ms WP:  61.6 ms, DL:  180.58 Mb/s
    [N] nyc.download.datapacket.com     US NYC:, P:  56.0 ms WP:  56.2 ms, DL:  342.96 Mb/s
    [N] speedtest.tokyo2.linode.com     JP TOK:, P: 115.2 ms WP: 115.4 ms, DL:  105.44 Mb/s
    [N] 185.183.99.8                    RO BUC:, P: 190.2 ms WP: 190.5 ms, DL:   97.54 Mb/s
    [N] speedtest.ftp.otenet.gr         GR UNK:, P:   0.0 ms WP:   0.0 ms, DL:    0.00 Mb/s
    [N] 185.65.204.169                  TR UNK:, P: 188.3 ms WP: 188.3 ms, DL:   74.62 Mb/s
    [N] speedtest.osl01.softlayer.com   NO OSL:, P: 155.3 ms WP: 155.3 ms, DL:   69.70 Mb/s
    [N] mirror.hk.leaseweb.net          CN HK:, P: 164.8 ms WP: 165.2 ms, DL:   70.55 Mb/s
    
    Thanked by 1FrankZ
  • @FAT32 said:
    ----- Disk -----
    [D] Total size per test = 2048.00 MB, Mode: Buf'd
    [D] Wr Seq: 2508.21 MB/s
    [D] Wr Rnd: 8170.66 MB/s
    [D] Rd Seq: 9796.84 MB/s
    [D] Rd Rnd: 11089.45 MB/s
    [D] Total size per test = 2048.00 MB, Mode: Sync
    [D] Wr Seq: 800.26 MB/s
    [D] Wr Rnd: 1112.96 MB/s
    [D] Rd Seq: 9741.20 MB/s
    [D] Rd Rnd: 10984.18 MB/s

    It reported 11GB/s, while current-generation hardware (NVMe PCIe Gen4 x4 drives) has a PHYSICAL limit of 8GB/s.

    It has been repeatedly shown on this forum that this pseudo "benchmark" is FUNDAMENTALLY BROKEN,

    and the last provider who worked closely with this "benchmarker" soon had their whole storage system wrecked and went out of business because of it.

    Coincidence? Maybe not :D

    So what is the purpose of using this crap?
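
    For what it's worth, buffered ("Buf'd") numbers above the interface limit are consistent with reads being served from RAM by the Linux page cache rather than from the disk. A quick sketch to see that effect yourself (assumes ~2GB of free disk space and RAM):

    dd if=/dev/zero of=testfile bs=1M count=2048       # create a 2GB scratch file (now in page cache)
    dd if=testfile of=/dev/null bs=1M                  # cached read: can "beat" the bus
    dd if=testfile of=/dev/null bs=1M iflag=direct     # O_DIRECT read: bypasses cache, real device speed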

  • FAT32 Administrator, Deal Compiler Extraordinaire

    @Andrews said:
    So what is the purpose of using this crap?

    Just want to try it out, as different people might have different ways to measure performance.

  • Even if vpsbench weren't complete shit and proven incorrect, the dethroned King has taken his own reputation, and the script's, down with him as a joke...

  • Feedback for node: ...Z002

    To date, VNC never worked or connected (neither from the client area nor from Solus).

    First I tested the client area templates, from the bottom upwards, until I got one to work:

    (reboot, reconfigure network, shutdown, turn on, etc. had no effect)

    1. FAILED - Linux Ubuntu 18.04 Server X86 64 Custom Desktop Gen2 V1
      ping = YES ; VNC=NO ; SSH = NO

    2. FAILED - Linux Ubuntu 16.04 Server X86 64 Minimal Latest
      ping = NO ; VNC=NO ; SSH=NO

    3. FAILED - Linux Ubuntu 16.04 Server X86 64 Custom Desktop Gen2 V1
      ping = YES ; VNC=NO ; SSH=NO

    4. SUCCESS - Linux Ubuntu 14.04 Server X86 64 Min Gen2 V1
      ping = YES ; VNC=NO ; SSH=YES

    Updated and upgraded Ubuntu 14.04 up to Ubuntu 20.04.3 LTS

    The first YABS results:

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2021-06-05                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Fri 01 Oct 2021 03:55:59 PM BST
    
    Basic System Information:
    ---------------------------------
    Processor  : AMD Ryzen 9 3900X 12-Core Processor
    CPU cores  : 3 @ 3799.998 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 3.8 GiB
    Swap       : 256.0 MiB
    Disk       : 98.2 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 347.96 MB/s  (86.9k) | 590.86 MB/s   (9.2k)
    Write      | 348.88 MB/s  (87.2k) | 593.97 MB/s   (9.2k)
    Total      | 696.84 MB/s (174.2k) | 1.18 GB/s    (18.5k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 687.37 MB/s   (1.3k) | 732.16 MB/s    (715)
    Write      | 723.89 MB/s   (1.4k) | 780.92 MB/s    (762)
    Total      | 1.41 GB/s     (2.7k) | 1.51 GB/s     (1.4k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | 736 Mbits/sec   | 208 Mbits/sec
    Online.net      | Paris, FR (10G)           | busy            | 9.74 Mbits/sec
    WorldStream     | The Netherlands (10G)     | 716 Mbits/sec   | 208 Mbits/sec
    Biznet          | Jakarta, Indonesia (1G)   | busy            | 86.6 Mbits/sec
    Clouvider       | NYC, NY, US (10G)         | 804 Mbits/sec   | 485 Mbits/sec
    Velocity Online | Tallahassee, FL, US (10G) | 829 Mbits/sec   | 656 Mbits/sec
    Clouvider       | Los Angeles, CA, US (10G) | 938 Mbits/sec   | 934 Mbits/sec
    Iveloz Telecom  | Sao Paulo, BR (2G)        | 401 Mbits/sec   | 214 Mbits/sec
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value
                    |
    Single Core     | 1340
    Multi Core      | 3737
    Full Test       | https://browser.geekbench.com/v5/cpu/10180228
    
    Thanked by 1FrankZ
  • Speedtest monster results for Phoenix ...Z002

    ---------------------------------------------------------------------------
     OS           : Ubuntu 20.04.3 LTS (64 Bit)
     Virt/Kernel  : KVM / 5.4.0-88-generic
     CPU Model    : AMD Ryzen 9 3900X 12-Core Processor
     CPU Cores    : 3 @ 3799.998 MHz x86_64 512 KB Cache
     CPU Flags    : AES-NI Enabled & VM-x/AMD-V Disabled
     Load Average : 0.01, 0.15, 0.12
     Total Space  : 99G (2.6G ~3% used)
     Total RAM    : 3935 MB (98 MB + 425 MB Buff in use)
     Total SWAP   : 255 MB (0 MB in use)
     Uptime       : 0 days 1:11
    ---------------------------------------------------------------------------
     ASN & ISP    : AS35913, Internap Holding LLC
     Organization :
     Location     : New York, United States / US
     Region       : New York
    ---------------------------------------------------------------------------
    
     ## Geekbench v4 CPU Benchmark:
    
      Single Core : 5988  (EXCELLENT)
       Multi Core : 14960
    
     ## IO Test
    
     CPU Speed:
        bzip2     : 160 MB/s
       sha256     : 295 MB/s
       md5sum     : 698 MB/s
    
     RAM Speed:
       Avg. write : 3549.9 MB/s
       Avg. read  : 11468.8 MB/s
    
     Disk Speed:
       1st run    : 1.2 GB/s
       2nd run    : 1.2 GB/s
       3rd run    : 1.4 GB/s
       -----------------------
       Average    : 1297.1 MB/s
    
     ## Global Speedtest.net
    
     Location                       Upload           Download         Ping
    ---------------------------------------------------------------------------
     Nearby                         349.29 Mbit/s    330.54 Mbit/s   * 53.481 ms
    ---------------------------------------------------------------------------
     USA, New York (Optimum)        277.55 Mbit/s    136.66 Mbit/s    64.137 ms
     USA, Chicago (Windstream)      276.24 Mbit/s    305.17 Mbit/s    49.775 ms
     USA, Dallas (Frontier)         438.37 Mbit/s    517.06 Mbit/s    33.497 ms
     USA, Miami (Sprint)            305.84 Mbit/s    309.95 Mbit/s    57.230 ms
     USA, Los Angeles (Windstream)  372.12 Mbit/s    357.29 Mbit/s    42.994 ms
     UK, London (toob Ltd)          59.84 Mbit/s     121.42 Mbit/s   ping error!
     France, Lyon (SFR)             113.47 Mbit/s    138.94 Mbit/s   135.806 ms
     Germany, Berlin (DNS:NET)      137.97 Mbit/s    150.62 Mbit/s   146.045 ms
     Spain, Madrid (MasMovil)       149.39 Mbit/s    179.65 Mbit/s   134.435 ms
     Italy, Rome (Unidata)          115.95 Mbit/s    81.47 Mbit/s    171.881 ms
     Russia, Moscow (Rostelecom)    114.07 Mbit/s    110.40 Mbit/s   170.579 ms
     Israel, Haifa (013Netvision)   26.21 Mbit/s     98.32 Mbit/s    203.614 ms
     India, New Delhi (Weebo)       0.31 Mbit/s      27.23 Mbit/s    274.231 ms
     Singapore (FirstMedia)         65.40 Mbit/s     37.99 Mbit/s    179.308 ms
     Japan, Tsukuba (SoftEther)     28.81 Mbit/s     156.16 Mbit/s   127.363 ms
     Australia, Sydney (Optus)      119.20 Mbit/s    152.17 Mbit/s   173.839 ms
     RSA, Randburg (Cool Ideas)     20.30 Mbit/s     13.55 Mbit/s    298.367 ms
     Brazil, Sao Paulo (Criare)     39.13 Mbit/s     131.07 Mbit/s   179.223 ms
    ---------------------------------------------------------------------------
    
     Finished in : 10 min 16 sec
     Timestamp   : 2021-10-01 16:09:03 GMT
     Saved in    : /root/speedtest.log
    
     Share results:
     - https://www.speedtest.net/result/12120810624.png
     - https://browser.geekbench.com/v4/cpu/16365881
     - https://clbin.com/2otTU
    
    Thanked by 1FrankZ
  • FAT32 Administrator, Deal Compiler Extraordinaire

    @ZA_capetown said:
    Feedback for node: ...Z002

    To date, VNC never worked or connected (neither from the client area nor from Solus).

    First I tested the client area templates, from the bottom upwards, until I got one to work:

    (reboot, reconfigure network, shutdown, turn on, etc. had no effect)

    That's weird... Ubuntu 18 Desktop works for me without issues; I am on the Z002 node as well.

    Debian 8 works as well, but Debian 9 is showing some kernel panic / errors and I can't boot it up.

  • JabJab Member
    edited October 2021

    @FAT32 said: Debian 8 works as well, but Debian 9 is showing some kernel panic / errors and I can't boot it up.

    That's weird... Debian 9 works for me without issues; I am on the Z002 node as well.
    I even upgraded it to Debian 10, and it's hammering the CPU, and the disk a little, again.

    Totally not a copy-paste of your sentence, but Debian 9 64-bit Minimal works ALMOST* like a charm here. Reinstalled (more like first-installed) from the control panel rather than WHMCS, because fuck that piece of software. I suspect many of those "this is working, this is not working" reports are somehow related to issuing more than one reinstall, and the thing goes nuts: it shows X, still updates/inits Y, and kernel panics in the meantime.

      • Okay, the noVNC console says it's connected, but it isn't rendering anything. The server is up, SSH works, the CPU is working :D I would assume the problem is that the websocket is not connecting and hates me? https://i.imgur.com/rc0tfnO.png
        Uhmmm,
    $ ping ryze.phx-z002.vms.[virmach_strange_domain]
    PING pixie.porkbun.com (44.227.65.245) 56(84) bytes of data.
    

    Yeah, I end up on the porkbun landing page... is this somehow DNS shit, or do I have a different configuration than the rest? Do you have the same domain in the websocket request? :D
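
    A quick way to check where that name actually points (a sketch; the placeholder domain kept as-is):

    $ dig +short ryze.phx-z002.vms.[virmach_strange_domain]
    $ dig +trace ryze.phx-z002.vms.[virmach_strange_domain]   # walk the delegation to see which zone answers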

      • Hmmm, VNC Viewer connected and it's showing stuff; let me see what noVNC does and whether it logs anything in the browser console.
    Thanked by 1FAT32
  • @Andrews said:

    @FAT32 said:
    ----- Disk -----
    [D] Total size per test = 2048.00 MB, Mode: Buf'd
    [D] Wr Seq: 2508.21 MB/s
    [D] Wr Rnd: 8170.66 MB/s
    [D] Rd Seq: 9796.84 MB/s
    [D] Rd Rnd: 11089.45 MB/s
    [D] Total size per test = 2048.00 MB, Mode: Sync
    [D] Wr Seq: 800.26 MB/s
    [D] Wr Rnd: 1112.96 MB/s
    [D] Rd Seq: 9741.20 MB/s
    [D] Rd Rnd: 10984.18 MB/s

    It reported 11GB/s, while current-generation hardware (NVMe PCIe Gen4 x4 drives) has a PHYSICAL limit of 8GB/s.

    It has been repeatedly shown on this forum that this pseudo "benchmark" is FUNDAMENTALLY BROKEN,

    and the last provider who worked closely with this "benchmarker" soon had their whole storage system wrecked and went out of business because of it.

    Coincidence? Maybe not :D

    So what is the purpose of using this crap?

    To me, this looks closer to the actual performance of the VPS:

    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 348.19 MB/s  (87.0k) | 581.49 MB/s   (9.0k)
    Write      | 349.11 MB/s  (87.2k) | 584.55 MB/s   (9.1k)
    Total      | 697.30 MB/s (174.3k) | 1.16 GB/s    (18.2k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 688.30 MB/s   (1.3k) | 722.00 MB/s    (705)
    Write      | 724.87 MB/s   (1.4k) | 770.09 MB/s    (752)
    Total      | 1.41 GB/s     (2.7k) | 1.49 GB/s     (1.4k)
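
    For reference, the 4k mixed line above can be approximated with fio directly (a sketch of an equivalent job, not YABS's exact invocation):

    fio --name=rw4k --ioengine=libaio --direct=1 --rw=randrw --rwmixread=50 \
        --bs=4k --iodepth=64 --numjobs=2 --size=2G --runtime=30 --time_based \
        --group_reporting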
    
  • TimboJones Member
    edited October 2021

    @Andrews said:

    @FAT32 said:
    ----- Disk -----
    [D] Total size per test = 2048.00 MB, Mode: Buf'd
    [D] Wr Seq: 2508.21 MB/s
    [D] Wr Rnd: 8170.66 MB/s
    [D] Rd Seq: 9796.84 MB/s
    [D] Rd Rnd: 11089.45 MB/s
    [D] Total size per test = 2048.00 MB, Mode: Sync
    [D] Wr Seq: 800.26 MB/s
    [D] Wr Rnd: 1112.96 MB/s
    [D] Rd Seq: 9741.20 MB/s
    [D] Rd Rnd: 10984.18 MB/s

    It reported 11GB/s, while current-generation hardware (NVMe PCIe Gen4 x4 drives) has a PHYSICAL limit of 8GB/s.

    Keep in mind that these drives are in RAID and performance will be some multiple of single-disk performance.

    For example, a Threadripper motherboard with bifurcation cards can do onboard RAID with something like 11+ NVMe drives. Those have PCIe lanes direct to the CPU, so they're not limited to one x4 link for all drives. I might do that with all the 256GB NVMe drives I have lying around.

    VirMach said previously they have a hardware RAID card (unless I misunderstood), and those are probably PCIe x8 cards.

    So testing on bare metal with single drives is how to conclusively call shenanigans.

  • FrankZ Veteran
    edited October 2021

    Graphed DNS lookup times over 24 hours from various locations to VirMach Phoenix.
    (IMHO better than ping/mtr for determining network reliability.)



    Thanked by 1FAT32
  • FAT32 Administrator, Deal Compiler Extraordinaire

    I think we should try to add more load to the server... currently it is hard to tell whether the improvement is because the node is empty or because it is well configured.

    Thanked by 1FrankZ
  • FrankZ Veteran
    edited October 2021

    @FAT32 said: I think we should try to add more load to the server... currently it is hard to tell whether the improvement is because the node is empty or because it is well configured.

    Currently running 2 cores at 100% and graphing 4.5 MB HTTPS image loads from 12 locations over the next 24 hours. Will post that tomorrow.

    I agree that there is not enough load, as I am getting 0% steal and 0 to 0.1% wait state with a load average a little over 2.
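
    (For anyone reproducing the load, a sketch, assuming stress-ng is installed:)

    stress-ng --cpu 2 --timeout 24h    # pin two cores at 100% for 24 hours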

    Thanked by 1FAT32
  • @TimboJones said:
    Keep in mind that these drives are in RAID and performance will be some multiple of single-disk performance.

    For example, a Threadripper motherboard with bifurcation cards can do onboard RAID with something like 11+ NVMe drives. Those have PCIe lanes direct to the CPU, so they're not limited to one x4 link for all drives. I might do that with all the 256GB NVMe drives I have lying around.

    VirMach said previously they have a hardware RAID card (unless I misunderstood), and those are probably PCIe x8 cards.

    So testing on bare metal with single drives is how to conclusively call shenanigans.

    1.
    These were results for a specific machine with a specific CPU: the desktop-class AMD Ryzen 9 3900X,
    not a hypothetical Threadripper.

    2.
    This CPU has 24 PCIe 4.0 lanes.
    24, not the 64/128 that Threadrippers have.

    3.
    We are on LET, and these are LOW END shared VPSes.

    So summing up, there is no need nor reason for wet fantasies that VirMach could theoretically have a Threadripper, put 11+ drives in it (consuming 44+ PCIe lanes... which a desktop Ryzen does not have), set up a RAID array on them (some performance-mode array, multiplying performance rather than providing data safety/redundancy) and sell it on LET for a few bucks. Wakey wakey :D vpsbench results are worthless horse shit

    btw. in most real usage scenarios, raw PEAK sequential read/write rates are not so crucial; IOPS are king.

  • @FrankZ said:

    @FAT32 said: I think we should try to add more load to the server... currently it is hard to tell whether the improvement is because the node is empty or because it is well configured.

    Currently running 2 cores at 100% and graphing 4.5 MB HTTPS image loads from 12 locations over the next 24 hours. Will post that tomorrow.

    I agree that there is not enough load, as I am getting 0% steal and 0 to 0.1% wait state with a load average a little over 2.

    Has Virmach said they're not throttling right now?

  • @Andrews said:

    @TimboJones said:
    Keep in mind that these drives are in RAID and performance will be some multiple of single-disk performance.

    For example, a Threadripper motherboard with bifurcation cards can do onboard RAID with something like 11+ NVMe drives. Those have PCIe lanes direct to the CPU, so they're not limited to one x4 link for all drives. I might do that with all the 256GB NVMe drives I have lying around.

    VirMach said previously they have a hardware RAID card (unless I misunderstood), and those are probably PCIe x8 cards.

    So testing on bare metal with single drives is how to conclusively call shenanigans.

    1.
    These were results for a specific machine with a specific CPU: the desktop-class AMD Ryzen 9 3900X,
    not a hypothetical Threadripper.

    2.
    This CPU has 24 PCIe 4.0 lanes.
    24, not the 64/128 that Threadrippers have.

    My entire point was that it won't be single-drive performance but RAID, and I gave an example of what I had on hand. The motherboard VirMach has could have three M.2 connectors plus PCIe slots for a 4+ NVMe soft RAID.

    3.
    We are on LET, and these are LOW END shared VPSes.

    This is on new Ryzen hardware where improved performance is expected, not ancient Xeons worth $40. You're not wrong that this is LET, but better performance is the goal here.

    So summing up, there is no need nor reason for wet fantasies that VirMach could theoretically have a Threadripper, put 11+ drives in it (consuming 44+ PCIe lanes... which a desktop Ryzen does not have), set up a RAID array on them (some performance-mode array, multiplying performance rather than providing data safety/redundancy) and sell it on LET for a few bucks. Wakey wakey :D

    You missed my point. You specifically said "(NVMe PCIe Gen4 x4 drives) has a PHYSICAL limit of 8GB/s", not taking into account that VirMach is using RAID. It's a technically incorrect statement, since that limit doesn't apply. Hardware RAID would be a 4.0 x16 slot, and software RAID would be PCIe 3/4 x4 per drive for at least 3 drives.

    vpsbench results are worthless horse shit

    You're preaching to the choir. I know 100% that the reported 11GB/s number is junk; my whole point was about your reasoning.

    Thanked by 1skorous
  • @TimboJones said:
    My entire point was that it won't be single-drive performance but RAID, and I gave an example of what I had on hand. The motherboard VirMach has could have three M.2 connectors plus PCIe slots for a 4+ NVMe soft RAID.

    I've already written to you that it is a desktop-class CPU with only 24 PCIe lanes,
    so you could not have three M.2 slots and a PCIe x16 card:
    (3 x 4 lanes = 12 lanes) + 16 lanes = 28 lanes
    28 lanes > 24 lanes
    It is simple math; how many times do I have to explain this?

    @TimboJones said:
    This is on new Ryzen hardware where improved performance is expected, not ancient Xeons worth $40. You're not wrong that this is LET, but better performance is the goal here.

    One more time: these VPSes have at most 8GB/s, not 11GB/s,
    and this 8GB/s is still ABSOLUTELY GREAT PERFORMANCE, something to be proud of.
    I could definitely recommend these.

    @TimboJones said:
    You missed my point. You specifically said "(NVMe PCIe Gen4 x4 drives) has a PHYSICAL limit of 8GB/s", not taking into account that VirMach is using RAID. It's a technically incorrect statement, since that limit doesn't apply. Hardware RAID would be a 4.0 x16 slot, and software RAID would be PCIe 3/4 x4 per drive for at least 3 drives.

    If you were more focused on this specific VPS (based on a desktop-class CPU) which we are talking about, instead of fantasizing about HEDT-class Threadripper CPUs, 11+ drive arrays, etc., you would know what kind of RAID controllers VirMach is using (they are: LSI 9460-16i, LSI 9460-8i and HighPoint SSD7103, as he mentioned).

    @TimboJones said:
    You're preaching to the choir. I know 100% that the reported 11GB/s number is junk; my whole point was about your reasoning.

    So summing up, you missed the points:
    - that it is a desktop-class CPU (not a Threadripper),
    - that it has a limit of 24 lanes at most (some of them are used for LAN, VGA...), not 28/64/128,
    - that if every other test shows disk speeds up to the mentioned limit of 8GB/s, then there is no reason to suppose some hidden magic RAID performance mode whose effect (38% perf up, 8GB/s->11GB/s) is detectable only by a pseudo "benchmark".

    In other words: you are arguing that a bicycle could do 1000mph if you attached a jet engine, while the issue is that in this case no jet engine was used, and one can't even be attached to this model of bicycle; it is simply that the speedometer on this bicycle is fundamentally broken :D

    EOT

  • FrankZ Veteran
    edited October 2021

    @FrankZ said:

    @FAT32 said: I think we should try to add more load to the server... currently it is hard to tell whether the improvement is because the node is empty or because it is well configured.

    Currently running 2 cores at 100% and graphing 4.5 MB HTTPS image loads from 12 locations over the next 24 hours. Will post that tomorrow.

    I agree that there is not enough load, as I am getting 0% steal and 0 to 0.1% wait state with a load average a little over 2.

    @TimboJones said: Has Virmach said they're not throttling right now?

    @VirMach said:
    I'm going to see if we can sneak in our modified anti-abuse script at some point in time to see if we can get some false positives going. It'd be a good way of adjusting leniency, just telling everyone to go crazy with it and seeing how far we can let it go without burning down the node or having false positives.

    Thanked by 1TimboJones
  • @VirMach said: Here's some more information on the disks for everyone:

    The X470 motherboards that will be used for the 3900X and 3950X have M.2 slots with only two PCIe Gen3 lanes. This means the maximum speed for any NVMe in these M.2 slots will be affected.
    We have NVMe SSDs ranging from 1900MB/s to 4900MB/s right now, with the majority around 3400MB/s.
    For the M.2 slots on the X470 motherboard, we only use a 2TB NVMe maximum size, and for 4TB NVMe we ensure at least four lanes on a riser card.
    Currently, it's configured to where people early to a node would end up on the smaller NVMe restricted to two lanes (and generally lower burst performance in the first place.)

    So for these current builds, it's both a blessing and a curse to get activated early if you're going after that burst speed. Real world, it doesn't negatively affect you much and once the node is full it may actually help out a little bit being on a smaller disk.

  • VirMach Member, Patron Provider
    edited October 2021

    @Andrews said: I've already written to you that it is a desktop-class CPU with only 24 PCIe lanes,
    so you could not have three M.2 slots and a PCIe x16 card:
    (3 x 4 lanes = 12 lanes) + 16 lanes = 28 lanes
    28 lanes > 24 lanes
    It is simple math; how many times do I have to explain this?

    I want to chime in here. I haven't been following what you guys have been saying 100%, so maybe this doesn't add anything or it's the wrong response, but:

    This is something that was tested extensively, as I had some concerns regarding performance since we were originally going to use RAID10 controllers. The motherboard has one two-lane PCIe 3.0 M.2 slot, one four-lane PCIe 2.0 M.2 slot, one PCIe 3.0 x16 slot, and another x16 (that's actually 3.0 x8); both act as x8 when both are populated. Then there's a final x8 that's actually labeled as 3.0 x4 on the diagram.

    On X470, there are 24 CPU lanes, 4 of which are used to connect to the chipset, and then 16 chipset lanes. On X570, there are 24 CPU lanes and 20 chipset lanes. That's a total of 32 and 36 usable lanes, respectively, if you burst all of them at the same time. I believe for X570, at least for the motherboards we would be using in the future, these extra 4 lanes translate to an extra M.2 slot, with 4 lanes from the CPU going to one slot and another 4 chipset lanes going to the other, instead of being split between two slots.

    Conclusion: on the current X470 boards, we can do 4x 4-lane PCIe 3.0 in bifurcation mode on an x16 slot for four 3.0 NVMe drives, leaving the second pretend-x16 slot empty. Then we could do one extra 3.0 NVMe drive on the last x4/x8 slot, as long as we can physically get to it with a flexible riser. Finally, there are the two M.2 slots on the board, but those would run at two-lane speeds. That means a total of 7x M.2 NVMe can be used if we wanted.
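
    (Tallying the drive-facing lanes just described:)

    4x NVMe via x16 bifurcation, 4 lanes each   = 16 CPU lanes
    1x NVMe on the last x4/x8 slot via riser    =  4 CPU lanes
    2x M.2 on the chipset (2-lane 3.0 + 4-lane 2.0)
    -----------------------------------------------
    7x NVMe total: all 20 usable CPU lanes, plus the chipset M.2 slots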

    But at this point, if you're still reading, you might be asking: I thought you said 32 usable lanes in total, and we haven't used them all; what happened? Well, we counted the 20 lanes that were left from the CPU (two PCIe 3.0 x8 and one PCIe 3.0 x4).

    For this motherboard's chipset lanes, they decided to do 2x 3.0 for one M.2 slot, 4x 2.0 for the other, and did not use 4x 3.0 from the processor for NVMe. Then 1x 2.0 goes to two additional SATA3 ports, 2x 2.0 to dual Gigabit, and 1x 2.0 to the BMC. The rest they just threw away. Side note: there is a 10 Gigabit version of this motherboard that instead does 1x16 and 1x8 (which combined act as 2x8) on the CPU, and uses the last 4 usable CPU lanes for the dual 10GbE. Then, for the chipset lanes, they do one lane for the third PCIe slot, one for the BMC, six for the M.2 slots, and waste even more lanes. I assume they wanted to interchange some parts with the non-10GbE version, so they didn't do a better layout, which could otherwise have physically resulted in two full-speed 3.0 M.2 slots.

    @Andrews said: it has a limit of 24 lanes at most (some of them are used for LAN, VGA...), not 28/64/128,

    So, TLDR: it's actually limited to 20 lanes, with 4 lanes used to communicate with the chipset; the 20 lanes go to PCIe slots, and the rest of the stuff on this board (such as LAN, VGA, and even M.2) uses the chipset lanes.

    @TimboJones said: You're preaching to the choir. I know 100% that the reported 11GB/s number is junk; my whole point was about your reasoning.

    I'm not sure how the benchmark works, but that speed is theoretically possible in a real-world situation, just not on that VPS. That VPS, to be very clear, is on the following NVMe specifically: https://www.amazon.com/Mushkin-Pilot-Encryption-Internal-MKNSSDPE1TB-D8/dp/B07RD1W6SB/ (and it is definitely throttled when it comes to maximum read performance).

    It's important I link the specific SSD because I'm about to explain my theory on why the benchmark may be showing 11GB/s.

    The SSD uses the Silicon Motion SM2262EN controller specifically, which is actually the reason we went with these. After a lot of testing, I found that outside of Samsung's (very expensive at the time) Phoenix controller, this was the best one we could get our hands on from a consistent supplier. Anyway, this controller has been known to be very, very good in some ways when it starts out. Right now, it's just starting out. Once it fills up, this will no longer be the case, but as it currently sits mostly empty, it has very good read latency. It's almost as good as Intel's super expensive (and discontinued) Optane drives when it's empty. To be more specific, it is only 33% worse than the Optane 900P when it comes to this specific figure at this specific moment in time.

    So my theory is this (and it may be wrong, because I came up with it immediately without looking into anything further): the benchmark may be attempting to correct or account for the average read latency, or in some way factoring in something related to read latency, which makes the drive appear faster than it really is. So while it may not truly be doing 11GB/s, it may perform as well as an 11GB/s drive would in certain situations, for the very specific operation it is doing. If this is the case, that would ironically mean this synthetic benchmark works better in a "real world" situation, since the current situation is not a real-world one; it's only possible because a few people are sharing an entire drive that isn't close to full yet.

    Thanked by 3FAT32 Andrews FrankZ
  • VirMach Member, Patron Provider

    Here's Geekbench 4 x86_64 for everyone by the way:

    https://browser.geekbench.com/v4/cpu/1636276

    We may start adding the benchmarks for all servers we deploy here, but so far I've posted them for the first two; these are actually from before putting people on them.

    Thanked by 2FAT32 FrankZ
  • FAT32 Administrator, Deal Compiler Extraordinaire

    @VirMach said:
    Here's Geekbench 4 x86_64 for everyone by the way:

    https://browser.geekbench.com/v4/cpu/1636276

    We may start adding the benchmarks for all servers we deploy here, but so far I've posted them for the first two; these are actually from before putting people on them.

    This is a OnePlus ONEPLUS A3010 benchmark? Is that a server? Or... an easter egg?

    Thanked by 1pullangcubo
  • JabJab Member
    edited October 2021

    https://browser.geekbench.com/v4/cpu/16362761
    https://browser.geekbench.com/v4/cpu/16362760

    VirMach can't into copy-paste and forgot 1 at the end.

    Thanked by 2FAT32 pullangcubo
  • FAT32 Administrator, Deal Compiler Extraordinaire

    @JabJab said:
    VirMach can't into copy-paste and forgot 1 at the end.

    Right... this is VirMach, always missing 1 character here and there

    Hope they forgot 1 character / digit from my bill too :joy:

  • VirMach Member, Patron Provider

    @JabJab said:
    https://browser.geekbench.com/v4/cpu/16362761
    https://browser.geekbench.com/v4/cpu/16362760

    VirMach can't into copy-paste and forgot 1 at the end.

    1... Plus

  • @VirMach, thanks for the details about your disks.

    The Mushkin drives mentioned are an older generation (2 years old), with a PCIe Gen3 x4 interface. Gen3 x4 means their physical limit is 4GB/s. Their effective speeds are 3.5/3.0 GB/s (read/write).

    As I wrote before, in my opinion it is IOPS that matter more, and these drives do 331/353k IOPS (read/write).

    Current-generation drives use a Gen4 x4 interface, which has a physical limit of 8GB/s. Top performers (Samsung 980 Pro, Samsung PM9A1, WD SN850, Corsair MP600 Pro XT...) reach effective transfer speeds of 7GB/s, and 1M IOPS.

    Moreover, the advantage of Optane drives over SSDs DOES NOT lie in raw transfer speeds (peak GB/s), but rather in their extremely low latencies compared to SSDs, which shows up in great IOPS/latency numbers and random-access workloads (e.g. database usage).

    So no, IOPS (especially these rather low ~300k figures) are no reason or excuse to display artificial sequential transfer numbers above the interface limits. Even for top-class Optane drives, no other benchmark displays such broken transfer speeds exceeding the physical interface limits.

    P.S. Regarding PCIe lanes: yes, different motherboards can expose different numbers, but effective throughput still can't exceed what is present on the CPU socket. So if a desktop-class Ryzen CPU has 24 Gen4 lanes, you can have AT MOST those 24 (minus a few for basic I/O) Gen4 lanes on the board, or more than 24 with some of them slower (half-speed Gen3, not Gen4). So Gen4 x24 is the limit.
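
    For reference, the per-lane arithmetic behind those interface limits:

    PCIe 3.0: 8 GT/s x (128/130 encoding) ≈ 0.985 GB/s per lane, so x4 ≈ 3.9 GB/s ("4GB/s")
    PCIe 4.0: 16 GT/s x (128/130 encoding) ≈ 1.969 GB/s per lane, so x4 ≈ 7.9 GB/s ("8GB/s")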
