
jsg, the "Server Review King": can you trust him? Is YABS misleading you?


Comments

  • Arkas Moderator

    @stevewatson301 said: Even when other members, including reputed providers, pointed out that Hostsolutions was selling off hardware and IPs, jsg repeatedly came up with wrong explanations of how hardware and BGP announcements work

    I have read it (as it was happening, all of it with the long non stop posts). My point is still the same. I am not disagreeing with you in theory and practice, I am disagreeing with the intentions behind the thread and its initial wording. Other than that, we have no real differences.

  • Arkas Moderator

    @chocolateshirt said: Nah you are definitely clueless old man. Hahaa.. You are better lurking than post a clueless & pointless comment

    Wow, the idiot post got to you eh? I guess even idiots know, somewhere deep inside, that they are idiots. At least that answers a few questions I had. I'll take it easy on you idiot from now on.

    Thanked by 2: chocolateshirt, jsg
  • Nah, definitely I will not prolong discussion with you. Clueless old man. Period.

    Thanked by 1: Arkas
  • Arkas Moderator

    @chocolateshirt said: Nah, definitely I will not prolong discussion with you. Clueless old man. Period.

    Idiot :wink:

    Thanked by 1: chocolateshirt
  • @Arkas said: I am disagreeing with the intentions behind the thread

    Can you please explain this to me? I want to understand what your point is here.

  • Arkas Moderator
    edited September 2021

    @samm said: Can you please explain this to me? I want to understand what's your point here?

    I don't think I can make it clearer.
    It started in another thread, and then a new one was created which, IMO, appears to have been designed to become, well, this.

  • bulbasaur Member
    edited September 2021

    So, I took some time and effort to run my tests on a public cloud provider so that anyone could verify them. However, @jsg interpreted the very fact that it was done on a public cloud provider as an indication that the provider might have different IOPS characteristics, although this absolutely wasn't the case, as the fio tests immediately proved otherwise.

    He (like many others) isn't willing to spend money on cloud providers, so I'm working on something that everyone can test using virtualization software on their laptops or desktops, no "evil" Amazon required. Stay tuned.

    Thanked by 1: fragpic
  • @stevewatson301 said: Stay tuned.

    ~ETA?

  • jsg Member, Resident Benchmarker

    FWIW: My position on the Contabo NVMe has changed, see here -> https://www.lowendtalk.com/discussion/comment/3273873/#Comment_3273873

    The reason is that some facts have changed and there is new data to back it up. To avoid misunderstandings: the reason is absolutely not some mischievous users here and their "I don't care if others are bleeding. I'm here for what I take to be fun" gang.

    I follow the data. If they change, my view changes.

  • @LTniger said:

    @stevewatson301 said: Stay tuned.

    ~ETA?

    Monday™, possibly sooner.

    Thanked by 2: Levi, TimboJones
    • Do companies bend the numbers? Yes
    • Is testing inaccurate? Could be

    There is a valid reason to stick with one tool for a certain baseline. Different tools run different workloads even when the same testing technique is applied. The problem will still remain that one tool performs better on hosting X than some other tool on Y.

    This is not unique to this thread and happens pretty much with everything, and it's healthy to discuss differences between testing tools. I wouldn't build my own testing tool, though; I'd use something proven and community-driven.

  • So, the problems with vpsbench that I pointed out can be reproduced outside of Amazon as well. My new VM based test is described below.

    What we're testing

    The point of contention is whether vpsbench can be trusted to accurately report, and be a good indicator of, disk performance. The best way to test this is to take a disk with known performance characteristics.

    I did this using Amazon gp3 disks limited to their baseline of 3000 IOPS and 125 MB/s, which demonstrated vpsbench reporting numbers that were way off. @jsg disagrees with this methodology, as he believes that a virtualized storage framework (which Amazon is surely using to offer gp3 disks) could hard-limit I/O, but the fio benchmarks disproved this.

    As this was still deemed unsatisfactory (unfortunately, with unsubstantiated name-calling and theoretical hypothesizing), I'll use a much less controversial way to limit I/O: native mechanisms of the Linux kernel. Specifically, I'll use Docker to place these limits. The only problem is that this experiment is much less tightly controlled, as not everyone has the exact same hardware as me, but the general issue should be reproducible on any modern system.

    Specifically, this test will emulate a slow disk by instructing the OS to slow down or deprioritize I/O requests from the Docker container, whereas in the initial test, the gp3 volume presented a disk that appears slow to the OS. This difference should be insignificant, though, as all userspace applications (applications that require an OS to run) have to pass I/O requests through the kernel, and a slow disk would look the same as the OS processing I/O requests slowly.
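
    For context, Docker's --device-read-iops/--device-write-iops/--device-*-bps flags are applied through the kernel's cgroup I/O controller; on a cgroup v2 host, the equivalent io.max entry would look roughly like this (the 8:0 major:minor pair for /dev/sda and the exact byte values are illustrative assumptions):

    ```
    8:0 riops=1500 wiops=1500 rbps=68157440 wbps=68157440
    ```

    (68157440 bytes/s = 65 MiB/s in each direction.)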

    Prerequisites

    As mentioned, this experiment is far looser than the first one because it has to be reproducible on any modern Linux system. However, in the interest of transparency, this is what I used to test:

    • A Virtualbox VM with 4 cores and 2GB RAM, 20GB disk backed by a fixed size VDI file (as opposed to dynamically allocated, to prevent any unexpected I/O performance variation).
    • Ubuntu 21.04 with / being formatted as xfs (this is for disk I/O quotas to work).
    • The underlying hardware is an Intel Mac Pro 2019, 16GB variant, but like I said, it can be reproduced elsewhere as long as you're running Linux. It could be your VPS, a dedi, a Linux desktop, or a VM like in my case. (Although if you run it on a VPS, ensure that it has sufficient CPU and consistent disk performance for this test.)
    • Make sure you're shutting down any unnecessary processes on the host if you're going the VM route, since they may consume resources and skew the test.

    The tests

    To get an idea of what kind of I/O performance the VM itself has (I'll keep referring to our system as a VM from now on, as that is where I tested), I'll begin with two fio tests: a 4k and a 1m random read-write (50/50) test. Make sure to run these multiple times to get an accurate estimate of performance.

    fio --name=rand_rw_4k --ioengine=libaio --rw=randrw --rwmixread=50 --bs=4k --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    
    fio --name=rand_rw_1m --ioengine=libaio --rw=randrw --rwmixread=50 --bs=1m --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    

    In my case, I ended up with these, which should be fairly representative given my other runs:

    steve@bench:~$ fio --name=rand_rw_4k --ioengine=libaio --rw=randrw --rwmixread=50 --bs=4k --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_rw_4k: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
    ...
    fio-3.25
    Starting 2 processes
    Jobs: 2 (f=2): [m(2)][100.0%][r=64.0MiB/s,w=64.2MiB/s][r=16.4k,w=16.4k IOPS][eta 00m:00s]
    rand_rw_4k: (groupid=0, jobs=2): err= 0: pid=4225: Mon Sep  6 13:30:37 2021
    read: IOPS=15.9k, BW=62.2MiB/s (65.2MB/s)(1866MiB/30011msec)
    bw (  KiB/s): min=42240, max=87008, per=100.00%, avg=63685.92, stdev=5268.42, samples=118
    iops        : min=10560, max=21752, avg=15921.41, stdev=1317.10, samples=118
    write: IOPS=15.9k, BW=62.3MiB/s (65.3MB/s)(1869MiB/30011msec); 0 zone resets
    bw (  KiB/s): min=42240, max=86400, per=100.00%, avg=63814.92, stdev=5274.96, samples=118
    iops        : min=10560, max=21600, avg=15953.68, stdev=1318.73, samples=118
    cpu          : usr=0.89%, sys=67.66%, ctx=289743, majf=0, minf=16
    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
        submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
        complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
        issued rwts: total=477579,478513,0,0 short=0,0,0,0 dropped=0,0,0,0
        latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
    READ: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=1866MiB (1956MB), run=30011-30011msec
    WRITE: bw=62.3MiB/s (65.3MB/s), 62.3MiB/s-62.3MiB/s (65.3MB/s-65.3MB/s), io=1869MiB (1960MB), run=30011-30011msec
    
    Disk stats (read/write):
    sda: ios=475699/476618, merge=4/6, ticks=569862/202852, in_queue=772738, util=99.85%
    
    steve@bench:~$ fio --name=rand_rw_1m --ioengine=libaio --rw=randrw --rwmixread=50 --bs=1m --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_rw_1m: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
    ...
    fio-3.25
    Starting 2 processes
    Jobs: 2 (f=2): [m(2)][-.-%][r=618MiB/s,w=657MiB/s][r=618,w=657 IOPS][eta 00m:00s]
    rand_rw_1m: (groupid=0, jobs=2): err= 0: pid=4233: Mon Sep  6 13:31:00 2021
    read: IOPS=622, BW=622MiB/s (653MB/s)(1982MiB/3184msec)
    bw (  KiB/s): min=546816, max=691557, per=97.91%, avg=624077.00, stdev=23738.61, samples=12
    iops        : min=  534, max=  675, avg=609.33, stdev=23.13, samples=12
    write: IOPS=663, BW=664MiB/s (696MB/s)(2114MiB/3184msec); 0 zone resets
    bw (  KiB/s): min=563200, max=735232, per=97.72%, avg=664347.00, stdev=31550.43, samples=12
    iops        : min=  550, max=  718, avg=648.67, stdev=30.79, samples=12
    cpu          : usr=1.07%, sys=2.48%, ctx=713, majf=0, minf=18
    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
        submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
        complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
        issued rwts: total=1982,2114,0,0 short=0,0,0,0 dropped=0,0,0,0
        latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
    READ: bw=622MiB/s (653MB/s), 622MiB/s-622MiB/s (653MB/s-653MB/s), io=1982MiB (2078MB), run=3184-3184msec
    WRITE: bw=664MiB/s (696MB/s), 664MiB/s-664MiB/s (696MB/s-696MB/s), io=2114MiB (2217MB), run=3184-3184msec
    
    Disk stats (read/write):
    sda: ios=2067/2281, merge=0/34, ticks=70990/116843, in_queue=187925, util=95.96%
    

    From the 4k test, we see that our disk can handle a minimum of 10560 read/write IOPS, and from the 1m test, a minimum throughput of about 620 MB/s. This gives us an upper bound on how much I/O our VM can safely handle.
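
    As a quick cross-check of fio's own numbers (my arithmetic, not part of the original runs): the minimum IOPS and minimum bandwidth figures in the 4k output are consistent, since 10560 requests/s at 4 KiB each is exactly the reported 42240 KiB/s floor:

    ```shell
    # fio's min bandwidth implied by its min IOPS at a 4 KiB block size:
    # 10560 IOPS * 4 KiB = 42240 KiB/s, matching the "bw min=42240" line above
    echo "$((10560 * 4)) KiB/s"
    ```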

    Now I'll create a Docker container limited to an IOPS value below what we inferred from the above tests. In the command below, I limit it to 1500 IOPS and a throughput of 65 MB/s in each direction (read/write):

    docker run --volume $PWD:/opt:ro,delegated --device-read-iops=/dev/sda:1500 --device-write-iops=/dev/sda:1500 --device-read-bps=/dev/sda:65mb --device-write-bps=/dev/sda:65mb --rm -it ubuntu:21.04
    

    For YABS to work (you could also use fio directly, but YABS is just a thin wrapper around fio anyway), I install fio. This step is technically not required, but then I'd have to fudge around with curl so that YABS can download fio. As a nice side effect, since both the VM and the container are on Ubuntu 21.04, it keeps the fio version the same on both:

    apt update && apt install -y --no-install-recommends fio
    

    Now, if I run vpsbench, this is what I get:

    root@abd8789bca66:/# /opt/vpsb-lx64-210a 
    Version 2.1.0a, (c) 2018+ jsg (->lowendtalk.com)
    Machine: amd64, Arch.: x86_64, Model: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
    OS, version: Linux 5.11.0, Mem.: 1.954 GB
    CPU - Cores: 4, Family/Model/Stepping: 6/158/10
    Cache: 32K/32K L1d/L1i, 256K L2, 12M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
            pse36 cflsh mmx fxsr sse sse2 htt sse3 pclmulqdq ssse3 cx16 pcid
            sse4_1 sse4_2 x2apic movbe popcnt aes xsave osxsave avx rdrnd
            hypervisor
    Ext. Flags: fsgsbase avx2 invpcid fpcsds rdseed clflushopt syscall nx rdtscp lm
            lahf_lm lzcnt
    
    [T] 2021-09-06T13:36:28Z
    ----- Processor and Memory -----
    .................
    [PM-SC] 457.86 MB/s (testSize: 2.0 GB)
    .....
    [PM-MA]   1.22 GB/s (testSize: 2.0 GB)
    .................
    [PM-MB]   1.37 GB/s (testSize: 8.0 GB)
    ----- Disk -----
    [D] Total size per test = 2048.00 MB, Mode: Sync
    ..... [D] Wr Seq:  29.29 s ~    69.91 MB/s
    ..... [D] Wr Rnd:  29.48 s ~    69.48 MB/s
    ..... [D] Rd Seq:  29.27 s ~    69.97 MB/s
    ..... [D] Rd Rnd:   0.90 s ~  2272.97 MB/s
    ----- Network -----
    Error: Can't open file ntargets
    

    Unlike the Amazon EC2 test, where the Wr Seq/Wr Rnd values were somewhat off and Rd Seq/Rd Rnd were way off, in this environment vpsbench detects the I/O limits pretty closely for the first three tests (69 MB/s as opposed to 65 MB/s). But then we have the fourth value, which is way off, similar to my original post. It also defies any reasonable explanation: how does a random read test manage to outperform a sequential read test?!

    Here are the YABS results for comparison:

    root@abd8789bca66:/# /opt/yabs.sh -gi 
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2021-06-05                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Mon Sep  6 13:39:16 UTC 2021
    
    Basic System Information:
    ---------------------------------
    Processor  : Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
    CPU cores  : 4 @ 2592.000 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ❌ Disabled
    RAM        : 1.9 GiB
    Swap       : 2.0 GiB
    Disk       : 20.0 GiB
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
    ------   | ---            ----  | ----           ---- 
    Read       | 5.97 MB/s     (1.4k) | 66.27 MB/s    (1.0k)
    Write      | 5.96 MB/s     (1.4k) | 66.70 MB/s    (1.0k)
    Total      | 11.94 MB/s    (2.9k) | 132.98 MB/s   (2.0k)
            |                      |                     
    Block Size | 512k          (IOPS) | 1m            (IOPS)
    ------   | ---            ----  | ----           ---- 
    Read       | 63.52 MB/s     (124) | 62.33 MB/s      (60)
    Write      | 66.60 MB/s     (130) | 66.55 MB/s      (64)
    Total      | 130.12 MB/s    (254) | 128.89 MB/s    (124)
    

    These results are quite close to what's expected. The maximum IOPS detected is slightly lower (2.9k instead of 3k) and the throughput is also slightly off, but with a much lower margin of error than vpsbench's.
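
    As a sanity check on those YABS numbers (my arithmetic, not from the original run): at the 1500 IOPS per-direction cap, 4 KiB blocks can move at most 1500 × 4096 bytes per second in each direction, which lines up with the ~5.97 MB/s that YABS reports for the 4k read test:

    ```shell
    # Throughput ceiling implied by a 1500 IOPS cap at a 4 KiB block size
    echo "$((1500 * 4096)) bytes/s per direction"   # ~6.1 MB/s; YABS measured ~5.97 MB/s
    ```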

    I've run these multiple times to ensure that these results are not one-offs, and you're encouraged to walk through them yourself.

    Conclusion

    You cannot generally trust vpsbench to accurately represent disk speeds, especially read speeds. If you use the Linux version, you could trust the write speeds, although depending on the environment, they can have a fair margin of error.

  • @stevewatson301 well it's just fio's version of truth and vpsbench's version of truth.
    Choose which truth you trust and keep it simple.

  • The truth is out there.

  • jsg Member, Resident Benchmarker
    edited September 2021

    @stevewatson301

    Your first problem is that you start from a bunch of assumptions, e.g. that Docker, VirtualBox, or whatever actually respect -and- can enforce the limits you set. This however is only true to some degree, because NO OS or hypervisor really tightly controls limits, emphasis on 'tightly'. Simple reason: it doesn't come for free, so the question arises of how much time and how tight a control loop one is willing to spend in order to achieve some level of tight control. As always with that kind of question, and they arise very often in computing, there's a trade-off, and one must decide where one wants to be on the axis between the two extremes ("No control at all, no cost at all" vs. "Perfect control, very high cost").

    Gladly, benchmarks don't care much, they just work with and measure what is available, be that GB per second or be it KB/s - just like the applications users run.

    Two points that you seem to consistently fail to understand:
    (a) benchmarks are different and you just can't take benchmark A as "the reference" to judge benchmark B.
    vpsbench works differently from fio and particularly yabs. For example, unlike yabs, which uses AIO and even multiple jobs, vpsbench intentionally uses single-threaded testing and standard IO. Why? To name just one reason, the vast majority of programs in the vast majority of cases do not use AIO, and it's of questionable value, especially on LET (hint: low end [boxes and] talk), to use multithreading when many systems under test only have 1 vCore and often not a lot of memory. Also, while multi-threading overhead has significantly decreased on modern OSs, it still doesn't come for free. That's why vpsbench only uses it in network testing, where really significant blocking time is to be expected.
    (b) A weird result, if obtained by well-proven code, doesn't mean that the code or algorithm is bad. It simply means that there is something out of the ordinary behind the scenes, often an actually interesting mechanism.
    And indeed I like the fact that vpsbench does detect those things! Where you see a problem or defect I see an interesting and in fact often valuable hint, typically pointing something out to me (e.g. about the caching of the node).

    I don't want to talk about fio, especially not badly; I'm confident that it provides what the author wanted from his tool. But I intentionally use another approach, namely a very simple, "boring" standard mechanism ("read or write X bytes Y times"), and focus on precise timing results. Plus, as I've said many times, I intentionally try to be a good neighbour by pausing between "blocks"/slices; it's just a few milliseconds, but that makes all the difference between behaving like a pig on a VPS and behaving like a gentleman. But of course some milliseconds is a lot of time on a fast system; enough time, for example, to update many caches.

    As for what you complain about, the "weird" random read result, that's relatively simple to explain: given that the memory is available, a modern OS knows to cache large chunks or, if possible, even the whole file. With a test file size of 2 GB, that's quite feasible. The result is the "weird" high value you see, because unlike with sequential reading, the read is served from a large cache most of the time rather than from the drive. fio (as used by yabs) doesn't see that because it works quite differently. That doesn't mean that yabs/fio is a bad benchmark; but nor does it mean that mine is bad. They are different and had different goals.

    Btw, keep in mind that AIO doesn't really make the IO faster. AIO is meant to let its callers (usually programs or libraries) waste less time blocking.
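
    The caching effect described above is easy to demonstrate outside of any benchmark. A minimal sketch, assuming a Linux box with GNU coreutils and enough free RAM to hold the test file (the file size and the exact speed-up are illustrative):

    ```shell
    # Write a 64 MiB file, then read it twice; the second read is typically
    # served from the page cache rather than the disk, so it is much faster.
    dd if=/dev/urandom of=cache_demo.bin bs=1M count=64 2>/dev/null
    start=$(date +%s%N); cat cache_demo.bin >/dev/null
    cold_ms=$(( ($(date +%s%N) - start) / 1000000 ))
    start=$(date +%s%N); cat cache_demo.bin >/dev/null
    warm_ms=$(( ($(date +%s%N) - start) / 1000000 ))
    echo "first read:  ${cold_ms} ms"
    echo "second read: ${warm_ms} ms"
    rm -f cache_demo.bin
    ```

    This is the mechanism by which a benchmark that doesn't bypass the page cache can report read speeds far above what the underlying disk (or an I/O cap) allows.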

  • @jsg said:
    @stevewatson301

    Your first problem is that you start on a bunch of assumptions, e.g. that Docker, virtualbox, or whatever actually respect -and- can enforce the limits you set. This however is only true to some degree, because NO OS or hypervisor really tightly controls limits, emphasis on 'tightly'.

    You can see in his previous tests that the YABS result was in line with the limits he set, which disproves what you said.

    @drunkendog if the OS or hypervisor really can tightly control limits, emphasis on 'tightly', it's just another version of truth.

    Well, I see that version of truth a lot, on nearly a hundred servers I'm maintaining. But maybe it's just me; maybe a different version of truth exists out there.

  • jsg Member, Resident Benchmarker
    edited September 2021

    @drunkendog said:

    @jsg said:
    @stevewatson301

    Your first problem is that you start on a bunch of assumptions, e.g. that Docker, virtualbox, or whatever actually respect -and- can enforce the limits you set. This however is only true to some degree, because NO OS or hypervisor really tightly controls limits, emphasis on 'tightly'.

    You can see in his previous tests that the YABS result was in line with the limits he set, which proves what you said wrong.

    No, it doesn't. (a) If it proves anything, then it proves that the limiting mechanism worked better with the way yabs/fio does its testing than with the way vpsbench does its testing. And btw, even that is questionable, because those limits apply to IO but not to OS caching. And (b) we are talking about a rather small difference, about 66 MB/s vs. about 69 MB/s, which is easily explained by my tiny pauses between reads/writes allowing for some cache filling, or in fact even by the different ways the two benchmarks do their testing (e.g. multiple simultaneous reads/writes vs. single reads/writes).

    Btw, (most) users are not interested in how well IO can be limited anyway. They are interested in how much IO they can get/expect.

    And your comment doesn't show my comment you quoted wrong in any way.

  • @jsg said: limiting mechanism worked better with the way yabs/fio does its testing than with the way vpsbench does its testing

    So my recommendation would be to not run extremely small slices of tests and linearly extrapolate them to very large values, as this is proving to be quite an incorrect way to infer the performance of a system.

    If you want more evidence of why this is the case, get one of those Oracle free-tier instances with their 50 MB/s caps -- quite slow for any real task, but this is the random read value reported on those instances:

    ..... [D] Rd Rnd:   1.60 s ~  1276.17 MB/s
    

    @jsg said: They are interested in how much IO they can get/expect.

    And it fails to do that, because most providers, including those on LET, have I/O caps; otherwise people would trash the disk to shit with torrents and Chia. Also, @Falzo even provided a test from his dedi that shows near-RAM speeds, which should tell you that something with your benchmark is not quite right?

    @jsg said: OS caching

    When I/O limits are placed at the OS level, all I/O syscalls are slowed down. And with fio, direct=1 (O_DIRECT in Linux syscall terms) should bypass the OS caches.

    @jsg said: 69 MB/s which is easily explained by my tiny pauses

    For this test, the issue is with the Rd Rnd value, and I said as much.

    @stevewatson301 said: vpsbench manages to detect the I/O limits pretty closely for the first three tests

    Of course, on that first test with AWS, both read values were inaccurate, so this is not a recommendation that one can trust either of the Rd Seq and Rd Rnd values.

    @jsg said: AIO doesn't really make the IO faster

    Not sure why you bring up this difference; I just happen to use the aio engine, but of course, with the kinds of I/O tests being performed, I believe it can be swapped out for any other implementation, as none of the disks that I'm using (or the I/O limits that I've placed) have sufficient performance for these differences to be significant.

  • jsg Member, Resident Benchmarker
    edited September 2021

    @stevewatson301 said:
    So my recommendation would be to not perform extremely small slices of tests and linearly extrapolate them to very large values, as this is proving to be quite the incorrect way to infer performance of a system.

    One can do tests with small slice sizes, medium slice sizes, large slice sizes ... just like with fio (and many other benchmark tools).

    ..... [D] Rd Rnd:   1.60 s ~  1276.17 MB/s
    

    @jsg said: They are interested in how much IO they can get/expect.

    And it fails to do that, because most providers, including those on LET, have I/O caps, otherwise people would trash the disk to shit with torrents and Chia. Also, @Falzo even provided a test from his dedi that shows near RAM speeds, which should tell you that something with your benchmark is not quite right?

    I have been asked how vpsbench works, and I have laid out how it works. All tests, read or write, sequential or random, buffered or sync, use exactly the same routine, except that the random tests work with random locations rather than sequentially.
    Yet you either ignore that or don't grasp it, but stubbornly make statements that are definitely false. And of course you pick the results that fit your agenda, btw not even understanding that those "weird" values actually contain interesting information.

    @jsg said: OS caching

    When placing I/O limits at the OS level, all I/O syscalls are slowed down. And with fio, direct=1 (or O_DIRECT in Linux syscall terms) should bypass the OS caches.

    And again, you seem to not really understand what you are talking about. For a start, those calls (mostly) bypass the guest VM's caches but not the node's caches. Plus, there is a difference between writing and reading. Plus, fio's (as used by yabs) way of working is radically different from vpsbench's; of bloody course you get different results, and of bloody course throwing 64 or however many calls basically simultaneously at the OS vs. doing one call at a time leads to very significant changes in cache performance.

    So, if a user happens to be interested only in some specific use case, say a database, using fio with the right parameters is better than vpsbench, but I don't care, because I'm after the performance a user can generally expect.
    There is only one thing so far I see worthy of some consideration: vpsbench - until now - only tested with one (1) block/slice size. I understand that it would be desirable to test with a small (4 KB), a medium (64 KB), and a large-ish (1 MB) block size. And that "lesson" I learned not only from listening to feedback (not yours, though) but have already implemented in my newest version (another good example of how the utterly idiotic and nasty allegation that I never listen to user feedback, yada yada, is ridiculously wrong).

    I'm getting tired of you again and again turning your lack of understanding into alleged weaknesses of my vpsbench. Plus, you totally ignore anything not fitting your private war.

  • @Arkas said:

    @TimboJones said: What other alt accounts do you have here?

    I don't have any, don't need to hide behind them like apparently you do. An endorsement by you to the OP doesn't give him more rep, it lowers it because of the weight of yours, which is heavy and of low quality.

    Please link to where you have actually run the program and concur with the results of vpsbench. I have tested it and explained the problems with the results in detail for 1-2 years. Wtf makes you think LET gives a flying fuck about your useless opinions?

    Thanked by 1: chocolateshirt
  • @Arkas said:

    @chocolateshirt said: Look like you are old enough to read, please read a lot on LET especially discussion within 3 or 4 months before on Chest Pit, Cociu, Dedicat, and Contabo threads before made such clueless & pointless comment.

    Be like Robert De Niro; he is a good performer and knows how to be a good guy.

    I've been here longer than you have, so I don't understand your POV. I've read all those threads, what's that got to do with this one?
    Stop eating too much chocolate, it's messing with your brain :smile:

    15 visits in 6 years? Reading all those threads? And no alt accounts? I call shenanigans.

    Thanked by 1: chocolateshirt
  • @comXyz said:
    @stevewatson301 well it's just fio's version of truth and vpsbench's version of truth.
    Choose which truth you trust and keep it simple.

    This isn't a case of two watches, one slightly fast and one slightly slow. This is one being slightly off and one being the wrong hour and minute.

  • @TimboJones said:

    @comXyz said:
    @stevewatson301 well it's just fio's version of truth and vpsbench's version of truth.
    Choose which truth you trust and keep it simple.

    This isn't a case of two watches, one slightly fast and one slightly slow. This is one being slightly off and one being the wrong hour and minute.

    I think it's more like 2 watches with different methods of timekeeping - contradicting each other, in this analogy..

  • @jsg said:
    As for what you complain about, the "weird" random read result, that's relatively simple to explain: given the memory is available a modern OS know that it needs to cache large chunks or if possible even the whole file. With a test file size of 2 GB that's well feasible. The result is the "weird" high value you see because unlike with sequential reading, the read ends up in a large cache most of the time rather than on the drive. fio (as used by yabs) doesn't see that because it works quite differently. That doesn't mean that yabs/fio is a bad benchmark - but nor does it mean that mine is bad. They are different and had different goals.

    OMG, this screams "you're doing it wrong". It's especially egregious considering all the boasting about the caching testing you've done, and how you supposedly do things to keep caching from affecting results (if I weren't on my phone, I'd dig up your previous posts on this topic).

    You're saying you're perfectly OK with a disk benchmark where NO data is read from the storage media (since it's completely in cache), which is a defective test setup in the first place. That's a facepalm fail.

    (Also, a random read test uses a single 2GB file?!?!)

    Thanked by 1: vimalware
  • Arkas Moderator

    @TimboJones said: Wtf makes you think LET gives a flying fuck about your useless opinions?

    Who made you the LET King you dumbass hillbilly.

  • TimboJones Member
    edited September 2021

    @dahartigan said:

    @TimboJones said:

    @comXyz said:
    @stevewatson301 well it's just fio's version of truth and vpsbench's version of truth.
    Choose which truth you trust and keep it simple.

    This isn't a case of two watches, one slightly fast and one slightly slow. This is one being slightly off and one being the wrong hour and minute.

    I think it's more like 2 watches with different methods of timekeeping - contradicting each other, in this analogy..

    That sounds like an MJJ answer ;)

    (Our clock and calendar, all else can GFT).

    Thanked by 1: dahartigan
  • @Arkas said:

    @TimboJones said: Wtf makes you think LET gives a flying fuck about your useless opinions?

    Who made you the LET King you dumbass hillbilly.

    Please use proper punctuation when making insults, especially a hillbilly one. eyeroll

  • skorous Member
    edited September 2021

    @Arkas said:

    @TimboJones said: Wtf makes you think LET gives a flying fuck about your useless opinions?

    Who made you the LET King you dumbass hillbilly.

    Hillbillies are an American thing. If Letterkenny is to be believed, the Canadian term is 'hicks'. Rednecks, however, are geographically unrestricted. Go with that next time.

    Edit: typo

  • Arkas Moderator

    Ok, you dumbass Redneck :wink:
    Actually, I think redneck is more offensive, as it has racial undertones.
