
jsg, the "Server Review King": can you trust him? Is YABS misleading you?


Comments

  • Neoon Community Contributor, Veteran

    YABS is just a snapshot of a moment; it can be used as propaganda if you want to.
    Run it on an empty node and profit.

    I've seen it a few times: after the sale, or a few months later, performance was a lot worse.
    Not always the case, but sometimes.

    You would need consistently updated benchmarks, and even then you don't know which node he is on, or whether that node is loaded the same as the other nodes.

    Even if the provider doesn't have an interest in manipulating it, the result can still be skewed indirectly: if he is lucky enough to be on a node with fewer virtual machines and/or less load, he gets better benchmark results.

  • mrclown Member
    edited September 2021

    What are we supposed to see in this thread? A bug fix? Someone admitting his script or his test is wrong? Relax.

    Most of us just test performance to compare one provider with another using the same tool. I don't think we ever compare the tools themselves and accept/reject the results as wrong.

    No offense. I usually use bench.sh, bench.monster, a speed test (particular location), wget (test file download), and YABS on a fresh new dedi/VPS to compare networks, providers, and nodes, but most servers go into idle mode after that. :smile:

  • bulbasaur Member
    edited September 2021

    @Neoon said: they could still do it indirectly, if he is lucky to be on a node

    Of course! I'm not saying that YABS is the be-all and end-all; providers could put you on empty nodes too.

    I've had providers who went ahead and gave me YABSes, but when I bought their service, their disk performance was so throttled that I couldn't log in via SSH. Their support would say "please log in to the server and give us a YABS", and when I did manage to log in, everything would seem normal per the YABS, only for the VPS to become unresponsive and throttled again.

    But anyone relying on jsg's vpsbench is going to get a very skewed result; you can check the Contabo thread and even my analysis for how badly it skews results. That's why I went out of my way to use AWS for this test, to ensure that I could be as fair and consistent as possible in judging both benchmark tools.

    @mrclown said: What are we supposed to see in this thread?

    Don't go by the titles assigned to a user on LET. The addition of "Server Review King" almost makes him seem like an authority on the subject, when he's actively misleading people.

  • I still reckon good old bench.sh is enough for me.

  • kassle Member
    edited September 2021

    wait, you all bench your VPSes?

    next: RGB VPS trend

    #VPSMR

  • @jackb said:

    @jsg said:

    @jackb said:

    @jsg said:
    Now, let's look at your methodology. Basically you are saying that vpsbench must be wrong because Amazon certainly wouldn't lie. Well, that's one way to look at things. But it's not mine. A benchmark is about getting the data, the facts, not about trusting a company.

    It looks very much to me like your read tests are hitting RAM. There's no way Amazon is giving people 8 GB/s disk read (sync).

    Evidently, yes. But testing VMs necessarily means testing a virtual machine which translates to e.g. diverse sorts and levels of caches beneath the VM.

    Surely a benchmark tool testing a cache should be listed as such? Fio is giving what looks like the correct result and your tool is giving a cached result. You keep evading this point - which I've raised before - rather than taking the feedback and fixing your tool.

    If this was down to VM or host config both fio and your tool should be equally impacted.

    there is something off with the numbers that the disk bench gives in general, and I doubt this is even related to caching. here is a bench from a dedi, spinning rust in soft RAID1:

    [T] 2021-09-05T11:40:34Z
    ----- Processor and Memory -----
    .................
    [PM-SC] 423.84 MB/s (testSize: 2.0 GB)
    .
    [PM-MA]   1.02 GB/s (testSize: 2.0 GB)
    .................................
    [PM-MB]   1.10 GB/s (testSize: 16.0 GB)
    ----- Disk -----
    [D] Total size per test = 2048.00 MB, Mode: Sync
    ..... [D] Wr Seq:  38.48 s ~    53.22 MB/s
    ..... [D] Wr Rnd:  23.73 s ~    86.29 MB/s
    ..... [D] Rd Seq:   0.26 s ~  7942.94 MB/s
    ..... [D] Rd Rnd:   0.28 s ~  7382.23 MB/s
    ----- Network -----
    

    close to 8 GB/s from a SATA hard disk would be a dream... but even memory won't get close to this (a cached dd of 2G of zeroes ends up at about 1 GB/s on that system).
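
    For reference, a minimal sketch of that kind of cached-vs-direct dd comparison (file name and sizes are illustrative, not the exact commands used above):

    # write a 2G test file and flush it to disk
    dd if=/dev/zero of=testfile bs=1M count=2048 conv=fsync
    # buffered read: a repeat run is served largely from the page cache
    dd if=testfile of=/dev/null bs=1M
    dd if=testfile of=/dev/null bs=1M
    # direct read: bypasses the page cache, so the disk itself is measured
    dd if=testfile of=/dev/null bs=1M iflag=direct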

    it has been pointed out before that these numbers represent something else and can't be compared to what others would define as read sequential/read random, mode sync - but I guess that's how it is. maybe there even is an oversight in the calculation; without looking at the code, no one will know what exactly is going on there or how this number is determined.
    maybe it's some sort of averaging, calculation or aggregation, and it's lacking a divider or whatever.
    maybe it is intended like this, but there are not many people who understand why these kinds of numbers are the better ones.

  • jsg Member, Resident Benchmarker
    edited September 2021

    @jackb said:
    Surely a benchmark tool testing a cache should be listed as such? Fio is giving what looks like the correct result and your tool is giving a cached result. You keep evading this point - which I've raised before - rather than taking the feedback and fixing your tool.

    Every benchmark program running on and testing a VM is subject to the fact that caching is going on (plus other factors) beneath the VM on the node. There is nothing to evade; we are talking about a simple fact.

    If this was down to VM or host config both fio and your tool should be equally impacted.

    ... IF both did the same tests with the same parameters - which is rarely the case.

    @stevewatson301 said:

    @jsg said:

    @stevewatson301 said:

    @jsg said: What disk performance precisely?

    What's unclear about "3000 IOPS or 125 MB/s, whichever is greater"? Did you read the AWS documentation?

    Read or write performance? Buffered or direct/sync? Sequential or random? There are other factors too like e.g. what data (well cacheable or random and if the latter then what quality).

    Buffering at the OS level is, well, an OS level concern and I'm not sure why you want to indicate that AWS would make guarantees about the OS. Of course it's possible to have buffering at the hardware level too, but that is part of the hardware and all IOPS are limited there.

    Misunderstanding it seems. I was talking about what is measured, not about the OS.

    I'll post findings below this reply that show that this is the case.

    And that's not just me. Everyone in the field worth his salt knows that one can - and in marketing usually does - bend the numbers. Typical example: of course one publishes the best result out of diverse tests.

    I'm not sure why AWS wouldn't want to pick the 4000MB/s or 7000MB/s over their measly 125MB/s. Wouldn't it make them look better?

    That's something you must ask Amazon, not me. And it doesn't change what I said.
    Anyway, the question of what those numbers mean is justified. Does it mean that your AWS VM model achieves "3000 IOPS or 125 MB/s, whichever is greater" as a minimum, each and every second of each and every day? Or on average, and if so, over what time period? And measuring what: read or write, sequential or random, buffered or direct/sync? If you don't have answers to that, then "3000 IOPS or 125 MB/s, whichever is greater" has very little value.

    Also: please, get it already! You cannot willy-nilly compare results of different benchmarks, in particular when those benchmarks work significantly differently!

    One particularly striking example: my reviews are not based on one (1) or a few runs but usually on 100 or more runs, at unpredictable times spread across the whole day. In fact, "benchmark xyz always delivers very similar results" is not an indicator of quality, because by their very nature VPSes deliver quite different results over, say, a day.

    Plus there is much going on below the surface. One example: I went to great lengths to really test random disk IO, meaning that both the data and the locations (in the test file/on the disk) are truly random. And by truly random I mean that a well-accepted, good-quality random generator is at work, one that has passed all the usual tests (e.g. PractRand, BigCrush).
    Why? Because modern operating systems cache aggressively and may simply write out the buffer again when the data to be written are the same; or they may notice that a location is a known step away from the last location, for example because a "random" IO test actually just skips a fixed number of sectors between IO operations.
    I actually tested that. I did exactly the same operations, once with a fixed skip and once with truly random skips, and there was a noticeable difference.
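
    To illustrate the difference, a minimal sketch with fio (file name and sizes are illustrative; fio's rw=write:64k form does sequential IO that skips a constant 64k between operations, while randwrite draws offsets from a PRNG):

    # fixed skip: every write lands a constant 64 KiB after the previous one
    fio --name=fixedskip --filename=test.bin --size=2G --bs=4k --rw=write:64k --direct=1 --ioengine=psync
    # truly random offsets
    fio --name=random --filename=test.bin --size=2G --bs=4k --rw=randwrite --direct=1 --ioengine=psync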

  • jsg Member, Resident Benchmarker

    @stevewatson301 said:
    [series of fio runs/results]

    Uhm, where do you see 300 IOPS?

  • jsg Member, Resident Benchmarker
    edited September 2021

    @Falzo said:

    @jackb said:

    @jsg said:

    @jackb said:

    @jsg said:
    Now, let's look at your methodology. Basically you are saying that vpsbench must be wrong because Amazon certainly wouldn't lie. Well, that's one way to look at things. But it's not mine. A benchmark is about getting the data, the facts, not about trusting a company.

    It looks very much to me like your read tests are hitting RAM. There's no way Amazon is giving people 8 GB/s disk read (sync).

    Evidently, yes. But testing VMs necessarily means testing a virtual machine which translates to e.g. diverse sorts and levels of caches beneath the VM.

    Surely a benchmark tool testing a cache should be listed as such? Fio is giving what looks like the correct result and your tool is giving a cached result. You keep evading this point - which I've raised before - rather than taking the feedback and fixing your tool.

    If this was down to VM or host config both fio and your tool should be equally impacted.

    there is something off with the numbers that the disk bench gives in general, and I doubt this is even related to caching. here is a bench from a dedi, spinning rust in soft RAID1:

    [result set]

    close to 8 GB/s from a SATA hard disk would be a dream... but even memory won't get close to this (a cached dd of 2G of zeroes ends up at about 1 GB/s on that system).

    it has been pointed out before that these numbers represent something else and can't be compared to what others would define as read sequential/read random, mode sync - but I guess that's how it is. maybe there even is an oversight in the calculation; without looking at the code, no one will know what exactly is going on there or how this number is determined.
    maybe it's some sort of averaging, calculation or aggregation, and it's lacking a divider or whatever.
    maybe it is intended like this, but there are not many people who understand why these kinds of numbers are the better ones.

    First congrats, that's a nice processor with great single-core results!

    As for the disk, which OS, what disk controller?

    Plus keep in mind that dedis pose their own set of problems, because there are not dozens or even a hundred-plus VMs sharing a few drives, all reading and writing more or less at the same time. One result of that: caching 2 GB isn't a big thing with enough free memory (which I presume your dedi has). Note that those high numbers are read results. And what do the read tests read? (Unfortunately, mainly testing smallish VPSes, I have to stay within tight space limits.) Right, they read what the write tests have written out just before, so on a server with plenty of memory, chances are the whole 2 GB is still in the cache (and hence insanely fast to read).
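
    For what it's worth, a common way on Linux to take the page cache out of the picture between the write and read phases (assuming root on the box under test) is to drop the caches; a minimal sketch:

    # write phase (illustrative file name/size)
    dd if=/dev/urandom of=test.bin bs=1M count=2048 conv=fsync
    # flush dirty pages, then drop the page cache, dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches
    # the read phase now has to hit the disk, not RAM
    dd if=test.bin of=/dev/null bs=1M
    # caveat: inside a VM this clears only the guest's cache; the host node may still serve reads from its own cache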

    P.S. I think I may have found a solution for that problem, which irritates me too. Give me a few days; I'll try the idea and, if it works, put it into the benchmark.

  • @jsg said:

    @jackb said:
    Surely a benchmark tool testing a cache should be listed as such? Fio is giving what looks like the correct result and your tool is giving a cached result. You keep evading this point - which I've raised before - rather than taking the feedback and fixing your tool.

    Every benchmark program running on and testing a VM is subject to the fact that caching is going on (plus other factors) beneath the VM on the node. There is nothing to evade, we are talking about a simple fact.

    Your point would be right if a disk were directly exposed to the VM. Unfortunately, that is not necessarily how a provider chooses to implement their services; they may take a more software-defined approach to disk management.

    In this scenario, any I/O request received by the underlying virtualized hardware is rate-limited to the above-mentioned criteria, regardless of whether the requests are random or sequential. Probably boneheaded, but that is what fio demonstrates, and you can get a test instance yourself if you're so inclined.
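
    A minimal sketch of the kind of fio run that shows this (file name, size and runtime are illustrative; on a gp3 volume both patterns should hit the same cap):

    # sequential read, 1 MiB blocks: capped at ~125 MB/s
    fio --name=seqread --filename=test.bin --size=4G --bs=1M --rw=read --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based
    # random read, same block size: same ~125 MB/s cap, the access pattern makes no difference
    fio --name=randread --filename=test.bin --size=4G --bs=1M --rw=randread --direct=1 --ioengine=libaio --iodepth=16 --runtime=60 --time_based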

    @stevewatson301 said:

    @jsg said:

    @stevewatson301 said:

    @jsg said: What disk performance precisely?

    What's unclear about "3000 IOPS or 125 MB/s, whichever is greater"? Did you read the AWS documentation?

    Read or write performance? Buffered or direct/sync? Sequential or random? There are other factors too like e.g. what data (well cacheable or random and if the latter then what quality).

    Buffering at the OS level is, well, an OS level concern and I'm not sure why you want to indicate that AWS would make guarantees about the OS. Of course it's possible to have buffering at the hardware level too, but that is part of the hardware and all IOPS are limited there.

    Misunderstanding it seems. I was talking about what is measured, not about the OS.

    Come on, at this point you're not doing anyone any favours. Buffering into the disk cache is very much an operating-system-level concern, and you would be measuring either direct I/O or buffered I/O depending on what you're doing.

    I'll post findings below this reply that show that this is the case.

    And that's not just me. Everyone in the field worth his salt knows that one can - and in marketing usually does - bend the numbers. Typical example: of course one publishes the best result out of diverse tests.

    I'm not sure why AWS wouldn't want to pick the 4000MB/s or 7000MB/s over their measly 125MB/s. Wouldn't it make them look better?

    That's something you must ask Amazon, not me. And it doesn't change what I said.

    If that's your response, then this leaves only a few possibilities:

    1. The reported sky-high numbers of 4000/7000 MB/s are wrong and Amazon's figure of 125 MB/s is right.
    2. The low I/O performance reported by the FreeBSD version of the benchmark is right, which means Amazon is wrong. I can completely accept that Amazon is wrong, but then there's another problematic question you have to deal with: why is there a difference between the FreeBSD version and the Linux version?

    I did try multiple instances configured the same way, to see if it was a localized node-level issue, but the reported values always remained the same.

    Anyway, the question of what those numbers mean is justified. Does it mean that your AWS VM model achieves "3000 IOPS or 125 MB/s, whichever is greater" as a minimum, each and every second of each and every day? Or on average, and if so, over what time period? And measuring what: read or write, sequential or random, buffered or direct/sync? If you don't have answers to that, then "3000 IOPS or 125 MB/s, whichever is greater" has very little value.

    You can go through the fio values that I've given you earlier. They remain the same, regardless of sequential/random reads/writes.

    Also: please, get it already! You cannot willy-nilly compare results of different benchmarks, in particular when those benchmarks are significantly different!

    One particularly striking example: my reviews are not based on one (1) or a few runs but usually on 100 or more runs. In fact, "benchmark xyz always delivers very similar results" is not an indicator of quality, because by their very nature VPSes deliver quite different results over, say, a day.

    Did you forget that I have already mentioned that I tried running your benchmark multiple times and that the difference always existed?

    I did exactly the same operations, once with a fixed skip and once with truly random skips.

    Fixed is not truly random. It's good to see you agree.

  • bulbasaur Member
    edited September 2021

    @jsg said:

    @stevewatson301 said:
    [series of fio runs/results]

    Uhm, where do you see 300 IOPS?

    This is with a 1 MB block size, which tops out the MB/s limit, so you can only see ~130 MB/s and thus around 120 IOPS.

    If you go down to a sufficiently small block size, such as 4k, then the IOPS becomes the bottleneck and you can see the 3000 IOPS limit come into play. At that point the 125 MB/s limit becomes insignificant.
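
    As a rough arithmetic check, using the gp3 caps quoted above:

    1 MB blocks: 125 MB/s ÷ 1 MB per IO  ≈ 125 IOPS  (the bandwidth cap binds first, far below 3000 IOPS)
    4k blocks:   3000 IOPS × 4 KB per IO ≈ 12 MB/s   (the IOPS cap binds first, far below 125 MB/s)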

    I should have said "whichever is lower" instead of "greater", though; I spotted my mistake in one of the previous statements. I've corrected it since.

  • Thanks @stevewatson301 for the write-up and the experiment. You used a system with known values and provided all the test parameters needed to reproduce the results.

    @jsg please don't take this as an attack. User feedback is valuable and one of the best drivers of improvement.

    If all users sent their feedback this way, it would be great. :) Normally one just gets "it is not working" (without any other meaningful info).

    @Falzo said: it has been pointed out before that these numbers represent something else and can't be compared to what others would define as read sequential/read random, mode sync - but I guess that's how it is. maybe there even is an oversight in the calculation; without looking at the code, no one will know what exactly is going on there or how this number is determined. maybe it's some sort of averaging, calculation or aggregation, and it's lacking a divider or whatever. maybe it is intended like this, but there are not many people who understand why these kinds of numbers are the better ones.

    Users shouldn't have to guess what the numbers mean; there should be documentation on what every value means. Maybe --help could print this info, or a separate page on the project's website could explain it, with some common results for reference too.

    @mrclown said: No offense I usually use bench.sh, bench.monster, speed test (particular location), wget (test file download) and yabs on fresh new dedi/vps to compare between different network, provider and node but most servers went into idle mode after that.

    Haha. We should really compile a page of useful things to do with these idlers. I see suggestions scattered across different threads but not in one place.

    @Dazzle said: Oh man, please don't post a thesis or dissertation. Make it simple

    Would you rather they just wrote their conclusion without the details of how they arrived at that conclusion? No thanks.

  • jsg Member, Resident Benchmarker

    @stevewatson301 said:
    Your point would be right if a disk were directly exposed to the VM. Unfortunately, that is not necessarily how a provider chooses to implement their services; they may take a more software-defined approach to disk management.

    Please do not confuse what my benchmark does with explanations for certain results.

    At the end of the day my benchmark measures what one can expect, no matter the details. If one is about to buy a VPS, one wants to know what one's applications can achieve, no matter the details.

    fio (which is what yabs uses) is a nice, useful tool, but it was written with a purpose in mind (laid out in its doc intro), and that purpose is not the purpose vpsbench was designed to serve. Plus, yabs only does one single test type, and a rather mixed one at that. But still, if someone prefers to use fio/yabs, I'm certainly not in their way. Just don't expect me to use a test and methodology I do not consider adequate.

    @stevewatson301 said:

    @jsg said:

    @stevewatson301 said:

    @jsg said: What disk performance precisely?

    What's unclear about "3000 IOPS or 125 MB/s, whichever is greater"? Did you read the AWS documentation?

    Read or write performance? Buffered or direct/sync? Sequential or random? There are other factors too like e.g. what data (well cacheable or random and if the latter then what quality).

    Buffering at the OS level is, well, an OS level concern and I'm not sure why you want to indicate that AWS would make guarantees about the OS. Of course it's possible to have buffering at the hardware level too, but that is part of the hardware and all IOPS are limited there.

    Misunderstanding it seems. I was talking about what is measured, not about the OS.

    Come on, at this point you're not doing anyone any favours. Buffering into the disk cache is very much an operating-system-level concern, and you would be measuring either direct I/O or buffered I/O depending on what you're doing.

    On a VM one cannot control the node OS. If it does caching, and it almost always does, then that is so, period, and no benchmark can change that.
    That said, I did indeed include functionality to break through caches, even node caches, but doing that is not what one does in a general VM benchmark, which serves to provide a fact-based impression of what to expect of a VM.

    I'm not sure why AWS wouldn't want to pick the 4000MB/s or 7000MB/s over their measly 125MB/s. Wouldn't it make them look better?

    That's something you must ask Amazon, not me. And it doesn't change what I said.

    If that's your response, then this leaves only a few possibilities:

    1. The reported sky-high numbers of 4000/7000 MB/s are wrong and Amazon's figure of 125 MB/s is right.
    2. The low I/O performance reported by the FreeBSD version of the benchmark is right, which means Amazon is wrong. I can completely accept that Amazon is wrong, but then there's another problematic question you have to deal with: why is there a difference between the FreeBSD version and the Linux version?

    No, there is a third option - and a very relevant one: Most LET providers operate quite differently from Amazon/AWS. And, no surprise, you'll rarely find a LET provider guaranteeing x MB/s and y IOPS.

    Anyway, the question of what those numbers mean is justified. Does it mean that your AWS VM model achieves "3000 IOPS or 125 MB/s, whichever is greater" as a minimum, each and every second of each and every day? Or on average, and if so, over what time period? And measuring what: read or write, sequential or random, buffered or direct/sync? If you don't have answers to that, then "3000 IOPS or 125 MB/s, whichever is greater" has very little value.

    You can go through the fio values that I've given you earlier. They remain the same, regardless of sequential/random reads/writes.

    Maybe I'm looking at it the wrong way, but the numbers I see are far from 3000 IOPS.

    Fixed is not truly random. It's good to see you agree.

    True. But it is nonetheless what often passes as "random access" (including in some benchmarks). And anyway, what is there to complain about in my way of doing it, using a real, quality PRNG for both the data and the locations? Would you like it better if I used a sh_tty generator, and only for either the data or the locations, like quite a few benchmarks do?

  • BlaZe Member, Host Rep

    @jsg since when did you get the "Server Review King" title?

    Who gave it? :D

    It should rather be "Essay Competition Winner" or "Extempore King".

  • jsg Member, Resident Benchmarker

    @Kassem said:
    @jsg please don't take this as an attack. user feedback is valuable and one of the best ways of improvement.

    Uhm, I do welcome user feedback. In fact, I have already enhanced or adapted vpsbench and my reviews based on user feedback multiple times!
    What I do not welcome is the bashing of me or my work, or occasionally even the ganging up on me.

    Again: I have done a lot of work for this community, and certainly not because I'm masochistic. One should at least be able to expect not to be rewarded with personal attacks, stubbornly repeated lies, etc. If someone doesn't like my work (or me), they are free to simply ignore it.

  • bulbasaur Member
    edited September 2021

    @jsg said: At the end of the day my benchmark measures what one can expect no matter the details.

    It doesn't, and my tests and observations amply demonstrate that. If you want to prove this is not the case, please provide transparency into your methods: reproducible procedures and code showing how one can achieve these results.

    @jsg said: On a VM one can not control the node OS

    Not sure why you would think so. Outside of containerization technology (in which case it's not really a VM), you do have control of the OS.

    @jsg said: Most LET providers operate quite differently from Amazon/AWS. And, no surprise, you'll rarely find a LET provider guaranteeing x MB/s and y IOPS.

    What do the practices of LET providers have to do with this discussion? The bottom line is that your benchmark is unable to find hardware limits, which is the very purpose of benchmarks like this. If it is unable to do so, it's completely useless.

    @jsg said: Maybe I'm looking in the wrong way but the numbers I see are far away from 3000 IOPS.

    Did you not read my explanation that it's limited to the lower of 3000 IOPS or 125 MB/s? And my explanation of how a 1 MB block size tops out the 125 MB/s limit, leading to ~125 IOPS? If you tested with 4k block sizes, you would see that too.

    Or did you read it, but are desperately trying to ignore my statements in the hope that I won't notice, and will back down and accept?

    @jsg said: I did it using a sh_tty generator

    Your benchmark is already shitty, no need to use a shitty generator.

  • @BlaZe said: Who gave it?

    The same guy who enables and allows the RN spam to continue.

    @BlaZe said: Should be rather "Essay Competition Winner" or "Extempore King"

    Nah, "Russian Troll" is more appropriate.

  • BlaZe Member, Host Rep
    edited September 2021

    @stevewatson301 said:

    @BlaZe said: Who gave it?

    The same guy who enables and allows the RN spam to continue.

    Ah!

  • It seems @jsg is a little tired; he can do better once he has had some good rest.

  • jar Patron Provider, Top Host, Veteran

    Why doesn't someone just strace the thing and figure out what it's doing? If not strace, there are other ways to oversee the work of a binary.
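
    For anyone who wants to try, a minimal sketch (the binary name is illustrative):

    # summarize which syscalls the binary makes, and how often
    strace -f -c ./vpsbench
    # or log only disk-related syscalls, with timestamps
    strace -f -tt -e trace=openat,read,write,pread64,pwrite64,fsync ./vpsbench 2> trace.log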

  • bulbasaur Member
    edited September 2021

    @jar said:
    Why doesn't someone just strace the thing and figure out what it's doing? If not strace, there are other ways to oversee the work of a binary.

    I wanted to be objective in my analysis and focus on what I could test and control. As far as strace is concerned, he's spamming the kernel with clock_gettime calls. Of course his smart-assery didn't end there; he then took the chance to educate me on how kernels of yore worked and how clock_gettime was used to figure out the time :joy:

    As for reverse engineering, my reverse-engineering skills are not that good, and there are too many layers of indirection going on in his Nim-written binary for me to figure it out.

    It would be unwise to attack him with speculation, and I wanted to just stick to what I could discover.

  • jsg Member, Resident Benchmarker

    @stevewatson301 said:
    Prerequisites
    As for the testing VPS, I used an AWS EC2 c5.large instance with 10GB gp3 disks, which are limited to 3000 IOPS or 125 MB/s, whichever is higher.

    .

    Did you not read my explanation that it's limited to the lower of 3000 IOPS or 125 MB/s?

    Uhum

    Followed by personal attacks and insults ...
    Thanks for confirming what I said:

    @jsg said:
    [a] test of vpsbench wasn't your topic and goal anyway.

    .

    @stevewatson301 said:
    The same guy who enables and allows the RN spam to continue.

    You clearly don't like "that guy", but you stay at his oh-so-bad place ...

    No further questions, thank you. My focus is on people who contribute constructively to our community, not on evil-minded bashers.

  • @jsg said:

    @stevewatson301 said:
    Prerequisites
    As for the testing VPS, I used an AWS EC2 c5.large instance with 10GB gp3 disks, which are limited to 3000 IOPS or 125 MB/s, whichever is higher.

    This was a mistake on my end, and I've said as much: it should be "lower" (read my most recent replies to you). I'll reach out to a mod to see if it can be edited.

    But realistically, anyone worth their salt (and especially you) should be able to figure out that this was a mistake. If "higher" were really the case, you could get infinite MB/s by increasing the block size of your operations.

    Regardless, that small mistake still doesn't detract from the rest of my observations: your benchmark is unable to figure out the limits of the hardware.

  • This is going to go nowhere, as objective comparisons and constructive criticism are always met with a wall of nonsense and talking around the issue. The goalposts are constantly moved and there will never be a resolution. 🤷‍♂️

  • @adly said: objective comparisons and constructive criticism are always met with a wall of nonsense and talking around the issue

    But the guy now gets to unravel himself and show how he hides behind a wall of nonsense to protect his ego. That alone is valuable.

  • jsg Member, Resident Benchmarker
    edited September 2021

    @chocolateshirt said:
    It seems @jsg is a little tired; he can do better once he has had some good rest.

    No, just bored by ever-new attacks disguised - for a while at least - as "objective review".

    @stevewatson301 said:

    @jar said:
    Why doesn't someone just strace the thing and figure out what it's doing? If not strace, there are other ways to oversee the work of a binary.

    I wanted to be objective in my analysis and focus on what I could test and control. As far as strace is concerned, he's spamming the kernel with clock_gettime calls.

    My benchmark makes a (very cheap) call many times? Duh!
    When benchmarking, one needs to somehow get the time. There are two basic options: (a) time the whole thing once, or (b) do it in "slices", just like fio and others do, and measure the time used for each slice. Choosing the better route, (b), of course leads to many calls, duh. Only a clueless person would call that "spamming the kernel with syscalls".
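
    As an illustration of the two options, a minimal shell sketch (do_io_whole and do_io_slice are hypothetical placeholders for the actual IO work):

    # (a) one timestamp pair around the whole run
    start=$(date +%s%N)
    do_io_whole
    end=$(date +%s%N)
    echo "total: $(( (end - start) / 1000000 )) ms"

    # (b) a timestamp pair around every slice: many clock reads, but per-slice timings
    for i in $(seq 1 100); do
        s=$(date +%s%N)
        do_io_slice "$i"
        e=$(date +%s%N)
        echo "slice $i: $(( (e - s) / 1000000 )) ms"
    done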

    As for reverse engineering, my reverse-engineering skills are not that good, and there are too many layers of indirection going on in his Nim-written binary for me to figure it out.

    There is not even a need for reverse engineering because I openly laid out how vpsbench does its disk IO tests.

    It would be unwise to attack him with speculation,

    So why did you do it anyway? And btw, it's unwise to attack anyone with speculation.

    Thanks, btw, for finally being honest (or clumsy) enough to spill the beans and openly state what your thread is about: attacking me.

    @stevewatson301 said:

    @adly said: objective comparisons and constructive criticism are always met with a wall of nonsense and talking around the issue

    But the guy now gets to unravel himself and show how he hides behind a wall of nonsense to protect his ego. That alone is valuable.

    Hahahaha! Thanks for the laugh!

  • bulbasaur Member
    edited September 2021

    @jsg said: My benchmark makes a (very cheap) call many times? Duh!

    It was already explained to you how your call isn't cheap. But since you keep forgetting, I'll leave the link once more: https://en.wikipedia.org/wiki/Kernel_page-table_isolation. And the inaccuracy of your benchmark is out there for everyone to see.

    @jsg said: There is not even a need for reverse engineering because I openly laid out how vpsbench does its disk IO tests.

    Which isn't sufficient because of the large discrepancies that I keep seeing and that you're unwilling to address in any capacity.

    @jsg said: So, why did you do it anyway?
    @jsg said: it's about attacking me.

    I stuck to reproducible facts; you're welcome to run these tests yourself. But as I said, you keep writing essays in the hope of one-upping me, ignoring the fact that I myself said I wanted to avoid speculation.

    As they say, "reading is an essential skill". Read the results, run the tests, and you'll see that there's no attack - just that your benchmark is wrong.

    @jsg said: Hahahaha! Thanks for the laugh!

    I'm glad I could make your day better :smile:

  • Show us the tool of kings. Show us the jsg benchmark's source code!

  • @LTniger said:
    Shows us the tool of kings. Show us jsg benchmark script source code!

    You will only get PMS answers such as these:
    https://www.lowendtalk.com/discussion/comment/3242037#Comment_3242037
    https://www.lowendtalk.com/discussion/comment/3267793#Comment_3267793
