jsg, the "Server Review King": can you trust him? Is YABS misleading you? - Page 9

Comments

  • @jsg said: .. but that creates just an illusion because it's not really accessing file sectors all over the place..

    This piqued my interest. Surely with solid-state storage devices the actual physical location is of no consequence? Whether a file bridges sectors or is contiguous should have no bearing on access/retrieval time; what matters is the storage controller's (and chosen file system's) method of indexing the next segment in the file sequence.
    Just asking. :|

  • jsgjsg Member, Resident Benchmarker

    @default said:

    @jsg said:
    [...] But this thread is about discrediting me and my work and in a very biased (from the get go) and unfair way.

    No. This thread is related to your benchmarks, not to you as a person, nor to your work in real life.

    Cute.

    You received this spotlight because of the title "Server Review King" in this community; so man up and accept the criticism pointed at the throne (closed-source software) of a king in such a democratic environment.

    How many times do I need to say it before it reaches everybody's brains? I did not choose that title.

    @Andrews said:

    @jsg said:
    Plus I get pi__ed off quickly (meanwhile) when someone really "thinks" that he who...

    maybe you should rather seek professional medical help instead of attacking the whole world again ...

    Cute. You know, I'm actually a friendly and peaceful guy but beyond a certain point I do take on the whole world if needed.

    it is exactly like the Hostsolutions scam thread where for weeks/months you helped the scammer (who provided you with free machines)

    for testing and testing only.

    and you attacked his victims

    Sure, I'm also guilty of the big earthquake in Antarctica. Just search a bit deeper in your ass and you'll be able to grab it and pull it out to add to your last nonsense allegation. And btw, I'm a victim of HS myself.

    @stevewatson301 said:
    The thing is I can respond to jsg's latest comments, but it would quickly become a 10 page dissertation if I were to really go into what his comments are about.

    Blah, blah, blah. Fact is that I did trace both programs and you didn't. And I clearly - and verifiably - demonstrated that you were spewing nonsense. Nice try to wiggle out, though.

    To get a rough idea of what the average LETer might know, and how I can simplify my explanation while keeping it accessible to the interested parties (except for the subject of this thread, who's a know-it-all), I'd like to ask a simple question:

    How many of you have heard and know about these terms?

    Calling conventions
    Thunk functions
    Executable sections
    Global Offset Table

    Don't look at Google, you can just reply to this thread with a yes or no.

    (a) woohoo, how impressive!
    (b) who's to say you didn't google those?

  • jsgjsg Member, Resident Benchmarker

    @raindog308 said:

    @jsg said: If you or anyone feels to need an open source benchmark you are free to create one or, if you can't do that, you can pick one of the available open source benchmark scripts/programs - why should your need be an obligation for me?

    It clearly isn't. But publishing a benchmarking tool and not publishing the code for it is kind of odd in my opinion.

    Maybe, but I did open-source v1. There are mainly two factors coming together: (a) the total lack of real interest in the source code, and (b) this vile thread, which is not about what the OP pretended but about a hit on me and my work, possibly largely due to a provocative (as perceived) tag that I did not even choose.

    You can't possibly have a commercial or trade-secret motive, and publishing a tool that produces an output without letting people see how that output is calculated is kind of a black box.

    I clearly specified that. I even published pseudo code. And I traced both programs properly.
    Plus, why would my motive need to meet certain specs like "trade secret"? I think "the demands are known to be just virtue signalling, belief propaganda, and 'yay, demanding something is free when others need to deliver' - fuck you!" is a solid motive.

    If I bought some kind of air quality measuring tool, I could at least check other brands of the same kind of tool to see if they're all in spec...with a benchmark, I can't, which makes me want to examine the code.

    You can. Having the source is not the only way.

    I think with this kind of tool, the desire to look under the hood is natural because it's kind of a "mechanic's tool" rather than an application where I can tell if it's working properly or not based on what it does.

    Theory. The reaction to my fulfilling the "we want the source" demands with v.1 was basically non-existent. They cared about demanding, not about actually getting the source code.

    There are the other usual reasons - trusting that the author isn't doing something shifty (not an accusation just a typical human concern), feedback to improve or a way to learn from the code, etc.

    Why would I fuck our community? Because maybe (just a guess) I might get a free VPS worth maybe $50 per year? I'm not a whore and especially not a cheap one. And as far as I know even my foes who think that I'm arrogant, stupid, and whatnot do not doubt my integrity.

    I bolded in my opinion because it's just that.

    Appreciated - as is anyone of the very few in this thread who strive to be fair and balanced. Thank you for that! More I don't ask.

    Thanked by 1Arkas
  • jsgjsg Member, Resident Benchmarker
    edited September 2021

    @TimboJones

    Funny that a guy breaking agreements repeatedly paints himself as someone whose view has any value. No further comment deserved.

    @stevewatson301 said:
    [More BS and wiggling]

    There are facts on the table. There is no need to believe you or me.

    @AlwaysSkint said:

    @jsg said: .. but that creates just an illusion because it's not really accessing file sectors all over the place..

    This piqued my interest. Surely with solid-state storage devices the actual physical location is of no consequence? Whether a file bridges sectors or is contiguous should have no bearing on access/retrieval time; what matters is the storage controller's (and chosen file system's) method of indexing the next segment in the file sequence.
    Just asking. :|

    Yes and no; things are more complicated. Yes, as in "location was much more of an issue with spindles", but also no (among other points), because flash memory isn't just a flat storage space. Flash is also written in something like "sectors", and worse, when writing, it's always full "sectors" that need to be written. Plus, many flash drives have diverse forms of caches, e.g. DRAM and/or SLC (in front of the big TLC or QLC "main" storage of the drive).
    Even worse, though rarely important for benchmarks, partially written SSD "sectors" need to be read first and then "merged" with the part to be written. But again, this is rarely a major factor in benchmarks, because benchmarks strongly tend to write in sizes of one or multiple sectors: 4 KB, 64 KB, 1 MB, etc.
    What is a significant factor, however, is whether a benchmark pseudo-randomly writes to, say, an mmapped file and then writes it out in one or a few swoops, or whether it really writes randomly to the disk device.

    Example: One can create and fill a 2 GB file and then pseudo-randomly write at diverse locations, which are actually just addresses in memory (which obviously is very fast), and then flush the whole file to disk. The proper way to do it, however, is to write many blocks to diverse randomly selected locations on the disk device.

    Thanked by 1AlwaysSkint
  • AndrewsAndrews Member
    edited September 2021

    @jsg said:

    @Andrews said:

    @jsg said:
    Plus I get pi__ed off quickly (meanwhile) when someone really "thinks" that he who...

    maybe you should rather seek professional medical help instead of attacking the whole world again ...

    Cute. You know, I'm actually a friendly and peaceful guy but beyond a certain point I do take on the whole world if needed.

    your denial again!

    first, you spontaneously admitted (nobody asked you about it) that you have anger-management issues... whenever somebody has a different opinion (does this "thinks" mean that you always know better and are always right??? that only you really think, while others merely "think"??? like you wrote here, that only you are a REAL software dev with a REAL programming language and the others are impostors???)

    and now you are saying that you are a "friendly and peaceful guy"???

    so how many personalities do you have?

    Adolf H. would also say that he is a "friendly and peaceful guy" when painting pictures, but when someone...

    just stop denying the obvious facts, and take the constructive feedback about your pseudo "benchmark" instead of attacking other people/benchmarks (especially open-source, industry-known and well-respected ones) with your endless walls of text

    Thanked by 1itsnotv
  • @Andrews said:

    @jsg said:

    @Andrews said:

    @jsg said:
    Plus I get pi__ed off quickly (meanwhile) when someone really "thinks" that he who...

    maybe you should rather seek professional medical help instead of attacking the whole world again ...

    Cute. You know, I'm actually a friendly and peaceful guy but beyond a certain point I do take on the whole world if needed.

    your denial again!

    first, you wrote it yourself (nobody asked you about it): that you have anger-management issues... whenever somebody has a different opinion (does this "thinks" mean that you always know better and are always right??? like you wrote here, that only you are a REAL software dev with a REAL programming language and the others are impostors???)

    and now you are saying that you are a "friendly and peaceful guy"???

    so how many personalities do you have?

    Adolf H. would also say that he is a "friendly and peaceful guy" when painting pictures, but when someone...

    just stop denying the obvious facts, and take the constructive feedback about your pseudo "benchmark" instead of attacking other people/benchmarks (especially open-source, industry-known and well-respected ones) with your endless walls of text

    literally @jsg just published benchmarks. he did it for everyone, there's no bias here. he has no obligation to provide the source or an explanation.

    jsg is really a fucking genius, he's just not good with people.

    Thanked by 2jsg Arkas
  • @stevewatson301 said: Calling conventions
    Thunk functions
    Executable sections
    Global Offset Table

    Somewhat familiar with all the terms, but I would say reversing this stripped binary without symbols will be quite an effort. Hex-Rays is helpful, but it can easily take weeks to decompile to proper C.

  • drunkendogdrunkendog Member
    edited September 2021

    @jsg said:
    @TimboJones

    Funny that a guy breaking agreements repeatedly paints himself as someone whose view has any value. No further comment deserved.

    @stevewatson301 said:
    [More BS and wiggling]

    There are facts on the table. There is no need to believe you or me.

    Yeah. @stevewatson301 has already provided all of the facts needed to prove that your benchmarks are faulty, and you're the one who is rejecting those facts despite most of this site trying to point them out to you.

    The burden of proof is on you to show why he was wrong, and you haven't done so satisfactorily. All you did was talk about how Docker IO limits aren't accurate, why your benchmarks (which are multiple orders of magnitude off) were within those limits, and why YABS' benchmarks (which are almost exactly at the IO limits) were artificially limited. You never provided code showing that YABS was wrong or that your benchmark was right; you just typed a lot of unsourced and unproven statements.

  • @SirFoxy said:
    literally @jsg just published benchmarks. he did it for everyone, there's no bias here. he has no obligation to provide the source or an explanation.

    jsg is really a fucking genius, he's just not good with people.

    he not only published a pseudo "benchmark", he is additionally spamming the community with this pseudo "benchmark" under his questionable KING title (made blue), and he is making unfair statements about specific services of specific providers

    and when the OP made the effort to check this pseudo "benchmark" out and found that its results are inaccurate, jsg, instead of clarifying the issue and implementing the necessary fixes, just went into attack mode (him vs. the rest of the world: the OP, other benchmarks, other developers, other programming languages)

    he went even further: he said that because the whole world was not enthusiastic enough about the first version of this pseudo "benchmark", he is now restricting the whole world's access to its source code

    the questions are: why did he trigger war mode when asked about incorrect results? what is he hiding in the source code? and what was his hidden agenda in building a pseudo "benchmark" with deviating results???

  • PieHasBeenEatenPieHasBeenEaten Member, Host Rep

    The mofo is the king bitches!

  • Not publishing or publishing the source code is solely @jsg's decision.

    @jsg may I suggest:
    1. Ask for volunteer(s) to review the code in one of the target languages (C, C++).
    2. Send the target code (C, C++) to said volunteers only on the condition they don't release that code.

    This way:
    1. Source code is not published to the whole world.
    2. You would know for sure that the volunteers will look at the code so you aren't sharing it for nothing.
    3. Volunteer(s) review and give us their conclusions.

    Thanked by 1Arkas
    Interesting, wasn’t this the guy defending communism, who in real life won’t even share his source code?

  • @redcat said:
    Interesting, wasn’t this the guy defending communism but in real life won’t even share his source code.

    not necessarily defending communism.

    my mans lives in germany.

  • cybertechcybertech Member
    edited September 2021

    @SirFoxy said:
    he's just not good with people.

    I'm sure we can all agree on this!

    Thanked by 3adly redcat bulbasaur
  • jsgjsg Member, Resident Benchmarker

    @drunkendog said:
    The burden of proof is on you to prove why he was wrong, and you haven't done so satisfactorily [ yada yada]

    Thanks. I had a really good laugh at your attempt to turn your not even reading into a fault of mine.
    I'm sorry to inform you, though, that I don't beat up just anyone who applies.

    @Andrews said:

    [weird phantasies]

    Sorry, same answer as to (hell, what was his name again? I mean the guy just above).

    @Kassem said:
    Not publishing or publishing the source code is solely @jsg's decision.

    @jsg may I suggest:
    1. Ask for volunteer(s) to review the code in one of the target languages (C, C++).
    2. Send the target code (C, C++) to said volunteers only on the condition they don't release that code.

    Thanks for your constructively meant idea, but
    (a) I'd need to know two people who know C well and are trustworthy. Sorry, but this thread indicates that that doesn't look like a promising prospect.
    (b) Why should I care anyway? After all, this thread strongly demonstrates that whom many here trust has very little to do with any kind of evidence or proof and very much to do with feeling provoked by a tag I didn't even choose, plus diverse other emotions and factors.
    (c) It would be cumbersome, because the bashers would just come up with new shit like "yeah, but that was not for the newest version", so I'd have to jump through that hoop with each version.

    Yeah nah, thanks (I really mean it) but that won't work and frankly, I don't care anymore anyway.

    @redcat said:
    Interesting, wasn’t this the guy defending communism but in real life won’t even share his source code.

    I did not. Unless of course by "communism" you mean any and every country you feel like willy nilly tagging "communist".

  • ArkasArkas Moderator
    edited September 2021

    @redcat said: Interesting, wasn’t this the guy defending communism but in real life won’t even share his source code.

    What the fuck does communism have to do with this thread? Some here will just throw anything at another member like @jsg because he chooses not to share his project. He is free to do whatever he wants with it, and we are free to agree with it or not. What we are not free to do is pressure someone into releasing his source code.

  • yoursunnyyoursunny Member, IPv6 Advocate

    @Andrews said:
    what is he hiding in source code? and what was his hidden agenda with building pseudo "benchmark" with deviated results???

    There is possibly malicious code in the pseudo "benchmark".
    Not only are the results untrustworthy; your server may also be compromised.
    Stay away from vpsbench.

    And no, you can't discover malicious code by strace.
    In a sophisticated attack, malicious code is triggered only when it's running on the intended victim.

  • ArkasArkas Moderator
    edited September 2021

    @yoursunny said: There is possibly malicious code in the pseudo "benchmark".
    Not only is the results untrustworthy, but also your server may be compromised.
    Stay away from vpsbench

    Care to back up these statements, or are they winnie the pooh standard issue?

    Thanked by 1jsg
  • jsgjsg Member, Resident Benchmarker

    @Arkas said:

    @yoursunny said: There is possibly malicious code in the pseudo "benchmark".
    Not only is the results untrustworthy, but also your server may be compromised.
    Stay away from vpsbench

    Care to back up these statements, or are they winnie the pooh standard issue?

    Nuh, that's just a "smart" attempt FOSS zealots like to use to force access to the source code, because only that, so their mantra goes, can clear an author of such allegations ...

    Of course that's false. One can, for example, use the "big cannon" and analyse the binary. And btw, how exactly would some code compromise a server without making syscalls or calling into the system library? Using magic, I guess.

    It's not the first time I've seen that blackmailing scheme from FOSS fanatics (though it is the first time with me and my code). Trying that kind of blackmail with me leads to one reaction and to that reaction only: "Bugger off, thug! If there still was some chance, however tiny, that I'd open the source code, you just absolutely positively killed it."

    Oh, and don't hold your breath waiting for him to back up his heinous claims, other than by saying something like "I said possibly! and that is true" (just as true as "your car could possibly transform into a war robot and attack you" - absolute nonsense, of course, but hey, it's possible ...).

    As for his "advice" ("Stay away from vpsbench"): Absolutely, fine with me, by all means stay away from using vpsbench if you feel like it. I don't care.

    Maybe we should find out how he likes such a vile device turned against him. After all, every piece of software can "possibly" have malicious code in it, even FOSS. There have been plenty of examples/evidence (and he and his "1000 eyes" and "we demand the source" ilk reliably failed to detect them).

    And don't forget to sell your car, your smartphone, your computer! Because, you see, they almost certainly didn't give you the technical drawings, plans, internal design data, etc. Or no, wait, do not use anything with an Arm or x86 processor (or many others) in it, because that's all closed-source/closed-design stuff!

  • aup and oversell will beat u down
    :wink:

  • @jsg said: Enter fdatasync(), (...) What's the difference? fdatasync flushes data out to the drive while using only O_DIRECT
    @jsg said: yabs/fio does no lseek

    As I've been telling you, none of these things make much of a difference. yabs uses both random reads and writes, but since you're talking about writes separately, I'll first use the underlying fio command from yabs in random-write-only mode. These are on the c5.xlarge instance with a gp3 disk:

    [centos@ip-172-31-80-40 ~]$ fio --name=rand_write_1m_aio --ioengine=libaio --rw=randwrite --bs=1m --iodepth=64 --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_write_1m_aio: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
    ...
    fio-3.19
    Starting 2 processes
    Jobs: 2 (f=2): [w(2)][100.0%][w=125MiB/s][w=124 IOPS][eta 00m:00s]
    rand_write_1m_aio: (groupid=0, jobs=2): err= 0: pid=29246: Wed Sep  8 08:46:18 2021
    write: IOPS=129, BW=129MiB/s (136MB/s)(3964MiB/30667msec); 0 zone resets
    bw (  KiB/s): min=106496, max=150339, per=97.16%, avg=128597.24, stdev=3710.32, samples=121
    iops        : min=  104, max=  146, avg=125.56, stdev= 3.59, samples=121
    cpu          : usr=0.19%, sys=0.15%, ctx=5241, majf=0, minf=20
    IO depths    : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8%
        submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
        complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
        issued rwts: total=0,3964,0,0 short=0,0,0,0 dropped=0,0,0,0
        latency   : target=0, window=0, percentile=100.00%, depth=64
    
    Run status group 0 (all jobs):
    WRITE: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=3964MiB (4157MB), run=30667-30667msec
    
    Disk stats (read/write):
    nvme0n1: ios=0/15874, merge=0/3, ticks=0/1391715, in_queue=1391715, util=99.69%
    

    Let's move over to "ioengine=sync" for lseek() calls, and fdatasync() after each write call:

    [centos@ip-172-31-80-40 ~]$ fio --name=rand_write_1m_sync --ioengine=sync --fdatasync=1 --rw=randwrite --bs=1m --numjobs=2 --size=2G --runtime=30 --gtod_reduce=1 --direct=1 --filename=test.fio --group_reporting
    rand_write_1m_sync: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=sync, iodepth=1
    ...
    fio-3.19
    Starting 2 processes
    Jobs: 2 (f=2): [w(2)][100.0%][w=125MiB/s][w=124 IOPS][eta 00m:00s]
    rand_write_1m_sync: (groupid=0, jobs=2): err= 0: pid=29255: Wed Sep  8 08:47:31 2021
    write: IOPS=129, BW=129MiB/s (136MB/s)(3902MiB/30171msec); 0 zone resets
    bw (  KiB/s): min=126722, max=323950, per=99.45%, avg=131698.71, stdev=12713.29, samples=118
    iops        : min=  122, max=  315, avg=128.56, stdev=12.33, samples=118
    cpu          : usr=0.15%, sys=0.15%, ctx=3905, majf=0, minf=19
    IO depths    : 1=200.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
        submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
        complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
        issued rwts: total=0,3902,0,0 short=3902,0,0,0 dropped=0,0,0,0
        latency   : target=0, window=0, percentile=100.00%, depth=1
    
    Run status group 0 (all jobs):
    WRITE: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=3902MiB (4092MB), run=30171-30171msec
    
    Disk stats (read/write):
    nvme0n1: ios=0/15502, merge=0/1, ticks=0/190323, in_queue=190323, util=99.78%
    

    And your benchmark for reference:

    [centos@ip-172-31-80-40 ~]$ ./vpsb-lx64-210a -d
    [T] 2021-09-08T08:51:53Z
    ----- Disk -----
    [D] Total size per test = 2048.00 MB, Mode: Sync
    ..... [D] Wr Seq:  13.70 s ~   149.48 MB/s
    ..... [D] Wr Rnd:  14.68 s ~   139.50 MB/s
    ..... [D] Rd Seq:   0.33 s ~  6115.59 MB/s
    ..... [D] Rd Rnd:   0.31 s ~  6703.52 MB/s
    

    No significant difference between the two yabs tests (async, and lseek+fdatasync), both capped at 129 MB/s, close to the advertised 125 MB/s; your test is the one that's off by a large margin, at 139 and 149 MB/s.

    @jsg said: So, my vpsbench is "spamming the kernel with clock_gettime syscalls" and I was absolutely wrong when I "lectured" you?

    Unfortunately for you, reading doesn't seem to be your strong suit. Let's go back to my original comment where I made this claim:

    In addition, you're sidestepping the main issue too, which is why you are making so many clock_gettime() API calls in a CPU benchmark.

    The key word here is "CPU benchmark"; unfortunately, you stopped reading mid-sentence. For an I/O benchmark the overhead of a simple syscall such as clock_gettime() won't matter anyway, since the I/O syscall is just more expensive, all the more so in direct mode. So, @Falzo, no, "spamming" in the I/O benchmark won't work.

    @jsg said: Unlike you I actually did look at traces of both

    I did compare your benchmark with geekbench! It is closed source but they do go into what kinds of workloads they run.

    I'll take a short snippet out of their navigation benchmark. The only thing I see is brk calls, and through ltrace I can confirm these are due to malloc() being called, quite likely to make space for the 200,000 nodes and 400,000 edges used to run Dijkstra:

    On the other hand, you can see what the profile of syscalls from your benchmark looks like.

    @jsg said: I handed you an excellent opportunity by providing pseudo code for my disk testing.

    That only works when you at least have the code and debugging symbols. Of course you can easily step through it in GDB, and most of LET thinks "wow, this @jsg guy is debugging a binary!", whereas I have it much harder: looking at your code in a disassembler or debugger as raw assembly instructions and figuring out what your program does, and that for a relatively large program, not a small snippet of assembly. Your providing "pseudocode" means nothing in this context!

    It's not lost on me that you're trying to equate debugging with symbols and code to debugging without, but that equivalence is a lie. For someone who doesn't understand the difference: essentially, you're handing a Chinese novel to someone who knows nothing of the story and nothing of the language, and asking them to read and understand it given the one-line hint "this story is about a farmer and his struggles".

    Of course there are people who regularly do reverse engineering and do figure out stuff like this, malware researchers come to mind. But a discussion of that is entirely useless in a benchmarking thread, in the same way you can tell the difference between a turd sandwich and a hamburger without being a chef!

    @jsg said: reverse engineering binaries created by Nim (...) is somehow more difficult to reverse engineer than code written in another compiled languages

    Have you ever looked at your own program without debugging symbols and the source, compared to a relatively complex C program?

    Put vpsbench in a disassembler (those following along can probably use Cloud Binary Ninja) and see what "main" looks like. It leads to a number of medium-sized setup functions, and I also believe Nim puts its own wrapper around most of the I/O syscalls your program uses. Compare that to a C program: how many levels of such indirection do you see there?

    Now I could try to figure all of that out, renaming symbols and generally mapping out your program. But I'm not sure how relevant that is to a discussion of your program, which prints wrong values - an indisputable fact.

    @jsg said: No, you are not a software developer, at least not at the level you try to make us believe.

    Thanks but no thanks for the invitation to your dick-measuring contest. I'll let you win that one; it's the one thing you're good at.

  • jsgjsg Member, Resident Benchmarker

    I'll be generous and respond to at least some of the nonsense you spread. All it really shows is that you are way out of your depth and acting ridiculously unprofessionally (e.g. carefully selected parameters for fio, while running the candidate you want to smear in the most basic way possible).

    @stevewatson301 said:

    @jsg said: Enter fdatasync(), (...) What's the difference? fdatasync flushes data out to the drive while using only O_DIRECT
    @jsg said: yabs/fio does no lseek

    As I've been telling you, none of these things make much of a difference. yabs uses both random reads and writes, but since you're talking about writes separately

    As I said, you do not even understand what you are talking about. Your fio parameters lead to a benchmark that works totally differently from mine. And btw, you obviously do not even use vpsbench properly.
    You trust the yabs/fio results for one simple reason: you want to trust them and to sell them as some kind of "reference".

    @jsg said: Unlike you I actually did look at traces of both

    I did compare your benchmark with geekbench! It is closed source but they do go into what kinds of workloads they run.

    So do I. I laid out in detail how my disk benchmark works.
    Funny side note: vpsbench v2 closed source -> evil!!! geekbench closed source -> no problem, that's OK, hey they give you a PDF.

    @jsg said: I handed you an excellent opportunity by providing pseudo code for my disk testing.

    That only works when you at least have the code and debugging symbols. Of course you can easily step through it in GDB, and most of LET thinks "wow, this @jsg guy is debugging a binary!", whereas I have it much harder: looking at your code in a disassembler or debugger as raw assembly instructions and figuring out what your program does, and that for a relatively large program, not a small snippet of assembly. Your providing "pseudocode" means nothing in this context!

    And again: WRONG. One does not need the source code and/or debugging symbols for [l|s]trace. In fact, I traced normal release binaries (no debug symbols) of both vpsbench and fio.

    @jsg said: reverse engineering binaries created by Nim (...) is somehow more difficult to reverse engineer than code written in another compiled languages

    Have you ever looked at your own program without debugging symbols and the source, compared to a relatively complex C program?

    Yes, regularly. In fact, in my job I regularly even run verifiers and analyzers over the C code generated by several languages that compile via C (including, sometimes, Nim). Btw, one can generate call-flow graphs from some modern compilers' intermediate code.

  • @stevewatson301 said: So, @Falzo, no, "spamming" in the I/O benchmark won't work.

    what do you mean by that? I only picked the wording because it was kind of fought over from the beginning. it wasn't intended as a pro or contra to anyone.

    my point was rather an interest in the way the time is obtained/calculated, as that seems to be what's handled differently (according to the traces),
    hence the question about variables in userspace and their reliability time-wise, as well as the suggestion of changing the (time-measuring) method within the bench for a test, to see if that would lead to different results (and not just within negligible tolerances...)

    my scenario here obviously being that wherever the bench gets its timing from doesn't get updated in time under certain circumstances (depending on OS and IO-wait? 😂), and therefore a lot of those slices contribute 0 as the time difference added to the overall time.

    call me naive or dumb; as said, I am no developer as such, just thinking about possible causes for these huge deviations that seem to follow no clear pattern.
    however, maybe it's just me who would have jumped at such things and taken the opportunity to prove others wrong by simply creating a test version that uses different methods... each to their own 🤷‍♂️

    Thanked by 1vimalware
  • jsgjsg Member, Resident Benchmarker

    @Falzo said:

    @stevewatson301 said: So, @Falzo, no, "spamming" in the I/O benchmark won't work.

    what do you mean by that? I only picked the wording because it was kind of fought over from the beginning. it wasn't intended as a pro or contra to anyone.

    my point was rather an interest in the way the time is obtained/calculated, as that seems to be what's handled differently (according to the traces),
    hence the question about variables in userspace and their reliability time-wise, as well as the suggestion of changing the (time-measuring) method within the bench for a test, to see if that would lead to different results (and not just within negligible tolerances...)

    my scenario here obviously being that wherever the bench gets its timing from doesn't get updated in time under certain circumstances (depending on OS and IO-wait? 😂), and therefore a lot of those slices contribute 0 as the time difference added to the overall time.

    call me naive or dumb; as said, I am no developer as such, just thinking about possible causes for these huge deviations that seem to follow no clear pattern.
    however, maybe it's just me who would have jumped at such things and taken the opportunity to prove others wrong by simply creating a test version that uses different methods... each to their own 🤷‍♂️

    You are not wrong, and your question was not stupid. There are differences like the one I mentioned, and there are also differences that can be quite significant, for example some timers not counting during certain states. I happened to do some work in/for the finance industry, where sometimes even sub-microsecond precision is a concern, and I can tell you that a lot of work and research has been done in the field of timers and timing.
    In the millisecond range those differences usually aren't a problem (anymore, on modern systems), but in the microsecond range quite a few intricacies lurk. To name just two: the physical base (generally speaking, the crystals used on most mainboards are mediocre if not outright crappy, but cheap), and the architecture: different architectures, in fact even different generations of the same processor family, offer different facilities, some of which are actually useful while others aren't worth much in terms of precise, high-speed timing.

    "vpsbench" seems so complex, calculating precise stuff to one millionth of a second... this is very scary stuff... I am afraid of actually crashing my VPS or stealing too much CPU from Virmach.

    This is way out of my league. So... anyone fancy some popcorn?

  • TeYroXTeYroX Member
    edited September 2021

    @default said:
    "vpsbench" seems so complex, calculating precise stuff on one millionth part of a second...

    vpsbench is indeed nasa technology

  • jsgjsg Member, Resident Benchmarker

    @default said:
    "vpsbench" seems so complex, calculating precise stuff on one millionth part of a second... this is very scary stuff... I am afraid of actually crashing my VPS, or steal too much CPU from Virmach.

    This way over my league. So... anyone fancy some popcorn?

    Nuh, don't worry. Yes, any benchmark worth its salt cares about and does high-precision timing (incl. fio and vpsbench), in part because it also does its disk tests in many slices/blocks, each of which needs to be properly timed; but as a user you need not be concerned. Just use a (reasonably good) benchmark and don't worry about the technical nitty-gritty.
