jsg, the "Server Review King": can you trust him? Is YABS misleading you? - Page 3


Comments

  • @stevewatson301 said:

    @LTniger said:
    Shows us the tool of kings. Show us jsg benchmark script source code!

    You will only get PMS answers such as these:
    https://www.lowendtalk.com/discussion/comment/3242037#Comment_3242037
    https://www.lowendtalk.com/discussion/comment/3267793#Comment_3267793

    hahahahhaahahahahhahaaha

  • jsgjsg Member, Resident Benchmarker

    @stevewatson301

    No response deserved. You are in lala phantasy land attack mode again. Just because you absolutely positively believe something, it isn't so. And the fact that the usual gang takes your side doesn't change anything either.

  • @jsg said:
    @stevewatson301

    No response deserved. You are in lala phantasy land attack mode again. Just because you absolutely positively believe something, it isn't so. And the fact that the usual gang takes your side doesn't change anything either.

    Oi c'mon, don't be salty for borta slip. No one holds the score (which is 1:0 by the way).

    Will you improve your bench tool on given feedback?

  • bulbasaurbulbasaur Member
    edited September 2021

    @jsg said:
    @stevewatson301
    No response deserved. You are in lala phantasy land attack mode again. Just because you absolutely positively believe something, it isn't so. And the fact that the usual gang takes your side doesn't change anything either.

    I've been willing to provide any and all results that you've asked of me, and yet you continue engaging in accusations and mudslinging.

    The best response that I've been able to elicit from you is a theoretical explanation of why your benchmark may differ from YABS, which promptly fell apart once I provided more evidence. I made a small mistake while writing my initial post (which didn't invalidate the observations in any way), which I was more than happy to correct (it's waiting to be edited by a mod though).

    Can you just man up, run the same tests that I've run, yourself, and actually address the issues that I've talked about?

    Thanked by 1adly
  • jsgjsg Member, Resident Benchmarker

    @LTniger said:
    Shows us the tool of kings. Show us jsg benchmark script source code!

    No, won't happen. I was fooled once into providing the source (for v.1) after lots of noise - and virtually nobody downloaded it.

    But as I'm a nice person I'll spare you the effort of tracing the binary and provide pseudo code.

    // Disk test module of vpsbench (pseudo code)
    
    var SliceSize              // Size of one round
    var SliceCount             // Number of rounds
    
    oneDWrite(tstFile, tbuf, pos)
    {
   // writes one slice of `tbuf` data to file handle `tstFile`
       // at position `pos`
       // parameters:
       //    tstFile: file handle of test file
       //    tbuf:    buffer with random data
       //    pos:     position in file to write to
       startTime = getUsectime()     // get usec precise start time
       if pos != 0xffffffff    // unless appending
          seek(tstFile, pos)   // set location in file to position `pos`
       wres = write(tstFile, tbuf, SliceSize) // write one slice out
       endTime = getUsectime()     // get usec precise end time
       return calcTimeDiff(startTime, endTime)
    }
    
    proc oneDRead(tstFile, tbuf, pos)
       // same as oneDWrite but reading instead
    
    doDWriteTest(tstFile, pause, isRnd)
    {
   // runs oneDWrite SliceCount times on file handle `tstFile` and pauses
   // `pause` ms after each cycle.
   // If `isRnd` is true, runs random tests, or else runs sequential tests
   // parameters:
   //    tstFile: file handle of test file
   //    pause:   pause in between slices [ms]
   //    isRnd:   true = random access, false = sequential
       var timeTotal = 0    // total time taken so far
       var timeSlice = 0    // time taken in current slice
    
       do SliceCount times {
      tbuf = fillRndBuf()  // fill buffer with fresh random data
          if isRnd == true {
             rndPos = calcRandomSector()   // compute a random position in file
             timeSlice = oneDWrite(tstFile, tbuf, rndPos) 
          }
          else
             timeSlice = oneDWrite(tstFile, tbuf, 0xffffffff) // just append at end of file
          timeTotal += timeSlice                              // add cycle time used to total
          if pause > 0
             sleep(pause)      // sleep `pause` ms before next round
       }// end of loop
       return timeTotal
    }
    
    doDReadTest(tstFile, pause, isRnd) 
       // same as doDWriteTest but reading instead of writing
    
    doOneDiskTest(tstPath, dpause, sCount, sSize, doSync)
    {
       // Runs all tests, WR and RD, sequential and random, buffered or sync.
       // parameters:
       //    tstPath: the full test file path
       //    dpause:  pause in between slices [ms]
   //    sCount:  number of slices
   //    sSize:   size of each slice
   //    doSync:  flag: buffered or sync?
   // --- write tests ---
       tstFile = open(tstPath, doSync, write_mode)
       wsResult = doDWriteTest(tstFile, dpause, seq_mode)  // seq. write test
       seek(tstFile, 0)     // put file pointer back to start of test file
       wrResult = doDWriteTest(tstFile, dpause, rnd_mode)  // random write test
       close(tstFile)       // close test file (but keep contents)
   // --- read tests ---
       tstFile = open(tstPath, doSync, read_mode)
       rsResult = doDReadTest(tstFile, dpause, seq_mode)   // seq. read test
       seek(tstFile, 0)     // put file pointer back to start of test file
       rrResult = doDReadTest(tstFile, dpause, rnd_mode)   // random read test
       close(tstFile)       // done, close test file
       removeFile(tstPath)  // and remove it from drive
       showResults(...)
    }
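For readers who prefer something runnable, the pseudo code above can be approximated in Python. This is an editor's sketch of the described approach, not jsg's implementation: the slice size and count are assumptions, the file is opened buffered (the pseudo code does not show the open flags), and `time.monotonic()` stands in for the usec-precise timer.

```python
import os
import random
import time

SLICE_SIZE = 4 * 1024 * 1024   # assumed: 4 MiB per slice
SLICE_COUNT = 8                # assumed: number of slices
APPEND = None                  # stands in for the 0xffffffff sentinel

def one_d_write(fd, buf, pos):
    """Write one slice of `buf` to `fd` at `pos` (append if pos is APPEND);
    return the elapsed wall time in seconds."""
    start = time.monotonic()
    if pos is not APPEND:
        os.lseek(fd, pos, os.SEEK_SET)
    os.write(fd, buf)
    return time.monotonic() - start

def do_d_write_test(fd, pause, is_rnd):
    """Run one_d_write SLICE_COUNT times, sleeping `pause` seconds between
    slices; random positions if `is_rnd` is True, otherwise appending."""
    total = 0.0
    for _ in range(SLICE_COUNT):
        buf = os.urandom(SLICE_SIZE)   # fresh random data each slice
        if is_rnd:
            pos = random.randrange(SLICE_COUNT) * SLICE_SIZE
            total += one_d_write(fd, buf, pos)
        else:
            total += one_d_write(fd, buf, APPEND)
        if pause > 0:
            time.sleep(pause)
    return total

def do_one_disk_test(path, pause=0.002):
    """Sequential then random write passes over one test file."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    seq = do_d_write_test(fd, pause, is_rnd=False)   # sequential write test
    os.lseek(fd, 0, os.SEEK_SET)                     # back to start of file
    rnd = do_d_write_test(fd, pause, is_rnd=True)    # random write test
    os.close(fd)
    os.remove(path)                                  # remove the test file
    return seq, rnd
```

The read passes would mirror this with `os.read` instead of `os.write`, as the pseudo code's `oneDRead`/`doDReadTest` stubs indicate.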
    
  • jarjar Patron Provider, Top Host, Veteran

    Almost no one downloads source, so you feel fooled by providing it - so much so that you'd rather rewrite a selective portion of it as pseudocode than provide the source? The only thing you could be fooled into in that case is wasting time, and the pseudocode rewrite wastes more of it.

  • yoursunnyyoursunny Member, IPv6 Advocate
    edited September 2021

    @jsg said:
    tstFile = open(tstPath, doSync, read_mode)

    What flags are in read_mode?
    This majorly affects performance.

    seek(tstFile, pos) // set location in file to position pos
    wres = write(tstFile, tbuf, SliceSize) // write one slice out

    Latest operating systems have Linux kernel 5.4 or higher, so you should use io_uring or at least libaio.
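To illustrate why the open flags matter so much (an editor's sketch, not code from either tool): opening with `O_DSYNC` makes every write wait for the device, which usually cuts the measured throughput drastically compared with plain buffered writes that land in the page cache. The write count and size below are arbitrary.

```python
import os
import tempfile
import time

def time_writes(extra_flags, count=16, size=1024 * 1024):
    """Time `count` writes of `size` bytes using the given extra open flags."""
    path = tempfile.mktemp()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags, 0o600)
    buf = os.urandom(size)
    start = time.monotonic()
    for _ in range(count):
        os.write(fd, buf)
    elapsed = time.monotonic() - start
    os.close(fd)
    os.remove(path)
    return elapsed

buffered = time_writes(0)          # writes land in the page cache
dsync = time_writes(os.O_DSYNC)    # each write waits for the device
```

On a tmpfs-backed `/tmp` the gap shrinks, since there is no real device to flush to; on a physical disk the `O_DSYNC` run is typically much slower.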

  • bulbasaurbulbasaur Member
    edited September 2021

    @yoursunny said: What flags are in read_mode?

    Two files are opened by the disk test:

    openat(AT_FDCWD, "/tmp/test.dat", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_NOATIME, 0600) = 3
    openat(AT_FDCWD, "/tmp/test.dat", O_RDONLY|O_DSYNC|O_NOATIME) = 3
    

    Here's the syscall profile for the first one:

    [pid 14865]      0.000027 openat(AT_FDCWD, "/tmp/test.dat", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_NOATIME, 0600) = 3
    [pid 14865]      0.009076 write(1, ".", 1) = 1
    [pid 14865]      0.001386 write(3, "...", 4194304) = 4194304
    [pid 14865]      0.009271 fdatasync(3)  = 0
    [pid 14865]      0.000042 nanosleep({tv_sec=0, tv_nsec=2000000}, 0x7ffed0244fc0) = 0
    [pid 14865]      0.003350 write(3, "...", 4194304) = 4194304
    [pid 14865]      0.009901 fdatasync(3)  = 0
    [pid 14865]      0.000036 nanosleep({tv_sec=0, tv_nsec=2000000}, 0x7ffed0244fc0) = 0
    [pid 14865]      0.003330 write(3, "...", 4194304) = 4194304
    [pid 14865]      0.008510 fdatasync(3)  = 0
    [pid 14865]      0.000045 nanosleep({tv_sec=0, tv_nsec=2000000}, 0x7ffed0244fc0) = 0
    

    and the other one:

    [pid 14865]      0.000070 openat(AT_FDCWD, "/tmp/test.dat", O_RDONLY|O_DSYNC|O_NOATIME) = 3
    [pid 14865]      0.000058 write(1, ".", 1) = 1
    [pid 14865]      0.000034 read(3, "...", 4194304) = 4194304
    [pid 14865]      0.001097 nanosleep({tv_sec=0, tv_nsec=2000000}, 0x7ffed0244fe0) = 0
    [pid 14865]      0.002094 read(3, "...", 4194304) = 4194304
    [pid 14865]      0.000943 nanosleep({tv_sec=0, tv_nsec=2000000}, 0x7ffed0244fe0) = 0
    [pid 14865]      0.002087 read(3, "...", 4194304) = 4194304
    [pid 14865]      0.000947 nanosleep({tv_sec=0, tv_nsec=2000000}, 0x7ffed0244fe0) = 0
    
    Thanked by 2adly vimalware
  • jsgjsg Member, Resident Benchmarker
    edited September 2021

    @jar said:
    Almost no one downloads source so you feel fooled by it, so much so that you'd rather rewrite a selective portion of it as pseudocode than provide the source? The only thing you could be fooled into in that case is wasting time, only that right there wastes more.

    As I said, I'm a friendly person, and that also means I'm usually willing to do something for peace. I'm also in principle willing to show my code, but I've learned that one must never allow a foe to force their rules upon oneself - and someone who openly states that his aim is to attack me, hopefully successfully, is a foe, as are his accomplices. So I don't provide the full source code anymore.

    @yoursunny said:

    seek(tstFile, pos) // set location in file to position pos
    wres = write(tstFile, tbuf, SliceSize) // write one slice out

    Latest operating systems have Linux kernel 5.4 or higher, so you should use io_uring or at least libaio.

    Nope. I disagree, because AIO only means the program gets its results back faster - the drive doesn't get any faster, it just shortens the time the program wastes waiting for the I/O.
    But you are right insofar as many applications can profit from AIO. For a VM benchmark, however, there is virtually nothing to gain and in fact some disadvantages, unless one wants to test how many I/O requests one can throw at a system before it is brought to its knees; and even that is of doubtful value.
    For dedi benchmarking, though, I'm thinking about it, as it might make sense.
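For context on this disagreement: a synchronous timed write really does include the full device round trip in the measured interval, which matches the write-then-fdatasync pattern visible in the strace earlier in the thread. A minimal sketch (editor's illustration, not jsg's code):

```python
import os
import tempfile
import time

def timed_sync_write(fd, buf):
    """Measure one write plus fdatasync: the interval includes all the time
    the program spends blocked waiting on the device."""
    start = time.monotonic()
    os.write(fd, buf)
    os.fdatasync(fd)   # block until the data reaches stable storage
    return time.monotonic() - start

path = tempfile.mktemp()
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
elapsed = timed_sync_write(fd, os.urandom(1024 * 1024))
os.close(fd)
os.remove(path)
```

With AIO (io_uring/libaio) the submission returns immediately and the device latency has to be recovered from completion timestamps instead, which is the trade-off being argued about here.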

  • adlyadly Veteran
    edited September 2021

    @stevewatson301 said: Two files are opened by the disk test:

    It's opening a single file for write and read at the same time.

  • jarjar Patron Provider, Top Host, Veteran

    While it's true that you should never let the enemy define the rules of battle, I've also found that unprecedented transparency rarely goes unrewarded. Admittedly, I said rarely because it isn't "never."

  • jsgjsg Member, Resident Benchmarker
    edited September 2021

    @adly said:

    @stevewatson301 said: Two files are opened by the disk test:

    It's opening a single file for write and read at the same time.

    Wrong. My benchmark first opens a file for writing and then it (closes it and) opens the same file again for reading.

    @jar said:
    While it's true that you should never let the enemy define the rules of battle, I've also found that unprecedented transparency rarely goes unrewarded. Admittedly, I said rarely because it isn't "never."

    I wouldn't go as far as saying that transparency needs to be deserved, but IMO it certainly shouldn't be wasted on nasty foes.
    In a normal, civilized setting, though, I agree with you and tend to act accordingly.

  • adlyadly Veteran
    edited September 2021

    @jsg said:

    @adly said:

    @stevewatson301 said: Two files are opened by the disk test:

    It's opening a single file for write and read at the same time.

    Wrong. My benchmark first opens a file for writing and then it (closes it and) opens the same file again for reading.

    The strace provided shows the calls occurring simultaneously:

    openat(AT_FDCWD, "/tmp/test.dat", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_NOATIME, 0600) = 3
    openat(AT_FDCWD, "/tmp/test.dat", O_RDONLY|O_DSYNC|O_NOATIME) = 3
    

    Edit: @stevewatson301 confirms it was clipped, my apologies.

  • @adly said: The strace provided shows the calls occurring simultaneously:

    This was just an extract of the two syscalls, @jsg is right in this case (for once!)

    Thanked by 2adly TimboJones
  • jarjar Patron Provider, Top Host, Veteran

    The strace was clipped, there may have been a close in between.

    Thanked by 1adly
  • jsgjsg Member, Resident Benchmarker
    edited September 2021

    @adly said:

    @jsg said:

    @adly said:

    @stevewatson301 said: Two files are opened by the disk test:

    It's opening a single file for write and read at the same time.

    Wrong. My benchmark first opens a file for writing and then it (closes it and) opens the same file again for reading.

    The strace provided shows the calls occurring simultaneously:

    openat(AT_FDCWD, "/tmp/test.dat", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_NOATIME, 0600) = 3
    openat(AT_FDCWD, "/tmp/test.dat", O_RDONLY|O_DSYNC|O_NOATIME) = 3
    

    Then something is wrong with that quote. Maybe he extracted only the open calls (to answer the question about the flags used) from a longer trace log, but my program opens only one file at a time.

    Thanks btw for demonstrating that (not only) you don't give a flying f_ck for the truth or the source code. You challenge everything I say but take everything that seems to show me wrong at face value.

  • @jar said: The strace was clipped, there may have been a close in between.

    @stevewatson301 said: This was just an extract of the two syscalls, @jsg is right in this case (for once!)

    Those two calls aren't adjacent in the trace; I should have posted the entire thing to clear this up. The full strace is available here: https://paste.ee/p/ncFwr

  • @jsg said:

    @adly said:

    @jsg said:

    @adly said:

    @stevewatson301 said: Two files are opened by the disk test:

    It's opening a single file for write and read at the same time.

    Wrong. My benchmark first opens a file for writing and then it (closes it and) opens the same file again for reading.

    The strace provided shows the calls occurring simultaneously:

    openat(AT_FDCWD, "/tmp/test.dat", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_NOATIME, 0600) = 3
    openat(AT_FDCWD, "/tmp/test.dat", O_RDONLY|O_DSYNC|O_NOATIME) = 3
    

    Then something is wrong with that quote. Maybe he extracted only the open calls to answer a question (for the flags used) from a longer trace log but my program opens only one file at a time.

    @stevewatson301 has clarified - it was a clipped quote. I apologise for making an assumption without checking.

  • @jsg said: Thanks btw for demonstrating that (not only) you don't give a flying f_ck for the truth or the source code. You challenge everything I say but take everything that seems to show me wrong at face value.

    Not sure where this is coming from - I haven't mentioned source code here and have apologised twice now for making an assumption and for being wrong.

  • jarjar Patron Provider, Top Host, Veteran

    Here's a hot take:

    1. There were flaws in this calling out of @jsg
    2. There are flaws in the measuring of read speed in the benchmark

    But honestly who cares. See the crazy results, mentally tune them out.

  • jsgjsg Member, Resident Benchmarker

    @adly said:
    @stevewatson301 has clarified - it was a clipped quote. I apologise for making an assumption without checking.

    Again: you don't give a flying f_ck for the truth or the source code. You challenge everything I say but accept everything that seems to show me wrong at face value.
    He, your accomplice, had to say that you were wrong. My clarification was totally ignored as was my pseudo code.

  • jsgjsg Member, Resident Benchmarker

    @jar said:
    Here's a hot take:

    1. There were flaws in this calling out of @jsg
    2. There are flaws in the measuring of read speed in the benchmark

    But honestly who cares. See the crazy results, mentally tune them out.

    What flaw? Tell me about it. I'm really interested in constructive feedback.

    Thanked by 1jar
  • @jsg said: He, your accomplice,

    Why do you assume he is @stevewatson301's accomplice? I understand you are in defensive mode here, but that doesn't mean you have to discourage everyone here with such BS.

    Thanked by 1adly
  • adlyadly Veteran
    edited September 2021

    @jsg said:

    @adly said:
    @stevewatson301 has clarified - it was a clipped quote. I apologise for making an assumption without checking.

    Again: you don't give a flying f_ck for the truth or the source code. You challenge everything I say but accept everything that seems to show me wrong at face value.
    He, your accomplice, had to say that you were wrong. My clarification was totally ignored as was my pseudo code.

    I have no idea what you are talking about. I have never conversed with @stevewatson301 beyond public forum interaction, which is minimal if at all, but OK. Yes, he told me I made an incorrect assumption, and when you pointed it out I updated my post with an apology.

    I did not read your pseudo code because it's pseudo code, it's meaningless as to how the benchmark works in practice. You're right about one thing though, I don't give a flying fuck about the source code - I never asked for it.

    Thanked by 2iKeyZ TimboJones
  • bulbasaurbulbasaur Member
    edited September 2021

    -- deleted --

  • jarjar Patron Provider, Top Host, Veteran

    @jsg said:

    @jackb said:

    @jsg said:
    Now, let's look at your methodology. Basically you are saying that vpsbench must be wrong because Amazon certainly wouldn't lie. Well, that's one way to look at things. But it's not mine. A benchmark is about getting the data, the facts, not about trusting company

    It looks heavily like your read tests are hitting RAM to me. There's no way Amazon are giving people 8GB/s disk read (sync).

    Evidently, yes. But testing a VPS necessarily means testing a virtual machine, which implies, e.g., diverse sorts and levels of caches beneath the VM.

    @some others
    Of course you are back at going against the person ...

    Re. "King": As I've already said, "Server Review King" was not a tag chosen or desired by me. I was a bit shocked myself when I saw it the first time. But I didn't and will not complain to @jbiloh, because I'm certain that his intentions were friendly and good and because after all it's just a tag. As long as there is "reviewer" in it I'm OK with it.

    This was where you quoted @jackb and agreed something screwy is probably happening there @jsg. That's what I'm referring to.
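    The page-cache effect @jackb points at is easy to demonstrate. In this editor's sketch (not from either benchmark; the 64 MiB size is arbitrary), a file that was just written is almost certainly still in the page cache, so reading it back without clearing caches mostly measures memory bandwidth, not the disk.

```python
import os
import tempfile
import time

SIZE = 64 * 1024 * 1024  # 64 MiB test file (assumption)

path = tempfile.mktemp()
with open(path, "wb") as f:
    f.write(os.urandom(SIZE))

# The data just written is still in the page cache, so this read
# mostly measures memory bandwidth rather than device speed.
start = time.monotonic()
with open(path, "rb") as f:
    while f.read(4 * 1024 * 1024):
        pass
elapsed = time.monotonic() - start
throughput_gbps = SIZE / elapsed / 1e9  # often far above any real disk
os.remove(path)
```

This is why "8 GB/s sync read" figures on a cloud VM are suspect: without `O_DIRECT` or dropped caches, the host and guest caches answer most reads.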

    Thanked by 2adly bulbasaur
  • adlyadly Veteran
    edited September 2021

    @stevewatson301 said:

    @samm said:

    @jsg said: He, your accomplice,

    Why do you assume he is @stevewatson301 accomplice? I understand you are in defensive mode here but that doesn't mean you have to discourage everyone here with such BS.

    It's just like those SJW rants against the "patriarchy" and the like. Guess I and @adly are part of the "yabsarchy".

    I only discovered his benchmark tool in the Contabo thread a week or so ago, yet somehow I'm at the core of some long-running anti-jsg conspiracy. 🤷‍♂️

  • dedicatserver_rodedicatserver_ro Member, Host Rep
    edited September 2021

    @jsg said: Please do not confuse what my benchmark does and explanations for certain results.

    At the end of the day my benchmark measures what one can expect no matter the details.

    • You still don't understand that you have to go back to school and learn how disks, SSDs, NVMe storage, virtualization, and the cloud actually work. You say that you are testing a virtual disk without realizing that there are actually many details that matter...
      If the rest didn't matter, it would be easier to just ask the provider what disk it has and look at the manufacturer's technical data ;)
      But as I've said many times: when you don't know how it works, you don't even know what you're testing...

    • What you could at least learn from @stevewatson301: a test needs "prerequisites" = the conditions under which you run it, and if you do comparative tests the conditions must be the same

    @all

    • for members: considering that you use a VPS for general applications, YABS is enough - quite correct.
    • considering the provider's AUP, the small differences in NVMe read/write really don't matter (especially since it is a virtualized disk that is being written and read all the time; a read-only or write-only test over a small part of the NVMe is therefore of limited relevance)