LEB/LET benchmarks - vps benchmark 2.x

jsg Member, Resident Benchmarker
edited August 2020 in Reviews

I'm pleased to announce

  • the official start of a series of reviews and benchmarks which I have been asked to do by @jbiloh.
  • the availability of a largely new and significantly enhanced version 2.x of my benchmark software vpsbench.

I would have very much liked jbiloh to pick the first provider and product to be reviewed, as a small token of appreciation for his commitment and extensive work to make LET great again. However, even after (mildly) pushing him, he didn't react. Obviously he wants to stay absolutely neutral. While this spoils my little "thank you" plan, I can accept and even appreciate his decision to avoid even the appearance of not being perfectly neutral.
So instead a heartfelt, sincere "Thank you, Jon Biloh, for investing so much in LEB/LET, for your work, your commitment, and your trust in us" (and for tolerating the beatings you took at the (new) start!).

Well, "32nd of never", as a particularly stubborn jsg hater put it a week or two ago in my interview thread, has arrived (and the wannabe enemy, as usual, is wrong).

vpsbench 2.x is completed and ready to rock

and will be made publicly available in the coming days.

Here is the help page created by "vpsb -h" to give you a first taste:

Usage: vpsb [Options]
   (see readme for details) and Options are
      -h    or --help          for this help
      -b    or --buffdisk      runs disk tests buffered instead of sync,direct
      -c    or --cputest       run proc/mem test
      -C    or --cpucount=x    proc/mem test Rounds (def. 128)
      -m    or --memsize=x     proc/mem test mem. Size [MB] (def. 16)
      -d    or --disktest      run disk test
      -D    or --diskcount=x   disk test rounds (def. 512)
      -S    or --dslicesize=x  disk test slice size [MB] (def. 4)
      -i    or --interval=x    test interval (ms)
      -n    or --nettest[=TF]  run network test (using Target_File)
                               where 'TF' is a target file containing the
                               URLs for benchmarking (def. 'ntargets')
      -N    or --noicmpping    do not ICMP ping (which needs root)
      -p    or --dtpath=path   path for disk test (def. /tmp)
      -q    or --quiet         run quiet, keep stdout clean
      -s    or --sysinfo       show system info
      -v    or --version       for version info
Note that -scdn is assumed by default. Use c, d, n, and s options to only run
specific tests.

Besides basically rewriting vpsb from the ground up to make it more modular, there is interesting new functionality, too.
First, regarding modularity: with v2.x I can pull out any module, e.g. disk testing, and turn it into its own executable with very little work. This is particularly helpful as some other benchmarkers (e.g. @poisson) have shown interest in using my code as a "library" in their own (shell-based, AFAIK) benchmarks.

Disk testing can now be done either in "normal" buffered mode or in direct/sync mode. This point seems small but is quite important. Explanation: as I have repeatedly explained, my interest (when creating vpsb) was to test the system itself, e.g. the speed of the disks; more than that, I wanted a tool to "break through" caching layers. For that, direct/sync access is needed. On the other hand, and just as valid, people might be interested in the "what speed do I actually get?" question. For that, good caching is a plus, although in most normal use cases it may hide potentially sub-par hardware.

This is particularly true with linux because linux mercilessly caches disk accesses; in fact I have even seen linux (e.g. 4.1x kernels) simply ignore the direct and sync flags. One often-cited official reason is that soft-logging file systems (e.g. ext4) can't support those flags. My personal and subjective impression is that linux is too focused on its competition with Windows.
So it's for good reasons that I will continue to do my benchmarking under FreeBSD, which does honor direct/sync requests and hence allows much better insight into the hardware.
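To illustrate the distinction, here is a minimal Python sketch (not vpsb's actual code; it uses fsync as a portable stand-in for the sync/direct flags):

```python
import os
import tempfile
import time

CHUNK = b"\0" * (1 << 20)  # 1 MiB of zeroes per write call

def time_write(nbytes, sync):
    """Write nbytes to a fresh temp file and return the elapsed seconds.
    With sync=True the data is forced to stable storage via fsync, so
    the page cache cannot make the device look faster than it is."""
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.perf_counter()
        written = 0
        while written < nbytes:
            written += os.write(fd, CHUNK)
        if sync:
            os.fsync(fd)  # break through the caching layer
        return time.perf_counter() - t0
    finally:
        os.close(fd)
        os.unlink(path)

buffered = time_write(8 << 20, sync=False)  # cached: answers "what do I get?"
synced = time_write(8 << 20, sync=True)     # flushed: answers "what is the disk?"
print(f"buffered: {buffered * 1000:.1f} ms, fsync'd: {synced * 1000:.1f} ms")
```

On a typical node the buffered figure will be far lower, which is exactly why caching hides sub-par hardware.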

Plus, both the processor & memory tests and the disk tests now allow more fine-grained control over both the number of rounds and the size of a single test round.

Another significant change/enhancement is that vpsb v2 now does both a (normal) ICMP ping and a "web ping". This makes it easier to recognize and judge some issues (e.g. ICMP-"optimized" routing) and problems (e.g. a slow http server). The (normal) ping, which requires the benchmark to run as root, can be disabled with a simple switch. In addition, the network benchmark module now also shows the http status code if it's not 200, so http problems with a target server can be diagnosed more easily.
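A "web ping" in this sense can be sketched as follows (an illustrative Python version, not vpsb's implementation; the function name web_ping is mine):

```python
import http.client
import time

def web_ping(host, port=80, path="/", timeout=5.0):
    """Time one full HTTP GET against host:port and return
    (elapsed_ms, status_code). A slow or broken HTTP server shows up
    here even when plain ICMP ping looks fine, and a non-200 status
    points at a server-side problem with the benchmark target."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    t0 = time.perf_counter()
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()  # drain the body so the full transfer is timed
        status = resp.status
    finally:
        conn.close()
    return (time.perf_counter() - t0) * 1000.0, status
```

Comparing this number against a plain ICMP round-trip time is what exposes things like ICMP-"optimized" routing.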

Announcement to providers

A couple of weeks ago a number of providers announced their interest in being reviewed/benchmarked. The order of my first reviews/benchmarks will be as follows:

  • First: those providers should re-contact me (either here or by PM) and I will choose the first one to be tested at random from those who re-contact me. AFAIC the first test can start tomorrow.
  • Next: I'll repeat the above random choice (using /dev/urandom as "dice") for the rest of the providers re-contacting me, one by one.
  • Others: providers who contact me now (and did not some weeks ago) will be put on my ToDo list and chosen using the same random process.

Please note that I will take the liberty to "insert" reviews/benchmarks of providers of my choice in case I'm particularly interested.
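The "dice" step above can be sketched like this (illustrative Python, not my actual tooling; it reads single bytes from the OS entropy source, i.e. /dev/urandom on Unix, and rejects values that would cause modulo bias):

```python
import os

def pick_one(candidates):
    """Choose one entry uniformly at random using bytes from the OS
    entropy source (/dev/urandom on Unix). Bytes at or above the
    largest multiple of len(candidates) below 256 are rejected to
    avoid modulo bias. Works for up to 256 candidates."""
    n = len(candidates)
    limit = (256 // n) * n
    while True:
        b = os.urandom(1)[0]
        if b < limit:
            return candidates[b % n]

# Hypothetical provider names, purely for illustration:
providers = ["provider-a", "provider-b", "provider-c"]
print("first to be tested:", pick_one(providers))
```

The rejection loop matters: a plain `byte % n` would slightly favour the first few candidates whenever 256 is not a multiple of n.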

Requirements:

I do not care whether you want a dedi, VPS, or VDS reviewed/benchmarked, or in what configuration. What I do require is min. 256 MB RAM, min. 7.5 GB disk space, and a convenient way to use or install FreeBSD (min. 12.0). So I will not (or only as an exception in a few special cases) benchmark/review OpenVZ or linux-container VPS.
I offer 2 kinds of reviews/benchmarks: short term and one month. Short term means min. (and typ.) 3 days and max. 1 week. The test system will be used only and exclusively for benchmarking. Please be sure to create a test account for me, as I usually will not create an account myself just for reviewing/benchmarking you.
Feel free to ask if you have any questions I haven't addressed.

LET users:

There is one last point open: vpsb does show all CPU flags, but I'd like to show some flags that are particularly important to you (like, obviously, 'AES') more prominently. So please tell me which flags you are particularly interested in.


Comments

  • dustinc Member, Patron Provider, Top Host

    Awesome work! We'll be reaching out.

    Thanked by: jsg, kyaky
  • seriesn Member
    edited August 2020

    Hi @jsg, first and foremost, this brings back memories, as you were the first one to actually give two cents about a small brand and share your honest public opinion about my small little company. I will never stop appreciating all the valuable feedback given by you, which helped us become a better version of ourselves!

    This would be an amazing opportunity for any provider, who is up for some honest feedback. Word of advice, @jsg can be brutally honest and might hurt your feelings, but if you take it as it is and listen to it, it can and will help your business :). Just be prepared ;)

    Thanked by: jsg
  • jsg Member, Resident Benchmarker

    @seriesn said:
    Hi @jsg, first and foremost, this brings back memories, as you were the first one to actually give two cents about a small brand and share your honest public opinion about my small little company. I will never stop appreciating all the valuable feedback given by you, which helped us become a better version of ourselves!

    Haha, and the result was that I liked your (benchmarked!) VPS so well that I actually purchased one. I'm btw. still happy with it.

    This would be an amazing opportunity for any provider, who is up for some honest feedback. Word of advice, @jsg can be brutally honest and might hurt your feelings, but if you take it as it is and listen to it, it can and will help your business :). Just be prepared ;)

    Uhm, a benchmark software is supposed to tell hard and objective facts. As for myself, I try to always be polite (even friendly) but I hate lies, so I'll always say what I have to say - but I try to put it in a non-hurting way.

    Thanked by: seriesn
  • I'll never get the point in benchmarking VPS since it is shared resources in the end but kudos anyway.

  • jsg Member, Resident Benchmarker

    @serv_ee said:
    I'll never get the point in benchmarking VPS since it is shared resources in the end but kudos anyway.

    Maybe your view is informed by OpenVZ and the likes. With KVM and especially with VDS the situation is much better and it does make sense to benchmark them.

  • @serv_ee said:
    I'll never get the point in benchmarking VPS since it is shared resources in the end but kudos anyway.

    Or testing on a different OS and kernel than users will use. How he's so oblivious to this, I have no idea. This is just nonsensical thinking.

    Or bypassing or disabling performance enabling features that would otherwise be present in normal use.

    jsg has absolutely no experience in QA or testing, so this is par for the course for him.

    If he ever releases v2, I'm just saying,

    1) test results won't be repeatable
    2) the results won't be useful in reflecting the real world performance

    His v1 app was just so, so broken that shitty VPS' with known resource limits would test higher than NVMe servers that were not hard limited. It was worse than useless.

  • @jsg said:
    Well, "32nd of never", as a particularly stubborn jsg hater put it a week or two ago in my interview thread, has arrived (and the wannabe enemy, as usual, is wrong).

    vpsbench 2.x is completed and ready to rock

    and will be made publicly available in the coming days.

    sigh you can't even dunk over my head properly. You can't say "32nd of never... has arrived... and will be publicly available in the coming days". That's not what "arrived" means.

    Motherfucker, why not just release it now and then you'd be actually fucking throwing it in my face? Jesus Christ.

    Are you waiting for the 32nd of August? Because I have some bad news...

    Yabs script went out for public testing and made many changes and fixes before it was good and stable. Jsg doesn't do that, so it'll be a bug ridden shit show that he'll defend before fixing actual bugs.
    He goes right into "production", testing commercial services like he's doing them a favour.

    Anyone that is ok with being guinea pig for unproven test app gets what's coming to them.

  • @jsg said:

    @serv_ee said:
    I'll never get the point in benchmarking VPS since it is shared resources in the end but kudos anyway.

    Maybe your view is informed by OpenVZ and the likes. With KVM and especially with VDS the situation is much better and it does make sense to benchmark them.

    It still only makes sense if the resources are dedicated to a user. If its "fair share" between users the benches will vary from one end to another depending what other users on the same node are doing.

  • @TimboJones said:

    @jsg said:
    Well, "32nd of never", as a particularly stubborn jsg hater put it a week or two ago in my interview thread, has arrived (and the wannabe enemy, as usual, is wrong).

    vpsbench 2.x is completed and ready to rock

    and will be made publicly available in the coming days.

    sigh you can't even dunk over my head properly. You can't say "32nd of never... has arrived... and will be publicly available in the coming days". That's not what "arrived" means.

    Motherfucker, why not just release it now and then you'd be actually fucking throwing it in my face? Jesus Christ.

    Are you waiting for the 32nd of August? Because I have some bad news...

    Yabs script went out for public testing and made many changes and fixes before it was good and stable. Jsg doesn't do that, so it'll be a bug ridden shit show that he'll defend before fixing actual bugs.
    He goes right into "production", testing commercial services like he's doing them a favour.

    Anyone that is ok with being guinea pig for unproven test app gets what's coming to them.

    You really do step out of bed on the wrong side each morning don’t you? F*cking asshole.

    @jsg Really nice benchmarking tool. Some improvements can always be done, but so far, looks promising. I like that you can also give custom parameters to it and doesn’t just benchmark everything at once, you can actually choose what to measure.

    Cheers for this one mate.

    Thanked by: jsg
  • I skimmed through hoping to see a link to a VCS repo.
    But, not so much as a page with version labelled tarballs?

    You can do better with ver 2.0 release.
    More transparency is a good thing.

    Else, no one is going to run an untrusted/unsigned code blob in an esoteric language.

    Thanked by: jsg, TimboJones
  • jsg Member, Resident Benchmarker
    edited August 2020

    @serv_ee said:
    It still only makes sense if the resources are dedicated to a user. If its "fair share" between users the benches will vary from one end to another depending what other users on the same node are doing.

    I get your point and I'm not saying that you are completely off, but if one runs many benchmarks at different times of the day and over multiple days, as I virtually always do in my benchmarks, one gets a pretty good impression of the tested VPS/VDS.
    Also note that I always mention the "spread" in my reviews, that is, how close to (or far from) the average the results of the single benchmark runs stay. A good-quality VPS (and of course VDS) on a good, not oversold node will show clearly better consistency (less spread) than a poor one.
    Btw, while testing my algorithms and code on quite a few systems here, I was reminded that even one's local system will show a spread of about 5% - 10%.

    @FoxelVox said:
    You really do step out of bed on the wrong side each morning don’t you? F*cking asshole.

    @jsg Really nice benchmarking tool. Some improvements can always be done, but so far, looks promising. I like that you can also give custom parameters to it and doesn’t just benchmark everything at once, you can actually choose what to measure.

    Cheers for this one mate.

    Thank you very much! Not even for myself but for our community, because it's a really unhealthy state of affairs when a clueless (in the relevant field) and known-to-be aggressive, destructive, and vulgar sourpuss is allowed to take a dump all over the place.
    I wouldn't mention it, but you'll probably understand: I basically rewrote my benchmark software for LET. I even wrote my own http client because I wanted some useful features that the usual libraries don't offer, and I wrote my own ping because I didn't want to call an external program for that; I wanted to stick to a single (quite small, too) executable and have no dependencies, not even on standard OS tools. In short, I made a lot of effort for our community to help us make better-informed decisions - and the reward is mostly a personal attack from, as you so aptly called him, a "f*cking asshole", and some whining about not using git[whatever], not (anymore) being open source, etc.

    Wrong approach with me. People who try the "we won't thank you, we won't value all the time and work and knowledge invested for us, but we will simply insult you, attack you, complain, and push you for more and to bow to our cool mantras" approach will end up with empty hands.

    So, thank you very much! Some basic manners and a bit of positive feedback is most welcome and good for our community.

    @vimalware said:
    I skimmed through hoping to see a link to a VCS repo.
    But, not so much as a page with version labelled tarballs?

    You can do better with ver 2.0 release.
    More transparency is a good thing.

    Else, no one is going to run an untrusted/unsigned code blob in an esoteric language.

    Well, then they don't use it. Fine with me.

    I wrote the VPS benchmark tool originally for myself but then decided to share it because others might have a need for it. Next I gave in to some pushing for the source code, and mainly got criticism for simply putting the tar.gz'd source online rather than putting it on github. Fun fact: meanwhile github is considered uncool by many, and I'd probably get criticized for not using gitlab ...
    Downloads were few anyway, but the reviews and benchmarks were quite well liked. The reasons are simple: (a) too many mindless people who are more about virtue signalling (e.g. "one can't trust closed source code") and (b) most (incl. myself in many areas) prefer to simply consume.

    The reality is different. (a) Neither open nor closed source code can be blindly trusted; there is hard evidence for that. (b) So how come that, to name just two examples, billions of people use Microsoft Windows and Apple products? Why don't they use linux? Btw., do you have the circuit diagrams, gerber files, etc. for at least your computer's main board and your disks? Did you verify them? Why not?

    I am using a private local repo in my daily work and I won't change to github, gitlab, or another system I do not like and do not need just because the "cool" crowd likes it. And btw., apropos "trust": I actually used a language with a very good track record, with strong static typing, and for which static and dynamic verification can be done - rather than python, perl, or javascript. And of course I did extensively test my code - unlike 99+% of all open source projects.

    Frankly, I don't think that you are in a position to tell me what's good and right in my field of work - and you didn't. All you did was show that you are a "cool" guy who recites the "cool" mantras. So, again: you don't like to use software that isn't on git[whatever] and open source? Then simply don't use it.

    Thanked by: FoxelVox
  • vimalware Member
    edited August 2020

    Putting my reply above the fold for readers' sake.

    So much strawmanning here.

    Dude, I don't use github either.

    How would one be able to verify what version of code they are downloading without so much as a checksum?

    A set of versioned tarballs with published checksums, a la tarsnap, is fine.

    @jsg said:

    @vimalware said:
    I skimmed through hoping to see a link to a VCS repo.
    But, not so much as a page with version labelled tarballs?

    You can do better with ver 2.0 release.
    More transparency is a good thing.

    Else, no one is going to run an untrusted/unsigned code blob in an esoteric language.

    Well, then they don't use it. Fine with me.

    I wrote the VPS benchmark tool originally for myself but then decided to share it because others might have a need for it. Next I gave in to some pushing for the source code, and mainly got criticism for simply putting the tar.gz'd source online rather than putting it on github. Fun fact: meanwhile github is considered uncool by many, and I'd probably get criticized for not using gitlab ...
    Downloads were few anyway, but the reviews and benchmarks were quite well liked. The reasons are simple: (a) too many mindless people who are more about virtue signalling (e.g. "one can't trust closed source code") and (b) most (incl. myself in many areas) prefer to simply consume.

    The reality is different. (a) Neither open nor closed source code can be blindly trusted; there is hard evidence for that. (b) So how come that, to name just two examples, billions of people use Microsoft Windows and Apple products? Why don't they use linux? Btw., do you have the circuit diagrams, gerber files, etc. for at least your computer's main board and your disks? Did you verify them? Why not?

    I am using a private local repo in my daily work and I won't change to github, gitlab, or another system I do not like and do not need just because the "cool" crowd likes it. And btw., apropos "trust": I actually used a language with a very good track record, with strong static typing, and for which static and dynamic verification can be done - rather than python, perl, or javascript. And of course I did extensively test my code - unlike 99+% of all open source projects.

    Frankly, I don't think that you are in a position to tell me what's good and right in my field of work - and you didn't. All you did was show that you are a "cool" guy who recites the "cool" mantras. So, again: you don't like to use software that isn't on git[whatever] and open source? Then simply don't use it.

  • jsg Member, Resident Benchmarker

    @vimalware said:
    Putting my reply above fold for readers' sake

    So much strawman-ing here

    Dude, I don't use github either.

    How would one be able to verify what version of code they are downloading without so much as a checksum?

    A set of versioned tarballs with published checksums a la tarsnap is fine.

    No problem - checksums for the tarballs, or in this case the executables, you can get. Plus even a short manual and clear versioning.

    Thanked by: vimalware
  • serv_ee Member
    edited August 2020

    @jsg I get that. You can run it day in and day out but it's not that black and white. What are you going to compare it against? Really the only thing you can compare is the connection. Without knowing the RAM being used (timing, latency), or whether it's SSD/HDD/SHDD, comparing hosts is pretty much meaningless, as I am pretty sure most hosts don't give that info out.

    Maybe a little hazy post but hopefully you understand where I'm trying to get at with it.

    That's why PC-world benchmarking is fair and square. You've got everything out there pitted against each other. In the VPS world, not even close to that. (Well, not really fair if you take cinebench into account, but that's a whole other story)

  • jsg Member, Resident Benchmarker

    @serv_ee said:
    @jsg I get that. You can run it day in and day out but it's not that black and white. What are you going to compare it against? Really the only thing you can compare is the connection. Without knowing the RAM being used (timing, latency), or whether it's SSD/HDD/SHDD, comparing hosts is pretty much meaningless, as I am pretty sure most hosts don't give that info out.

    Maybe a little hazy post but hopefully you understand where I'm trying to get at with it.

    That's why PC-world benchmarking is fair and square. You've got everything out there pitted against each other. In the VPS world, not even close to that. (Well, not really fair if you take cinebench into account, but that's a whole other story)

    • What's the alternative? Rolling a die and hoping?
    • You are right, one cannot get a consistent and precise benchmark result on any shared system - but then, one can't even on one's own local system or on a dedi. What one can get, however, is a pretty good impression of a VPS/VDS.
    • One can see, for example, whether one is on a sh_tty oversold node or on a decent one, and whether the processor, the memory, and the disk are sh_tty old crap from 2005 (I have seen those) or decent parts. Plus, and most importantly, one can get a reasonable impression of how one's own given workload will perform.

    Again, you are right, benchmarking a shared system is not an exact science, but it's much, much better than rolling a die. And again, the spread of the result values, even over just 3 days, can tell a lot about a VPS.
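One common way to put a number on such a "spread" is the sample standard deviation relative to the mean (an illustrative Python sketch, not vpsb code; the readings are made up):

```python
import statistics

def spread_percent(results):
    """Relative spread of repeated benchmark runs: the sample standard
    deviation as a percentage of the mean. Low values suggest a quiet,
    not-oversold node; high values suggest noisy neighbours."""
    return 100.0 * statistics.stdev(results) / statistics.mean(results)

# Made-up MB/s readings from two hypothetical VPS over several days:
steady = [412, 405, 418, 409, 415]
noisy = [412, 230, 455, 120, 400]
print(f"steady: {spread_percent(steady):.1f}%  noisy: {spread_percent(noisy):.1f}%")
```

Averages alone can look identical for both nodes; the spread is what separates them.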

  • @serv_ee said:
    @jsg I get that. You can run it day in and day out but its not that black and white. What are you going to compare it against? Really only thing you can compare is connection. Without knowing ram being used (timing, latency), SSD/HDD/SHDD? Its all meaningless pretty much to compare hosts as I am pretty sure most hosts dont give that info out.

    You don't need to know the specs to measure how much work it does in a specific amount of time. As long as you can correctly measure that, you can compare results effectively.

    I just know his v1 app didn't measure correctly and results were nonsensical. Probably from thinking a millisecond timer on a shared host was accurate over a very short period, IIRC.

  • @FoxelVox said:

    @TimboJones said:

    @jsg said:
    Well, "32nd of never", as a particularly stubborn jsg hater put it a week or two ago in my interview thread, has arrived (and the wannabe enemy, as usual, is wrong).

    vpsbench 2.x is completed and ready to rock

    and will be made publicly available in the coming days.

    sigh you can't even dunk over my head properly. You can't say "32nd of never... has arrived... and will be publicly available in the coming days". That's not what "arrived" means.

    Motherfucker, why not just release it now and then you'd be actually fucking throwing it in my face? Jesus Christ.

    Are you waiting for the 32nd of August? Because I have some bad news...

    Yabs script went out for public testing and made many changes and fixes before it was good and stable. Jsg doesn't do that, so it'll be a bug ridden shit show that he'll defend before fixing actual bugs.
    He goes right into "production", testing commercial services like he's doing them a favour.

    Anyone that is ok with being guinea pig for unproven test app gets what's coming to them.

    You really do step out of bed on the wrong side each morning don’t you? F*cking asshole.

    @jsg Really nice benchmarking tool. Some improvements can always be done, but so far, looks promising. I like that you can also give custom parameters to it and doesn’t just benchmark everything at once, you can actually choose what to measure.

    Cheers for this one mate.

    And you feel like you need to step in after I responded directly to him? I say "motherfucker" and you say "fucking asshole". Oh, the irony. Why the fuck are you getting involved, anyway? Specifically, where was I wrong in what you replied to?

    You're praising an unreleased app of no proven value. You must be a gullible shit IRL. Are you a jsg shill?

  • @jsg said:
    Thank you very much! Not even for myself but for our community, because it's a really unhealthy state of affairs when a clueless (in the relevant field) and known-to-be aggressive, destructive, and vulgar sourpuss is allowed to take a dump all over the place.

    I provided so many test results on your v1 app, more than anyone else on LET combined. You only whine about personal attacks and can't deal with technical issues. I'm just your scapegoat whenever you fail from your own lack of knowledge.

    I don't dump all over the place and I always explain what my problem is with you. You just deflect and take no responsibility for anything you do. You can hardly post without throwing in a strawman, as already pointed out by someone else.

    We already know that you don't have a background in testing. I don't know why you keep thinking you do when you clearly don't.

    I wouldn't mention it, but you'll probably understand: I basically rewrote my benchmark software for LET. I even wrote my own http client because I wanted some useful features that the usual libraries don't offer, and I wrote my own ping because I didn't want to call an external program for that; I wanted to stick to a single (quite small, too) executable and have no dependencies, not even on standard OS tools. In short, I made a lot of effort for our community to help us make better-informed decisions - and the reward is mostly a personal attack from, as you so aptly called him, a "f*cking asshole", and some whining about not using git[whatever], not (anymore) being open source, etc.

    You've reinvented the wheel to do things your way. The LET community never gave two shits about V1 app. Poisson is never going to incorporate your "libraries".

    Wrong approach with me. People who try the "we won't thank you, we won't value all the time and work and knowledge invested for us, but we will simply insult you, attack you, complain, and push you for more and to bow to our cool mantras" approach will end up with empty hands.

    "If I don't get praised enough, I'm taking my ball and going home like last time".

    @vimalware said:
    I skimmed through hoping to see a link to a VCS repo.
    But, not so much as a page with version labelled tarballs?

    You can do better with ver 2.0 release.
    More transparency is a good thing.

    Else, no one is going to run an untrusted/unsigned code blob in an esoteric language.

    Well, then they don't use it. Fine with me.

    This is the kind of hostile response to valid feedback you'll get. See how he's doing it for the "community"? /sarc

    I wrote the VPS benchmark tool originally for myself but then decided to share it because others might have a need for it. Next I gave in to some pushing for the source code, and mainly got criticism for simply putting the tar.gz'd source online rather than putting it on github. Fun fact: meanwhile github is considered uncool by many, and I'd probably get criticized for not using gitlab ...

    You got that backwards. Github is THE popular open source repo of choice, not Gitlab. Not understanding the benefits of a public repo project with issue tracking, easy forking, commit logs, and official releases, and taking it as criticism instead of valid feedback, shows how fragile his ego is.

    Downloads were few anyway but the reviews and benchmarks were quite well liked. The reasons are simple: (a) too many mindless people who are more about virtue signalling (e.g. "one can't trust closed source code") and (b) most (incl. myself in many areas) prefer to simply consume.

    You put it on Yandex, preventing easy command line automation. It's like you went out of your way to make it a hassle to get. I was likely the majority of those downloads!

    The self proclaimed "security expert" is putting down people who want to know what they run on their system. C'mon people, this guy isn't even close to being the self proclaimed security expert he claims. I mean, he regularly defends Russian and Chinese government interests, why wouldn't people want to inspect the code he's pushing? This guy is just missing common sense, let alone the proper level of paranoia for the usual "security expert".

    The reality is different. (a) Neither open nor closed source code can be blindly trusted; there is hard evidence for that. (b) So how come that, to name just two examples, billions of people use Microsoft Windows and Apple products? Why don't they use linux? Btw., do you have the circuit diagrams, gerber files, etc. for at least your computer's main board and your disks? Did you verify them? Why not?

    Strawman. You never just make a point, you have to ask irrelevant questions as if that makes the point. I think my senile mother has been listening to you.

    (B) just a few examples, because they are popular, made by large corporations that are trustworthy, functional and come preinstalled on computers they buy. Linux is super fragmented and a nightmare to manage whereas the Windows and Apple experiences are consistent and require the least amount of dynamic support.

    What did that have to do with open sourcing again? Both Microsoft and Apple have a shitload of open source repos.

    I am using a private local repo in my daily work and I won't change to github, gitlab, or another system I do not like and do not need just because the "cool" crowd likes it. And btw., apropos "trust": I actually used a language with a very good track record, with strong static typing, and for which static and dynamic verification can be done - rather than python, perl, or javascript. And of course I did extensively test my code - unlike 99+% of all open source projects.

    What is this shit about "cool"? That's just a basic development step. And you can't make unprovable statements about 99% of open source projects. Many open source projects have built-in unit tests that run on each compile. I'm sure you do, too, right?

    But harping on about security and safe languages for a locally run standalone program is pretty moot. It's just not a concern in this use case.

    Frankly, I don't think that you are in a position to tell me what's good and right in my field of work - and you didn't. All you did was show that you are a "cool" guy who recites the "cool" mantras. So, again: you don't like to use software that isn't on git[whatever] and open source? Then simply don't use it.

    "My field of work" is a joke when you say it. I've worked with much, much smarter people than you. The really smart ones don't need excessive word drivel to articulate their points. The other really smart ones, who do talk as much as you, present the information in a logical and structured way: they make an assertion and then back it up. You think you're in their class, but you really, really aren't.

    Thanked by 2vimalware doghouch
  • @TimboJones said: excessive word drivel to articulate his points

    Jesus, can you imagine this asshole on meth?

    Thanked by 1jsg
  • @PHDan said:

    @TimboJones said: excessive word drivel to articulate his points

    Jesus, can you imagine this asshole on meth?

    An improvement.

    Thanked by 1paco
  • @TimboJones said: An improvement.

    I was talking about you, McFly.

    Thanked by 2jsg paco
  • jsgjsg Member, Resident Benchmarker
    edited August 2020

    UPDATE & Example

    While Mr Knows-everything-better (but is in fact utterly clueless) was again taking a dump in this thread, I worked on the benchmark software.

    Originally I just wanted to improve the formatting slightly, but then I found a problem that already plagued me in version 1.x: some (mainly Romanian, it seems) providers serve a "100 MB" test file that is actually 100 megabytes (100 × 10^6 bytes) as opposed to 100 mebibytes (100 × 2^20 bytes), which is too small.
    Thanks to the HTTP client code I wrote myself (in v1.x I used a standard library), v2.x shows an error code for this case, which is how I came across the problem; I also made some small enhancements to that client code along the way. The solution for those undersized test files is to simply use a 1 GB test file instead. That works because my code only downloads 96 MiB ("mebibytes", which is what most people take 'MB' to mean) anyway, so there is no need to worry about wasting traffic volume.
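The arithmetic behind the rejection is easy to check. A quick sketch (the 96 MiB minimum matches the 100663296 in the error output further down; the constant and function names here are mine, not vpsb's):

```python
# MB-vs-MiB check: why a decimal "100 MB" test file gets rejected.
MIB = 1024 * 1024              # one mebibyte
REQUIRED = 96 * MIB            # 100663296 bytes, the download size needed
decimal_100mb = 100 * 10**6    # 100000000 bytes: "100 MB" as some hosts serve it
true_100mib = 100 * MIB        # 104857600 bytes: a real 100 MiB

def big_enough(size: int) -> bool:
    """True if the remote file covers a full 96 MiB download."""
    return size >= REQUIRED

print(big_enough(decimal_100mb))  # False -> download aborted with an error
print(big_enough(true_100mib))    # True  -> 100 MiB (or 1 GB) files are fine
```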

    So, I'm happy to announce that the current version is 2.0.2, which also includes a change that makes (most) Linux disk benchmarking a lot better. FreeBSD still provides more honest results, but the Linux results are kind of OK now too.

    Now to the example (on Linux, yay!):

    // sudo ./vpsb -vscdn -p=/tmp -q
    Version 2.0.2, (c) 2018+ jsg/moerm (->lowendtalk.com)
    Machine: amd64, Arch.: x86_64, Model: AMD Ryzen 7 1700 Eight-Core Processor
    OS, version: Linux 4.19.0, Mem.: 11.746 GB
    CPU - Cores: 6, Family/Model/Stepping: 23/1/1
    Cache: 32K/64K L1d/L1i, 512K L2, 16M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh mmx fxsr sse sse2 htt sse3 pclmulqdq ssse3 cx16 sse4_1
              sse4_2 popcnt aes xsave osxsave avx rdrnd hypervisor
    Ext. Flags: syscall nx mmxext fxsr_opt rdtscp lm lahf_lm cmp_legacy cr8_legacy
              lzcnt sse4a misalignsse 3dnowprefetch
    
    [PM-SC]: 285.04 MB/s    (testSize: 2.0 GB)
    [PM-MC]:   1.00 GB/s    (testSize: 24.0 GB)
    [D] Total size per test = 2048.00 MB, Mode: Sync
    [D] Wr Seq: 457.23 MB/s
    [D] Wr Rnd: 724.29 MB/s
    [D] Rd Seq: 5167.05 MB/s
    [D] Rd Rnd: 4466.92 MB/s
    [N] speedtest.lon02.softlayer.com UK LON:, P: 35.8 ms WP: 35.8 ms, DL: 6.18 MB/s
    [N] 10.10.10.12                   XX loc:, P: 0.8 ms WP: 1.3 ms, DL: 91.06 MB/s
    [N] speedtest.fra02.softlayer.com DE FRA:, P: 28.0 ms WP: 28.7 ms, DL: 6.19 MB/s
    [N] wwwnl1-ls9.a2hosting.com      NL AMS:, P: 19.6 ms WP: 22.3 ms, DL: 6.20 MB/s
    Error [Network] Host 'lg-ro.vps2day.com': File size (100000000) too small (< 100663296)
    [N] lg-ro.vps2day.com             RO BUC: http status: -7, P: 44.9 ms WP: 44.9 ms, DL: 0.00 MB/s
    [N] speedtest.gwhost.com          RO BUC:, P: 50.1 ms WP: 89.9 ms, DL: 5.98 MB/s
    [N] 185.183.99.8                  RO BUC:, P: 46.0 ms WP: 46.0 ms, DL: 6.17 MB/s
    

    Notes:

    • That system is a Linux VM (VirtualBox) running on a Linux box. The disk is a nice but not high-end NVMe drive (IIRC a Samsung 860 Evo). Note that the disk is tested in sync/direct mode.
    • Note the two lines for host 'lg-ro.vps2day.com', the first of which starts with 'Error [Network] Host'. I intentionally left that host in my network-testing targets file (where you can put your own choice of target servers) because it's one of those Romanian servers whose "100 MB" test file is actually smaller than 100 MiB (mebibytes), so the download test is aborted (and an error is printed).
    • Note that the HTTP client prints an error code (in this case '-7') whenever the HTTP response code is not 200 or not everything worked fine.
    • Note that both the (normal) ping and the new "Web ping" introduced in v2.x are shown anyway as they are independent of the download speed benchmark part.
    • Note that in the line starting with '[N] speedtest.gwhost.com' the ping is 50.1 ms but the Web ping ("WP") is much higher, 89.9 ms. That's a good example of why I designed the Web ping and built it into vpsb. Those two numbers clearly show (a) that that host is not particularly fast (the one in the line above is 5 ms faster), and (b) that their test file is virtually worthless as a benchmark target because their HTTP server is seriously slow.
    • (the 10.10.10.12 system is a local test box)
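For readers wondering how a "Web ping" differs from a plain ping: the idea is to time a complete HTTP round trip, so the TCP handshake and the server's own processing are included. vpsb's implementation isn't published; the sketch below (Python, function name mine) only illustrates the concept:

```python
import http.client
import time

def web_ping(host: str, port: int = 80, path: str = "/",
             timeout: float = 5.0) -> float:
    """Time one full HTTP HEAD request/response round trip, in milliseconds.

    Unlike an ICMP ping this includes the TCP handshake and the HTTP
    daemon's processing time, which is why a slow HTTP server shows a
    WP value well above its plain ping.
    """
    start = time.perf_counter()
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", path)
        conn.getresponse().read()
    finally:
        conn.close()
    return (time.perf_counter() - start) * 1000.0
```

A ping of 50 ms next to a web ping near 90 ms, as for speedtest.gwhost.com above, then points at a slow HTTP server rather than a slow network path.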

      // sudo ./vpsb -d -p=/tmp -q -b
      [D] Total size per test = 2048.00 MB, Mode: Buf'd
      [D] Wr Seq: 1492.77 MB/s
      [D] Wr Rnd: 2795.94 MB/s
      [D] Rd Seq: 5014.00 MB/s
      [D] Rd Rnd: 3186.68 MB/s

    Same system, but this time the '-b' (buffered mode) flag is used and the results are obviously much higher.
    To avoid misunderstandings, let me quickly explain the logic:
    Buffered mode is what's usually used and what most of your applications (e.g. web server) will use. In a way it tells you the speed your applications can expect.
    Sync/direct mode is what I'm mainly interested in (but you maybe not). It tells you something about the hardware.
    Typical example: I've seen plenty of SSDs, spindles, and even NVMes with quite similar performance in buffered mode because, as long as you don't go to extremes, the OS (especially Linux) will try to make all of them seem fast. As long as your VM and applications are in the light to middle range, that's fine. When reviewing a provider, however, I also want to see what material they use, what kind of disks, and that's where sync/direct mode is important.
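The two modes map onto plain POSIX open flags. A minimal sketch (Python, scaled down to 16 MiB; vpsb itself additionally supports direct I/O, configurable sizes, and pauses between rounds, none of which is reproduced here):

```python
import os
import tempfile
import time

def write_rate(flags: int, total: int = 16 * 1024 * 1024,
               chunk: int = 1024 * 1024) -> float:
    """Write `total` bytes in `chunk`-sized pieces; return MB/s."""
    data = os.urandom(chunk)  # random payload, generated outside the timing
    path = os.path.join(tempfile.gettempdir(), "vpsb_demo.bin")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | flags, 0o600)
    start = time.perf_counter()
    for _ in range(total // chunk):
        os.write(fd, data)
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return total / elapsed / 1e6

buffered = write_rate(0)         # default: the page cache absorbs the writes
synced = write_rate(os.O_SYNC)   # each write waits until data reach the device
print(f"buffered: {buffered:.0f} MB/s, O_SYNC: {synced:.0f} MB/s")
```

On a typical VPS the buffered number is dominated by the guest's page cache, which is exactly why the sync/direct number says more about the underlying disks.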

    Now the same game, same system, but with an SSD (a good one but not high-end), first in sync/direct mode, then in buffered mode:

    // sudo ./vpsb -d -p=/data/tmp -q
    [D] Total size per test = 2048.00 MB, Mode: Sync
    [D] Wr Seq: 507.40 MB/s
    [D] Wr Rnd: 674.99 MB/s
    [D] Rd Seq: 4978.61 MB/s
    [D] Rd Rnd: 3335.23 MB/s
    
    // sudo ./vpsb -d -p=/data/tmp -q -b
    [D] Total size per test = 2048.00 MB, Mode: Buf'd
    [D] Wr Seq: 1361.09 MB/s
    [D] Wr Rnd: 2853.56 MB/s
    [D] Rd Seq: 5282.43 MB/s
    [D] Rd Rnd: 3300.89 MB/s
    

    As you see, in buffered mode both seem quite similar, but in sync/direct mode the differences become visible; with spindles even much more so, though unfortunately I don't have a spindle in that system. I'll do my best to present some more tests very soon (tomorrow, probably), incl. some with spindles.

    Thanked by 1vimalware
  • @PHDan said:

    @TimboJones said: An improvement.

    I was talking about you McFly.

    lol. I'll add "excessive", "drivel", and "articulate" to the list of words you don't understand.

  • @jsg said:

    Now the same game, same system, but with an SSD (a good one but not high-end):

    // sudo ./vpsb -d -p=/data/tmp -q -b
    [D] Total size per test = 2048.00 MB, Mode: Buf'd
    [D] Wr Seq: 1361.09 MB/s
    [D] Wr Rnd: 2853.56 MB/s
    [D] Rd Seq: 5282.43 MB/s
    [D] Rd Rnd: 3300.89 MB/s

    You are using "an" SSD, which must be PCIe 4.0 to exceed 3500 MB/s. What model is that? And for how long might one expect 5282 MB/s performance, since this benchmark "tells you the speed your applications can expect"?

    Also, why the huge spread between random and sequential writes? Sequential reading or writing means FEWER commands, less work, and less context switching (i.e. more efficient), so sequential is expected to be faster than random, not half as fast (!).

    No personal attack, legit data review question.

  • jsgjsg Member, Resident Benchmarker

    Seq. vs. Rnd - Explanation:

    No secrets there. Both random and sequential tests write out x rounds (settable via command line) of y bytes of data each (also settable via command line). Hence the 'total size per test' shown in the results is simply x rounds times y bytes. In the results printed above it's 2 GB (which is the default).
    The data written out (and later read back) is purely random, created by a good quality PRNG. The time needed to create those data is not counted against the benchmark timing.

    The difference is this: sequential write simply writes out the data sequentially while random writes them out in x rounds, where each is written to a randomly chosen sector (within the total test file size, 2GB in the example above).

    As most modern OSs do relatively complex caching on multiple levels (plus, potentially, caching by the disk controller too), random writing can be, and usually is, faster: within, say, 128 writes there is a high probability that some of the target sectors are close to each other, hence fall within the same cache segment, and hence are written out together.

    It is almost certainly wrong to assume that "seq. writing is simpler and should be faster", because that's not how it works. The OS "thinks" in terms of locality and sees only chunks of data to be written out.

    The reason I (create and) use random data is simply to make it harder for the OS to be "smart" about caching. This also makes sense for vpsbench users because in real use cases (as opposed to benchmarking) you also tend to have different data and usually don't write out the same data again and again.
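To make the scheme concrete, here is a scaled-down sketch of the x-rounds/y-bytes logic described above (Python; vpsb's real defaults are 2 GB total and its writes go through sync/direct descriptors, neither of which is reproduced here, and all names are mine):

```python
import os
import random
import tempfile
import time

SECTOR = 4096
ROUNDS = 256                 # "x rounds"
CHUNK = 64 * 1024            # "y bytes" per round
TOTAL = ROUNDS * CHUNK       # the "total size per test" shown in the results

def run(sequential: bool) -> float:
    """Write ROUNDS chunks of PRNG data, sequentially or to random sectors."""
    rng = random.Random(1)
    payloads = [os.urandom(CHUNK) for _ in range(ROUNDS)]  # not timed
    path = os.path.join(tempfile.gettempdir(), "vpsb_rw_demo.bin")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    os.ftruncate(fd, TOTAL)  # pre-size the file so random seeks stay in range
    start = time.perf_counter()
    for i, data in enumerate(payloads):
        if sequential:
            offset = i * CHUNK
        else:
            offset = rng.randrange(0, (TOTAL - CHUNK) // SECTOR + 1) * SECTOR
        os.lseek(fd, offset, os.SEEK_SET)
        os.write(fd, data)
    elapsed = time.perf_counter() - start
    os.close(fd)
    os.unlink(path)
    return TOTAL / elapsed / 1e6  # MB/s

print(f"seq: {run(True):.0f} MB/s, rnd: {run(False):.0f} MB/s")
```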

    Btw., context switching plays next to no role at all. And also, no, sequential vs. random writing does not necessarily mean fewer or more commands, because file-positioning calls (required in random mode) nowadays usually do not go to the device but rather are treated as "hints" by the OS ("which sectors will need to be written?").

    Whatever the details, in the end the result numbers simply show how fast data are written to or read from disk. If those numbers seem to be unrealistically high it's due to caching (both reads and writes) by the OS.

  • No.

    Caching cannot get more than 550MB/s read or written to your SSD. This is a physical limitation of the bus.

    You seem to give caching way too much credit all the time.

  • jsgjsg Member, Resident Benchmarker
    edited August 2020

    @TimboJones said:
    No.

    Caching cannot get more than 550MB/s read or written to your SSD. This is a physical limitation of the bus.

    You seem to give caching way too much credit all the time.

    Yes and no. You are right wrt the SATA hardware. But you are wrong anyway, because modern OSs don't return from a write call when the data have been written to the device; they return when they have done [something], where "something" can be anything between "marked a buffer in memory as to-be-written-out" and "started to write the buffer out to the device" or even "actually wrote the data to disk" - and even then that could mean the data have just been written to yet another cache, like that of a RAID controller. Plus, often the disk itself has a cache.

    Keep in mind that most systems have plenty of free memory, because only a part of it is in use, and modern OSs use the rest for caching (Linux being particularly aggressive). Also keep in mind that the vpsbench default test size (all data read or written) is just 2 GB. Plus, I have a small pause between rounds. That pause can be configured, incl. to 0, but it's a major ingredient in a VPS test because I don't want to dominate the node but rather be a polite neighbour. Plus, it's realistic, because the vast majority of user applications also don't read/write large chunks without pause; in fact vpsbench, even with the pause between rounds, stresses the node more than 80% of users would.

    Result: vpsbench does give you reasonable numbers. If vpsbench achieves e.g. 800 MB/s disk writing then a customer buying that VPS will also see 800 MB/s disk writing with most applications.

    Also note that the parameters I chose for benchmarking are (based on my experience) a good and realistic config, but vpsbench is very configurable, so if someone has a use case outside of the common frame (s)he can simply adapt vpsbench via the command line.

  • @jsg said:

    @TimboJones said:
    No.

    Caching cannot get more than 550MB/s read or written to your SSD. This is a physical limitation of the bus.

    You seem to give caching way too much credit all the time.

    Yes and No. You are right wrt the Sata hw. But you are wrong anyway because modern OSs don't return from a write call when the data have been written to the device;

    ??? Then you're not measuring the storage speed and you're doing it wrong.

    You can have a big-ass water storage system (caching), but the amount of water that can be transferred from it is limited by the hose connected to it - SATA 6 Gb/s being the hose in this example. Having a storage system the size of a lake won't make the water transferred through the small-ass hose 10X faster. coughignore water pressurecough

    In other words, if your 1Gbps NIC said it was doing 5Gbps, you'd flat out call shenanigans, no? This is the same fucking thing.

    To continue the network analogy, it's like that thing with iperf where the sender bursts a bunch of data at the start and uses that number instead of the destination's stats of actually received data.
