New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
YABS is just a snapshot of a moment; it can be used as propaganda if you want to.
Run it on an empty node and profit.
Saw it a few times: after the sale, or a few months later, performance was a lot worse.
Not always the case, but sometimes.
You would need consistent benchmark updates, but you still don't know which node they're on, or whether that node is loaded the same as the other nodes.
Even if the provider doesn't have an interest in manipulating it, the results can still be skewed indirectly: they may be lucky enough to be on a node with fewer virtual machines and/or less load, which results in better benchmark numbers.
What are we supposed to see in this thread? A bug fix? Someone admitting their script or their test is wrong? Relax.
Most of us just test the performance to compare with another provider using the same tool. I don't think we ever compare the tools themselves and accept/reject the results as wrong.
No offense, but I usually use bench.sh, bench.monster, speed tests (to particular locations), wget (test file downloads), and YABS on a fresh new dedi/VPS to compare networks, providers, and nodes - but most servers go into idle mode after that.
Of course! I'm not saying that YABS is the be-all and end-all and providers could put you on empty nodes too.
I've had providers who went ahead and shared their YABS results, but when I bought their service the disk performance was so throttled that I couldn't even log in via SSH. Their support would say "please log in to the server and give us a YABS", and when I did manage to log in, everything would seem normal as per YABS - only for the VPS to become unresponsive and throttled again.
But anyone relying on jsg's vpsbench is gonna get a very skewed result - you can check the Contabo thread, and even my analysis, for how badly it skews results. That's why I went out of my way to use AWS for this test, to ensure that I could be as fair and consistent as possible in judging both benchmark tools.
Don't go by the titles assigned to a user on LET. The addition of "Server Review King" almost makes him seem like an authority on the subject, when he's actively misleading people.
I still reckon good old bench.sh is enough for me
wait, you all bench your VPSes?
next: RGB VPS trend
#VPSMR
there is something off with the numbers that disk bench gives in general and I doubt this is even related to caching. here is a bench from a dedi, spinning rust in soft raid1:
close to 8GB/s from sata harddisk would be a dream... but even memory won't get close to this (a cached dd with 2G of zeroes ends up being about 1GB/s on that system).
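To illustrate the cached dd figure mentioned above (a sketch; the file path and sizes are arbitrary picks): write a file, then read it straight back while it is most likely still sitting in the page cache, so the second dd reports memory speed rather than disk speed.

```shell
# Write 64 MiB, forcing a flush at the end so the write actually hits the disk:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
# Read it straight back: right after the write the data is typically still in
# the page cache, so this measures cache/memory throughput, not the disk.
dd if=/tmp/ddtest of=/dev/null bs=1M
```

Comparing the two reported rates gives a feel for how much the cache inflates naive read benchmarks.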
it has been pointed out before that these numbers represent something else and can't be compared to what others would define as sequential/random reads in sync mode - but I guess that's how it is. maybe there even is an oversight in the calculation; without looking at the code, no one will know what exactly is going on there or how this number is determined.
maybe it's some sort of averaging, calculation or aggregation and it's lacking a divider or whatever.
maybe it is intended like this, but not many people understand why these kinds of numbers would be the better ones.
Every benchmark program running on and testing a VM is subject to the fact that caching is going on (plus other factors) beneath the VM on the node. There is nothing to evade, we are talking about a simple fact.
... IF both did the same tests with the same parameters - which is rarely the case.
Misunderstanding it seems. I was talking about what is measured, not about the OS.
That's something you must ask Amazon, not me. And it doesn't change what I said.
Anyway the question what those numbers mean is justified. Does it mean that your AWS VM model does achieve "3000 IOPS or 125 MB/s, whichever is greater" minimum? Or average? Each and every second of each and every day? Or on average, and if so, over what time period? And measuring what? read or write, sequential or random, buffered or direct/sync? If you don't have answers to that then "3000 IOPS or 125 MB/s, whichever is greater" has very little value.
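One way to make those questions concrete is to test each combination explicitly, so the advertised limit can be pinned to a specific workload. A hypothetical fio job file (filename, sizes, and runtimes here are made-up values) might look like:

```ini
; Test each access pattern separately: sequential vs random, large vs small blocks,
; with direct I/O so the guest page cache does not inflate the numbers.
[global]
filename=/tmp/fio-testfile
size=256m
direct=1
ioengine=libaio
runtime=30
time_based=1

[seq-read-1m]
rw=read
bs=1m

[rand-read-4k]
stonewall
rw=randread
bs=4k
```

Running this with `fio jobfile.fio` and comparing the two sections shows whether a given cap is bandwidth-bound or IOPS-bound.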
Also: please, already get it! You can not willy nilly compare results of different benchmarks, in particular when those benchmarks work significantly differently!
One particularly striking example is that my reviews are not based on one or a few runs but usually on 100 or more runs, at different and unpredictable times of day. In fact, "benchmark xyz always delivers very similar results" is not an indicator of quality, because by their very nature VPSes deliver quite different results over, say, a day.
Plus there is much going on below the surface. One example: I went to great lengths to really test random disk I/O, meaning that both the data and the location (in the test file/on the disk) are truly random. And by truly random I mean that a well-accepted, good-quality random generator is at work, one that has passed all the usual tests (e.g. PractRand, BigCrush, etc.).
Why? Because modern operating systems aggressively cache and may simply write out the buffer again when the data to be written are the same; or they may notice that a location is a known step away from the last location, for example because a "random" IO test actually just skips a certain number of sectors between IO operations.
I actually tested that. I did exactly the same operations, one time with a fixed skip, and one time with truly random skips, and there was a noticeable difference.
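A minimal sketch of that fixed-vs-random comparison with stock tools (file name, sizes, and the toy scrambling constant are all arbitrary; a real test would use a quality PRNG, and direct I/O where supported, since otherwise the page cache and readahead mask most of the difference):

```shell
# Create a 16 MiB test file (4096 blocks of 4 KiB):
dd if=/dev/urandom of=/tmp/seektest bs=1M count=16 2>/dev/null
# Fixed stride: every 64th 4 KiB block -- a pattern readahead logic can predict.
for i in $(seq 0 15); do
  dd if=/tmp/seektest of=/dev/null bs=4k count=1 skip=$((i * 64)) 2>/dev/null
done
# Scrambled offsets: a toy multiplicative hash stands in for a real PRNG here,
# just to produce offsets with no fixed step between them.
for i in $(seq 0 15); do
  dd if=/tmp/seektest of=/dev/null bs=4k count=1 skip=$(( (i * 2654435761) % 4096 )) 2>/dev/null
done
```

Timing the two loops (e.g. with `time`) on real hardware with `iflag=direct` added is one way to see whether predictable strides get an unfair speed-up.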
Uhm, where do you see 300 IOPS?
First congrats, that's a nice processor with great single-core results!
As for the disk, which OS, what disk controller?
Plus keep in mind that dedis also pose their own set of problems, because there are not dozens or even over a hundred VMs sharing a few drives, reading and writing more or less all at the same time. One result of that: caching 2 GB isn't a big thing with enough free memory (which I presume your dedi has). Note that those high numbers are read results. And what do the read tests (which, unfortunately, have to stay within tight space limits since I mainly test smallish VPSes) read? Right, they read what the write tests have written out before, so on a server with plenty of memory chances are that the whole 2 GB is still in the cache (and hence insanely fast to read).
P.S. I think I may have found a solution for that problem, which irritates me too. Give me some days; I'll try that idea, and if it works I'll put it into the benchmark.
Your point would be right if a disk were directly exposed to the VM. Unfortunately, that is not necessarily how a provider implements their service; they may take a more software-defined approach to disk management.
In this scenario, any I/O requests received by the underlying virtualized hardware are rate-limited to the above-mentioned criteria, regardless of whether such requests are random or sequential. Probably bone-headed, but that is what fio proves, and you can get a test instance yourself if you're so inclined.
Come on, at this point you're not doing anyone any favours. Buffering into the disk cache is very much an operating-system-level concern, and you'd be measuring either direct I/O or buffered I/O depending on what you're doing.
If that's your response, then this leaves only a few possibilities:
I did try using multiple instances configured in the same way to see if it was a localized node-level issue, but the reported values always remain the same.
You can go through the fio values that I've given you earlier. It remains the same, regardless of sequential/random reads/writes.
Did you forget that I already mentioned I tried running your benchmark multiple times, and that the difference always existed?
Fixed is not truly random. It's good to see you agree.
This is with a 1MB block size, which tops out the MB/s limit and therefore you can only see ~130 MB/s and thus around 120 IOPS.
If you go down to a sufficiently small block size, such as 4k, then the IOPS becomes the bottleneck and you can see the 3000 IOPS limit come into play. At that point the 125 MB/s limit becomes insignificant.
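A back-of-envelope check of that crossover (treating MB and MiB loosely for the sake of round numbers):

```shell
# With a 125 MB/s cap: 1 MiB blocks can do at most ~125 ops/s (bandwidth-bound).
echo $(( 125 * 1024 * 1024 / (1024 * 1024) ))   # -> 125
# With a 3000 IOPS cap: 4 KiB blocks move at most ~11 MiB/s (IOPS-bound),
# so the 125 MB/s cap never comes into play at this block size.
echo $(( 3000 * 4 * 1024 / (1024 * 1024) ))     # -> 11
```

In other words, large blocks saturate the bandwidth cap first, and small blocks saturate the IOPS cap first.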
I should have said "whichever is lower" instead of "greater", though; I spotted that mistake in one of my previous statements and have corrected it since.
Thanks @stevewatson301 for the write up and the experiment. You used a system with known values and provided all test parameters for reproduction of results.
@jsg please don't take this as an attack. user feedback is valuable and one of the best ways of improvement.
If all users sent their feedback this way, it would be great. Normally one gets "it is not working" (without any other meaningful info).
Users shouldn't have to guess what the numbers mean; there should be documentation on what every value means. Maybe --help could print this info, or a separate page on the project's website could explain it, with some common results for reference too.
Haha. We should really compile a page with many useful uses for these idlers. I see suggestions scattered across different threads but not in one place.
Would you rather they just write their conclusion without the details of how they arrived at that conclusion? No thanks.
Please do not confuse what my benchmark does and explanations for certain results.
At the end of the day my benchmark measures what one can expect, no matter the details. If one is about to buy a VPS, one wants to know what one's applications can achieve, no matter the details.
fio (which is used by yabs) is a nice, useful tool, but it was written with a purpose in mind (even laid out in the doc intro), and that purpose is not the purpose vpsbench was designed to serve. Plus yabs only does one single test type, and a rather mixed one at that. But still, if someone prefers to use fio/yabs I'm certainly not in their way. Just don't expect me to use a test and methodology I do not consider adequate.
On a VM one cannot control the node OS. If it does caching, and it almost always does, then that is so, period; no benchmark can change that.
That said, I indeed did include the functionality to break through caches, even node caches, but doing that is not what one does in a general VM benchmark, which serves to provide a facts based impression of what to expect of a VM.
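For context, on Linux the usual way to bypass the guest's own cache is O_DIRECT, which dd exposes as oflag=direct/iflag=direct. A sketch (path and sizes are arbitrary; not every filesystem supports direct I/O, hence the fallback):

```shell
# Try a direct (guest-cache-bypassing) write; fall back to a flushed buffered
# write where the filesystem refuses O_DIRECT:
dd if=/dev/zero of=/tmp/directtest bs=1M count=8 oflag=direct 2>/dev/null \
  || dd if=/dev/zero of=/tmp/directtest bs=1M count=8 conv=fdatasync 2>/dev/null
# Note: even with O_DIRECT, only the guest's page cache is bypassed; the node
# beneath the VM may still cache, and the guest cannot control that.
```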
No, there is a third option - and a very relevant one: Most LET providers operate quite differently from Amazon/AWS. And, no surprise, you'll rarely find a LET provider guaranteeing x MB/s and y IOPS.
Maybe I'm looking at it the wrong way, but the numbers I see are far away from 3000 IOPS.
True. But it is nevertheless what often (incl. in some benchmarks) passes as "random access". And anyway, what's to complain about in my way of doing it, using a real, quality PRNG for both the data and the locations? Would you like it better if I did it using a sh_tty generator, and only for either the data or the location, like quite a few benchmarks do?
@jsg since when did you get the "Server Review King" title?
Who gave it?
Should be rather "Essay Competition Winner" or "Extempore King"
Uhm, I do welcome user feedback. In fact I already have enhanced or adapted vpsbench and my reviews based on user feedback multiple times!
What I do not welcome is bashing me or my work or occasionally even ganging up on me.
Again: I have done a lot of work for this community, and certainly not because I'm masochistic. One should be able to expect to at least not be rewarded with personal attacks, stubbornly repeated lies, etc. If someone doesn't like my work (or me) they are free to simply ignore it.
It doesn't, and my tests and observations amply demonstrate so. If you want to prove this is not the case, please provide transparency into your methods: reproducible procedures and code for how one can achieve these results.
Not sure why you would think so. Outside of containerization technology (in which case it's not really a VM), you do have control of the OS.
What do the practices of LET providers have to do with this discussion? The bottom line is that your benchmark is unable to find the hardware limits, which is the very purpose of benchmarks like this. If it is unable to do so, it's completely useless.
Did you not read my explanation that it's limited to the lower of 3000 IOPS or 125 MB/s? And my explanation of how 1 MB block sizes top out the 125 MB/s limit, leading to ~125 IOPS? If you tested with 4k block sizes, you would see that too.
Or did you already read it, but are desperately trying to ignore my statements in the hope that I'd not notice, back down, and accept?
Your benchmark is already shitty, no need to use a shitty generator.
The same guy who enables and allows the RN spam to continue.
Nah, "Russian Troll" is more appropriate.
Ah!
It seems @jsg is a little tired; he can do better after some good rest.
Why doesn't someone just strace the thing and figure out what it's doing? If not strace there are other ways to oversee the work of a binary.
I wanted to be objective in my analysis and focus on what I could test and control. As far as strace is concerned, he's spamming the kernel with clock_gettime calls. Of course his smart-assery didn't end there: he then took the chance to educate me on how kernels of yore worked and how clock_gettime was used to figure out time.
As for reverse engineering, my reverse engineering skills are not that good, and there are too many layers of indirection going on in their Nim-written binary for me to figure it out.
It would be unwise to attack him with speculation, and I wanted to stick to just what I could discover.
Uhum
Followed by personal attacks and insults ...
Thanks for confirming what I said:
You clearly don't like "that guy" but you stay at his oh so bad place ...
No further questions, thank you. My focus is on people who constructively contribute to our community, not on evil minded bashers.
This was a mistake at my end and I've said as much that it should be "lower" (read my most recent replies to you). I'll reach out to a mod to see if it can be edited.
But realistically, anyone worth their salt (and especially you) should be able to figure out that this was a mistake. If "greater" were really the case, you could get infinite MB/s just by increasing the block size of your operations.
Regardless, that small mistake still doesn't detract from the rest of my observations: your benchmark is unable to figure out the limits of the hardware.
This is going to go nowhere, as objective comparisons and constructive criticism are always met with a wall of nonsense and talking around the issue. The goalposts are constantly moved and there will never be a resolution. 🤷♂️
But the guy now gets to unravel himself and how he hides behind a wall of nonsense to protect his ego. That alone is valuable.
No, bored by ever-new attacks disguised - for a while at least - as "objective review".
My benchmark makes (a very cheap) syscall many times? Duh!
When benchmarking one needs to somehow get the time. There are two basic options, (a) time the whole thing once, or (b) do it in "slices", just like fio and others do, and find out the time used for each slice. Choosing the better route (b) of course leads to many calls, duh. Only a clueless person would call that "spamming the kernel with syscalls".
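Option (b) can be sketched in shell like this (GNU date is assumed for nanosecond timestamps; the workload and file names are placeholders):

```shell
# Time each slice of work separately; one timestamp pair per slice is the price
# of per-slice results -- hence many clock reads over the course of a full run.
for slice in 1 2 3; do
  start=$(date +%s%N)
  dd if=/dev/zero of=/tmp/slice_$slice bs=1M count=4 conv=fdatasync 2>/dev/null
  end=$(date +%s%N)
  echo "slice $slice: $(( (end - start) / 1000000 )) ms"
done
```

Per-slice timings let a benchmark report variance over the run instead of just a single averaged number.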
There is not even a need for reverse engineering because I openly laid out how vpsbench does its disk IO tests.
So, why did you do it anyway? And btw. it's unwise to attack anyone with speculation.
Thanks btw for finally being honest (or clumsy) enough to spill the beans and openly stating what your thread is about: it's about attacking me.
Hahahaha! Thanks for the laugh!
It was already explained to you how your call isn't cheap. But since you keep forgetting, I'll leave the link once more: https://en.wikipedia.org/wiki/Kernel_page-table_isolation. And the inaccuracy of your benchmark is out there for everyone to see.
Which isn't sufficient because of the large discrepancies that I keep seeing and that you're unwilling to address in any capacity.
I stuck to reproducible facts, you're welcome to run these tests yourself. But as I said, you keep writing essays with the hope of one-upping me, ignoring the fact that I myself said I wanted to avoid speculation.
As they say, "reading is an essential skill". Read the results, run the tests, you'll see that there's no attack, just that your benchmark is wrong.
I'm glad I could make your day better
Show us the tool of kings. Show us jsg's benchmark source code!
You will only get PMS answers such as these:
https://www.lowendtalk.com/discussion/comment/3242037#Comment_3242037
https://www.lowendtalk.com/discussion/comment/3267793#Comment_3267793