VPS Benchmark Scripts
These are some of the benchmark scripts I've found and used in the past. Are there any others that are better, or do you recommend sticking with one of these?
Usage example
$ wget -qO- bench.sh | bash
Usage example
(curl -s wget.racing/nench.sh | bash; curl -s wget.racing/nench.sh | bash) 2>&1 | tee nench.log
(wget -qO- wget.racing/nench.sh | bash; wget -qO- wget.racing/nench.sh | bash) 2>&1 | tee nench.log
Usage example
bash <(wget --no-check-certificate -O - https://raw.github.com/mgutz/vpsbench/master/vpsbench)
Usage example
$ wget http://busylog.net/FILES2DW/busytest.sh -O - -o /dev/null | bash
Usage example
wget https://raw.githubusercontent.com/STH-Dev/linux-bench/master/linux-bench.sh && chmod +x linux-bench.sh && ./linux-bench.sh
Usage example
$ wget https://raw.githubusercontent.com/hidden-refuge/bench-sh-2/master/bench.sh && chmod +x bench.sh && ./bench.sh
Usage example
wget --no-check-certificate https://github.com/teddysun/across/raw/master/unixbench.sh
chmod +x unixbench.sh
./unixbench.sh
Comments
YABS - https://github.com/masonr/yet-another-bench-script
ServerScope - https://serverscope.io/
I'm kind of curious about that bench.sh: how is it possible to display a web page when viewing it in a browser, but serve a shell script instead when the exact same link is fetched with wget/curl?
EDIT: OK, so the website detects the user agent. If I'm using a browser, it shows the webpage at https://bench.sh; if it detects something like "User-Agent: Wget/1.13.4 (linux-gnu)", it 302-redirects to "http://86.re/bench.sh" and the script downloads.
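The server-side logic inferred above can be sketched as a simple User-Agent branch. This is a hypothetical reconstruction, not bench.sh's actual code; the redirect target is just the one observed above.

```shell
# Hypothetical sketch of the dispatch bench.sh appears to perform:
# CLI fetchers get a 302 to the raw script, browsers get the HTML page.
dispatch() {
  case "$1" in
    *[Ww]get*|*[Cc]url*) echo "302 http://86.re/bench.sh" ;;
    *)                   echo "200 text/html" ;;
  esac
}

dispatch "Wget/1.13.4 (linux-gnu)"
dispatch "Mozilla/5.0 (X11; Linux x86_64)"
```

You can confirm the real behavior yourself by fetching the headers with different agents, e.g. `curl -sI -A "Wget/1.13.4 (linux-gnu)" https://bench.sh` versus a browser-like User-Agent string.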
The ones that use dd are not reporting useful information for disk writes. YABS by @MasonR is the most accurate for bandwidth throughput because it uses iperf, but the problem is the lack of public APAC servers.
iperf.cc
should do. (Not mine.)
Those are the servers that are used in @MasonR's YABS
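For reference, a typical manual iperf3 run looks like the following. The hostname is a placeholder, not one of YABS's actual servers; public iperf servers come and go, so check a current list first.

```shell
# Multi-stream TCP throughput test with iperf3 (guarded in case the tool
# isn't installed). -P 8 opens 8 parallel streams, -t 10 runs for 10
# seconds; add -R to test the download direction instead of upload.
# NOTE: the hostname is a placeholder - substitute a live public server.
if command -v iperf3 >/dev/null 2>&1; then
  iperf3 -c iperf.example.net -P 8 -t 10 || echo "server unreachable (placeholder host)"
else
  echo "iperf3 not installed"
fi
```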
Some thoughtful work went into @jsg's vpsbench: https://www.lowendtalk.com/discussion/144821/new-vps-specific-benchmark-update-download-avail/ is the beginning of some interesting discussion, worth digging into a bit more for useful insights relating specifically to VPS performance. I share his concerns, and my homebrew LEBRE Xtended bench (currently three VPSes are a quarter of the way through testing; aiming for 30 days of data) now includes:
First batch of results will only be out around middle January.
Have you used it, though? It's the most inaccurate benchmark I've ever used. I've only ever seen one person ask about it, and when she didn't reply I understood that no one used it and no one gave a shit, so I've just ignored it.
Run it on your shittiest servers and then wonder why it gives disk results in the GB/s range (free Oracle VMs with 50 MB/s actual limits and Cloud at Cost with 6-60 MB/s actual limits). Then run it on your NVMe servers and wonder why those results are lower. The numbers are nonsensical.
I gave up because the author is never wrong, just clumsy, and doesn't give a shit about doing good work.
I have not used it - but I find the ideas intriguing, and I would like to subscribe to the newsletter!
@MasonR is prem.
For the sake of fairness and correctness:
I've done a good number of benchmarks and reviews using my vpsbench, and quite a few of them were done at providers' request. I also know of multiple providers using my vpsbench internally to check server hardware and software, optimize servers, evaluate offered systems, etc.
Quite a few have publicly thanked me for my benchmarks and none doubted the results of my benchmarks.
So that's quite a few professionals whose income is influenced by my benchmarks/reviews -against- 1 guy who loves to trash talk me and my work.
I wouldn't worry about people who have no problem assuming they must be correct even when they lack the ability to understand. Asperger's is a real thing.
I made a point of saying your benchmark reports GB/s for shitty services and your response is that providers don't complain. No shit, Sherlock. You really don't understand much.
Why a provider would ever use your benchmark to check their system makes no sense to me and makes me think the provider is garbage.
What you said has no relevance to the validity of the test results. I have yet to see someone else post results from your benchmark, and @poisson doesn't even use your benchmarks. In fact, he does a MUCH better job than you.
I've questioned your results as being valid, as well as your commentary on those results as being inconsistent and indicative of problems, nearly every time. You really don't have any validation that your results are valid other than "people use it and thank me." Yet you're in the security field? Haha, haha, haha.
@TimboJones
Did it ever come to your mind that you might be the one who fails to understand? Of course not.
Plus you are lying. My results are quite consistently lower ("worse") than those from other benchmarks and that's because I want realistic numbers rather than optimal ones.
This is a typical case of TimboJones modus operandi. You smear others who actually contribute to the community while you yourself have not written any reviews afaik, let alone ones based on real world numbers.
Your greatest trolling skill is to repeat the thing you keep doing while pretending (oblivious?) you don't.
Others? It's just you. It's not a smear; you are putting out garbage and acting like it's gospel. Worse, people are actually misled into believing you know wtf you're talking about, but you really don't. It's really, really pathetic of you; you're like Slashdot's version of the legendary asshole apk.
Realistic? He says, without providing any proof. Which specific other benchmarks would those be? You are tone deaf. I gave you ample warning to check yourself before you're humiliated further. You asked for it, asshole:
Free tier Oracle VM:
CloudatCost V3 server:
CloudatCost V1 server:
LETBox:
VMBox:
Expected excuses:
1. You test on FreeBSD 12 and these are all linux based
2. I'm somehow doing it wrong
3. Server suddenly became super busy the moment after doing vpsbenchmark
4. Because Facebook
5. Because someone was on your lawn
@TimboJones
Thank you for finally providing evidence for what I have said: you are utterly clueless and do not know what you're talking about.
Of course you didn't know that, but the 'direct' flag usually does dramatically slow down IO by (trying to) disable OS caching. Btw, vpsbench can defeat even RAID controller caches, while those tools are limited to (largely) disabling the OS cache.
You can verify that by simply running
dd if=/dev/zero of=/test_path/test.out bs=64k count=16k
a couple of times with, and then a couple of times without, the oflag=direct argument. On my desktop (Linux 64-bit) the result without the 'direct' flag is about 100% faster than with it. In other words: yabs disk results are bound to be artificially slower than mine. Simply leaving out the 'direct' flag will dramatically change the situation.
My vpsbench on the other hand does not (ab)use other tools to do the work; it uses its own code that actually reflects what potential customers can expect from a VPS's disk. Plus, vpsbench doesn't stupidly write nonsense data or, even worse, extremely well cacheable and compressible zeroes, but real-world data that is very hard to cache and to compress.
As for the timing, dd, ioping, and vpsbench are all based on the best high-resolution clock available on a system, typically CLOCK_MONOTONIC.
TL;DR You are a clueless idiot who does not contribute anything himself but consistently tries to smear people who actually wrote benchmark tools, have lots of experience and know what they are doing, while you - provably - do not even understand the tools - written by others - you use.
And you call me an a__hole? Well that's probably because that is the playground you really know and live in ...
P.S. Kudos to @MasonR - that's a nice bit of shell scripting! I would suggest, though, basing it on sh rather than bash, because bash is not available everywhere. Anyway, his tool is useful for getting a first impression of a system.
@jsg it seems to me fio does a better job measuring disk performance on linux. The rate of drop of iops with increasing block sizes seems like a better indicator of disk performance than dd, and fio can disable disk cache (it was at least designed with that possibility). I use ioping for latency tests. I hope my rationale for using fio and ioping on Linux is sound.
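For illustration, a minimal fio random-read/write job plus an ioping latency probe along those lines might look like the following. The parameters (4k blocks, 64 MB file, 15 s runtime) are illustrative assumptions, not @poisson's actual settings, and both tools are guarded in case they aren't installed.

```shell
# 4k random read/write with the OS cache bypassed (--direct=1), then
# request latency with ioping against the current directory's device.
if command -v fio >/dev/null 2>&1; then
  fio --name=randrw --filename=./fio.test --rw=randrw --bs=4k \
      --size=64m --direct=1 --runtime=15 --time_based --group_reporting
  rm -f ./fio.test
else
  echo "fio not installed"
fi

if command -v ioping >/dev/null 2>&1; then
  ioping -c 10 .    # 10 latency probes; reports min/avg/max request time
else
  echo "ioping not installed"
fi
```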
I don't want to judge that as my interest in both tools is very limited (re. benchmarking) but your assumption that fio is better because fio can disable the cache is wrong. Both dd and fio (and ioping) can use or (largely) disable the disk cache.
If anything, ioping might be better, because it was meant to be a measuring tool, while dd was meant for other purposes but often gets (ab)used for benchmarking because it also reports its throughput ("speed").
As for fio I personally wouldn't consider it for doing my benchmarks because just like dd fio can be used for that purpose but is (a) not really a benchmark tool, and (b) overkill. For its real purpose though fio is a fine tool.
Sidenote: both dd's and ioping's direct flag/parameter only address OS caching, which is of limited use on servers because those often have hardware RAID with its own cache. In fact, that factor is one of the reasons why I decided to design and implement my own benchmark test.
I'm planning to write an enhanced and extended vpsbench 2.0 (closed source this time) but I'm willing to provide you a library with my disk test mechanics if you are interested. Of course I would also provide a description of what it does and how it works. You could then use that for whatever benchmark you are building (e.g. in Python or whatever). Just contact me if interested.
Might look into this in the future. I'm most familiar with (and all my personal/work scripts are) bash, so that's what I stuck to for this when I wrote it up. I haven't encountered any distros that don't have bash included by default (at least none that I use regularly), but I'd be interested in testing and getting this to work on one if I can identify one (maybe alpine?).
That's pretty much all it was meant for, just to get some basic info on the server that's being analyzed. I wrote it first just to do the iperf tests since every other bench I've seen does single threaded http downloads (or uses speedtest.net -- which I dislike for a few reasons). And I added in geekbench because I run that on most of my machines anyways, so it was more out of convenience.
As for the disk tests -- I agree completely that they are not the best measure and the results can vary widely under different setups, leading to misleading conclusions regarding disk performance. If I could completely leave it out, I would, but I think that it's something that everyone has come to expect to see from a benchmarking script. I pretty much just copied what nench/bench were doing with their speed tests. Was originally using dd for read/write, then thought ioping would give "truer" results for reads, but now I'm not really sure.
I wanted the script to not rely on any external dependencies to be installed on the system (which is quite limiting to what you can use for the tests). I attempted to compile fio and use that instead, but couldn't get it to work. If I can find something better for the disk tests, I'll certainly swap them out. The only requirement I have is that it won't need the end user to install (or compile) anything in order to run it.
I don't try to pretend that I know much about benchmarking a system and charting its performance. I just know what I look for and wrote it up in a script, which others might benefit from. Obviously people that want to get a better idea of server performance should be using something much more sophisticated.
Yes, alpine is an example; its default shell is 'ash', which is a variant of 'sh'. The BSDs also have bash packages/ports but don't have it installed by default (they ship a csh or ksh variant instead), though the BSDs do have an 'sh' installed.
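The portability concern is easy to demonstrate: bashisms like `[[ ]]` and process substitution fail under ash/dash, while POSIX constructs work in all of them. A minimal example of the portable pattern, replacing a bash-only prefix match:

```shell
# POSIX-portable prefix match (works under sh/ash/dash/bash alike),
# in place of the bash-only `[[ $x == y* ]]`:
x="yabs"
case "$x" in
  y*) echo "prefix match" ;;
  *)  echo "no match" ;;
esac
```

Running a script with `dash script.sh` (or on alpine's ash) is a quick way to smoke-test for accidental bashisms.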
Also thanks for explaining what your goal was.
Btw, if you are interested you too can get my new (yet to be done) vpsbench library (or binaries) for a potential new version of your script.
Reading your reply, I think perhaps there is an issue with the meaning of disk performance. You are right: they measure different things and users of benchmarking scripts don't understand what they are measuring. I consider fio important because many applications on servers run on databases and random read write IO performance is crucial.
Your benchmark measures other important disk mechanics (and I don't have the means to account for RAID data pollution) so your tool is very important in a server setting. If it can be compiled and run on debian, I would like to use it in addition to fio to determine true disk read write capabilities.
That would be cool! YABS has become a standard tool for me for lightweight testing on new servers and having more realistic disk IO benchmarks would be awesome.
@MasonR I just roll with homebrew script and install all the dependencies I need. It's my own testing, not for idiot-proof public consumption, so it works fine for me. Thanks for YABS though! It was the basis of my homebrew script.
No, my benchmark measures both sequential and (truly!) random reading and writing, and the new version will add some more capabilities (e.g. effective latency).
And yes, the library (or binary) of my (yet to be done) version 2 I offered will run on Debian, virtually all other Linux distros, and FreeBSD. There will be 4 binaries/libraries: two for Linux and two for FreeBSD, in 32-bit (still in use sometimes) and 64-bit versions. All your scripts have to do is find out what OS they are running on and whether it's 32 or 64 bits, and then use the adequate version of my lib/bin.
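That detection step can be sketched in portable shell. The binary naming scheme below is hypothetical (jsg hasn't published one); only the OS/word-size probing is the point.

```shell
# Pick the matching library/binary variant by OS and word size.
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. linux, freebsd
bits=$(getconf LONG_BIT)                       # 32 or 64
echo "would load: vpsbench-${os}-${bits}"
```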
It's up to @MasonR to decide on that; it's his tool. In case he wants my binary (in his case) he'll get it (or more precisely the 4 relevant versions). But again, you'll have to ask him.
Wow. OK when you are done I will ditch fio and ioping. Right now I am doing extended tests of VPS disk based on these tools. If one tool gets the job done, I prefer one tool for parsimony. Will be good to document how to interpret the results. I don't have the technical chops, but writing a piece of good documentation is something I can help with.
I already do provide a small but reasonable documentation for my current version. For the new version I intend to do even better and in particular to explain the "mechanics" used. But you can probably make it a lot more user friendly; I'm not really good at that.
Fact check: You stated "My results are quite consistently lower ("worse") than those from other benchmarks and that's because I want realistic numbers rather than optimal ones."
Actual fact: You're zero for two. Your results were consistently HIGHER than those from other benchmarks, and your results are nowhere near realistic. You were just proven wrong. I think you have problems with the English language, as you constantly use words to mean the opposite of what they actually mean.
The objective is to run the test and see results indicative of the user's experience. Saying shit about another benchmark and not about your own extremely wrong results is weak sauce. In the end, does the result meet that expectation? In your case, it's a big, fat fail.
Fuck, you are seriously dense. Your numbers report astronomically high results that are not valid. It doesn't reflect any performance a user would experience!!! CAN YOU UNDERSTAND THIS?!?! WHAT LANGUAGE DO I NEED TO SAY THIS IN? FORGET OTHER BENCHMARKS, FIX YOUR OWN SHIT!
Again, not trying to "smear" any other person but you, and you know this but keep repeating it. Don't be dumb. I've told you before that Centminmod is the gold standard of benchmarks (and you look like a basic bitch in comparison) and @poisson does a MUCH better job than you do in every way. You don't even get the importance of comparative testing and flip flop between real world and synthetic testing.
Your benchmark is the worst I have ever seen. It's not even in the same timezone as representative performance. It is VERY CLEAR I do not understand this tool - written by you - because it is nonsensical, and all you can do is attack other benchmarks as being wrong while having no verification of your own.
Right, the guy who couldn't take criticism, makes claims without proof, calls me a liar and then attacks the benchmark once you're proven to be full of shit. You're never wrong, just clumsy. You were "clumsy" when you said your benchmark reports lower than all the others, right?
You are truly, truly mad (as in the apk-insane kind, not in the emotional sense). You attack yabs but have no comment about how nonsensical your program's disk performance reports are? This is Cloud at Cost, notoriously one of the most oversold POS providers there is (they intentionally limit disk performance). And you think the problem is how yabs uses dd... get a grip. Look inward for the problem, not outward.
Here is a new Cloud at Cost FreeBSD benchmark:
tl;dr Your program reports performance MUCH HIGHER THAN ACTUAL, IN NO WAY USEFUL OR REPRESENTATIVE OF THE USER'S EXPERIENCE. 1.676 GB/s is laughable.
Anyone who has a really crappy server can run your shitty benchmark against yabs or any other benchmark and see which one is closer to their user experience. Thing is, no one seems to give a shit about running your program.
@TimboJones
You are amusing me. You can throw as many results at me as you like, that doesn't change the fact that you are - provably and proven by yourself - utterly clueless and do not even know what you are doing and you try to compensate for that by getting ever more aggressive and vulgar.
Just read what the author of yabs himself wrote here.
And again: What have you so far contributed here - besides smearing, attacking and trying to insult users who actually contributed? Why don't you write and publish a good benchmark, Mr. AlwaysKnowsBetter?
You mean honest about what it does and its limitations? Take a hint! I see no explanation for your nonsensical numbers; why don't you stay on topic instead of ignoring your bugs?
Please, tell me where the smear is? Who is being attacked besides someone claiming their benchmark does things better than anyone else's when it is pure fraud? Tell me where I'm wrong; I'm just running your benchmark and getting nothing but garbage results. You have not shown how these results are remotely valid. You have no response for why your app spits out obviously wrong numbers, and that you don't see that is mind-blowing. Like I said, you're just another apk, too blinded by ego to know your shit stinks to everyone but you.
Do you think contributing a useless benchmark is somehow better than nothing? It's worse than having nothing. Worse!
Anyone who uses your benchmark should know it is just utterly useless as is the author.
Stop whining like you're personally being attacked and just fix your nonsensical app.