jsg, the "Server Review King": can you trust him? Is YABS misleading you?
@jsg has been doing a number of reviews with his own benchmarking tool "vpsbench" and was recently given the title of "Server Review King" to honor his contributions. He was also quoted by Contabo on their homepage with a positive statement about their NVMe VPSes.
Having said that, the conclusions in some of his threads were particularly egregious and didn't match up with the tests performed by other members. When questioned about this, jsg claimed that YABS, the script most people on LET use to benchmark VPSes, was wrong.
This prompted me and another forum member to take a deeper look at vpsbench, which should hopefully put some questions to rest: Can you trust vpsbench, and jsg's reviews in general, to accurately benchmark systems? Is YABS inaccurate, and has @MasonR been misleading LowEnders on this forum and elsewhere?
In the interest of transparency and to help other people reproduce what I did, I'll go a bit into what I used to test YABS and vpsbench. I used the latest versions of YABS and vpsbench available:
As for the test VPS, I used an AWS EC2 c5.large instance with a 10GB gp3 disk, which is limited to 3000 IOPS and 125 MB/s, whichever cap is hit first.
Shocked? Isn't this LowEndTalk, where I'd get much more value by using a cheap VPS picked at BF instead of the pricey enterprise-y BS that is AWS?
I agree, but AWS gives you clearly documented guarantees of specific performance levels. c5.* instances are dedicated-core instances, and gp3 volumes with those settings guarantee exactly those performance characteristics. So, this choice eliminates noisy neighbours and any sudden drops in performance due to the provider throttling CPU or IOPS.
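As a back-of-the-envelope sketch of why both limits matter (the arithmetic is mine; 3000 IOPS and 125 MB/s are the gp3 baseline figures, and the 4k/1m block sizes match the disk tests discussed below):

```shell
# gp3 baseline limits: whichever cap is hit first bounds the result.
iops=3000
mbps=125

# At 4 KiB per operation, the IOPS cap binds first:
echo "4k ceiling: ~$(( iops * 4 / 1024 )) MB/s (at $iops IOPS)"

# At 1 MiB per operation, the throughput cap binds first:
echo "1m ceiling: $mbps MB/s (~$mbps IOPS)"

# Crossover block size where the two caps meet:
echo "crossover: ~$(( mbps * 1024 / iops )) KiB per op"
```

So a 4k test should top out around 12 MB/s and a 1m test around 125 MB/s; anything wildly above those ceilings isn't the disk.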
Also, none of the EC2 instances (apart from those named *.metal) support nested virtualization, which means I can also eliminate configuration differences between nodes. With typical LET providers, you can sometimes have inconsistently configured nodes.
vpsbench runs on both FreeBSD and Linux, while YABS is Linux-only. I'll use FreeBSD 12.2-RELEASE-p7 (ami-04d776585c8aa9c80 in us-east-1) and CentOS 8.4.2105 (ami-04d776585c8aa9c80 in us-east-1). AMI is AWS's term for a disk image, and if you're following along at home you can use those disk images to verify they're the official ones and that I didn't pull any shenanigans to favor one benchmarking tool over the other.
First, let's put vpsbench to the test on FreeBSD:
I immediately see an issue with the "Std. Flags" results, which suggest that nested virtualization is available -- as mentioned earlier, AWS simply doesn't allow this on these instances. Looking at the processor flags confirms that AWS doesn't enable nested virtualization on the node:
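For reference, this is the kind of check involved: on Linux you look for the vmx (Intel) or svm (AMD) flags in /proc/cpuinfo. The flags line below is a trimmed, hypothetical sample for illustration, not captured from this instance:

```shell
# Trimmed sample "flags" line; on a real guest you'd grep /proc/cpuinfo.
# An EC2 c5 guest exposes no vmx/svm flag, so nested virt is impossible.
flags="fpu vme de pse tsc msr pae mce cx8 sse sse2 ss ht syscall nx lm avx512f"

if echo "$flags" | grep -qwE 'vmx|svm'; then
    echo "vmx/svm present: nested virtualization possible"
else
    echo "no vmx/svm: nested virtualization unavailable"
fi
```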
Moving on to the disk tests, the "Rd. Seq" value seems about right and is close to the advertised 125 MB/s, but "Wr. Seq" and "Wr. Rnd" seem a bit low for something that's supposed to have consistent performance. What's interesting, though, is "Rd. Rnd": a sky-high value of over 4000 MB/s that doesn't represent any real disk performance. It's more likely that caching or a similar optimization is at play here.
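A >4000 MB/s read from a 125 MB/s volume strongly suggests reads being served from RAM. Here's a crude demonstration of the effect (my own sketch, not anything vpsbench does):

```shell
# Write a 64 MiB file, then immediately read it back. Because the file
# was just written, the read is served from the page cache (RAM), not
# the disk -- which is exactly how a naive benchmark "measures"
# thousands of MB/s. Proper tools bypass the cache with direct I/O
# (e.g. fio's --direct=1 flag).
dd if=/dev/zero of=cache.test bs=1M count=64 2>/dev/null
dd if=cache.test of=/dev/null bs=1M 2>&1 | tail -n 1   # note the reported speed
rm -f cache.test
```

On most systems that second dd reports gigabytes per second, no matter how slow the underlying disk is.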
Even after running these tests multiple times, I see similar values and nothing that reflects the advertised disk performance. Now, AWS could be lying, but that would mean they're also lying to Fortune 500 companies -- companies with plenty of money to sue AWS for false advertising if that were the case.
So before I can reasonably make that allegation, I'll switch over to Linux to see if things are any better over there:
The "Wr. Seq" and "Wr. Rnd" on Linux are close to the advertised disk performance, which is good. But, the "Rd. Rnd" and "Rd. Seq" are way off, at 7000 and 8000 MB/s! I just don't have a reasonable explanation for these values.
Let's now compare it with YABS. I used the -gi flags, which skip the Geekbench and iperf tests, since I don't discuss CPU and network performance here.
These tests are very representative of what AWS advertises. Nested virtualization is correctly detected as disabled, and the disk limits are detected almost perfectly: the 4k test caps out at 3000 IOPS, and the 1m test reports 129.5 MB/s, far closer to the 125 MB/s throughput cap than what vpsbench reports on Linux.
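To put a number on "closer" (my arithmetic, using the figures above):

```shell
# 1m sequential: 129.5 MB/s measured vs the 125 MB/s gp3 throughput cap.
# Values are in tenths of MB/s so shell integer math stays exact.
measured=1295   # 129.5 MB/s
cap=1250        # 125.0 MB/s
echo "YABS 1m result exceeds the advertised cap by ~$(( (measured - cap) * 100 / cap ))%"
```

That's within a few percent of the documented limit, versus vpsbench's read figures that overshoot it by a factor of 50 or more.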
Again, this could be a fluke, so I tried running YABS many times over, and there is no significant deviation in these values.
At this time, you can't trust vpsbench (especially on FreeBSD) to give an accurate representation of CPU capabilities and disk performance. This, unfortunately, means that jsg's reviews, by extension, cannot be trusted either.
I generally don't like to assume ill intent, but the fact that @jsg is highly defensive, doesn't take criticism well, and often makes unverified claims about other tools casts a negative light on how much you can trust his reviews. There are other issues with his reviews too, such as a lack of controlled variables -- comparing three different providers with different CPU caps and performance characteristics -- which leads him to claims such as "AMD is not really light years ahead in performance [over Intel]", but that's a conversation for another day.
YABS, for all intents and purposes, by virtue of being a script that relies on well-tested software like fio, Geekbench, and the Linux kernel, plainly does a better job of representing CPU capabilities and disk performance accurately.