Comments
All from Contabo's Duesseldorf DC
Ca. 16.5 ms
Ca. 14.7 ms
Ca. 24 ms
You are welcome.
They're still in the stone age as far as support goes
Haha, yes, kind of. But otoh I'll certainly not complain about a provider not running on WHMCS. Also I think one needs to see their mindset. I guess it's something along the lines of "Well, the way we do things was damn good enough to become one of Germany's largest providers".
The good news is that they are on their way to modernize some things and to fulfill the wishes of their customers. But ... oh well ... that takes time, as things with a security aspect always do, especially in large companies (iirc they are over 100 people).
Can someone confirm that nested virtualization (svm cpu flag on AMD) is indeed enabled on the production systems? @jsg mentioned the hypervisor flag is enabled on his test systems, but I’m not sure this is the same as svm and vmx flags?
Contabo support told me nested virtualization is not possible. Maybe they are misinformed?
nope it's turned off. Tho you may get it accidentally enabled like me, but it's rare.
No, it's not the same. The hypervisor flag indicates whether a system is a VM - which of course all VPSs are, but not all providers play it straight; some do not show that the VPS is a VM.
svm (AMD) and vmx (Intel), otoh, show whether your VM can run VMs on it.
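To check for yourself on any Linux VPS, a quick sketch with standard tools (nothing provider-specific):

  # does the guest see itself as a VM? (hypervisor flag)
  grep -m1 -o hypervisor /proc/cpuinfo
  # can the guest itself host VMs? (svm = AMD, vmx = Intel)
  grep -m1 -oE 'svm|vmx' /proc/cpuinfo
  # lscpu summarizes both: "Hypervisor vendor:" vs. "Virtualization:"
  lscpu | grep -iE 'virtualization|hypervisor'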
So, the summary of this is that the reviewer was able to test without any kind of restrictions imposed on the servers and everyone else that buys will get slower results?
Something like that. You could try buying it too and then compare whether your result is similar to the posted one or not.
So what is wrong with the hypervisor flag? It seems your comments contradict each other.
Also, why blame a working script when you can't even tell the difference between the CPU flags that indicate virtualization support and the hypervisor flag?
IMHO, we cannot have a valid comparison unless we use the same process to produce the result. So far, AFAIK, the only results we can compare were produced using yabs and nench, as @dnnr did.
I have ordered one. I'll run the tests when I receive it.
@laoban said:
Which will be the tests that I will run. Unless another script can be provided.
Well my yabs test is pretty much the same as @dnnr
## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ##
Yet-Another-Bench-Script
v2021-06-05
https://github.com/masonr/yet-another-bench-script
## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ##
Sun 22 Aug 2021 09:44:49 PM CEST
Basic System Information:
Processor : AMD EPYC 7282 16-Core Processor
CPU cores : 6 @ 2794.750 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ❌ Disabled
RAM : 15.6 GiB
Swap : 2.0 GiB
Disk : 97.9 GiB
fio Disk Speed Tests (Mixed R/W 50/50):
Nench
nench.sh v2019.07.20 -- https://git.io/nench.sh
benchmark timestamp: 2021-08-22 19:51:07 UTC
Processor: AMD EPYC 7282 16-Core Processor
CPU cores: 6
Frequency: 2794.750 MHz
RAM: 15Gi
Swap: 2.0Gi
Kernel: Linux 5.4.0-81-generic x86_64
Disks:
sda 100G SSD
CPU: SHA256-hashing 500 MB
4.734 seconds
CPU: bzip2-compressing 500 MB
4.792 seconds
CPU: AES-encrypting 500 MB
1.086 seconds
ioping: seek rate
min/avg/max/mdev = 38.5 us / 61.1 us / 11.4 ms / 53.8 us
ioping: sequential read speed
generated 33.3 k requests in 5.00 s, 8.12 GiB, 6.65 k iops, 1.62 GiB/s
dd: sequential write speed
1st run: 953.67 MiB/s
2nd run: 1335.14 MiB/s
3rd run: 1430.51 MiB/s
average: 1239.78 MiB/s
This is from a server in Germany.
Sorry, but I would have to conclude that the server used for testing by @jsg was provided without any restrictions and is therefore not remotely what customers will be getting.
Can you also post "lscpu"? It's built in, just type it into the terminal.
@jsg, Mr. Server Review King, Contabo did exactly what I told you. Here's more evidence: the exact same IOPS limit, 1:1.
It is not about your server being on an empty node; they simply unlocked your server, and you gave them sales in return for testing servers for them. Great deal. I'm not blaming you, because Contabo did it on purpose and you lost time benchmarking something that doesn't exist for consumers. The flags are also different; they don't allow virt. The scripts are not wrong, and even their own support confirmed that.
A company wants to earn money and will do anything for it, especially when playing with low margins where every saving matters. Some providers won't do RAID, some will put your RAM on SSD (SSDNodes - I was stupid to believe them, but the refund went fine) or terminate your hungry apps.
Small premium providers like NexusBytes can provide good service, because they don't have such low margins so they don't need to pull any dirty tricks to survive.
Hetzner's strategy is to invest only when it's safe. They won't expand their servers across the entire world if that would mean cutting corners. That's why we can't really expect US/Asia servers from them in the near future. Netcup is the same: only Germany, and a focus on long-term customers. Other providers like Vultr expanded rather quickly, but they cost more than the ones mentioned before. Contabo is expanding very fast with bottom pricing, and their technique is to upsell.
And please, don't tell me "everyone is upselling". Upselling is a technique to optimize costs and there's nothing wrong with that, as long as you don't have something like 40% steal. It's as if you bought a plane ticket, they put you on a bus, and you thought that was normal because the ticket was cheap. Both will get you to the same place, but not in the same time, and you expected completely different speed and comfort.
A 5.8K hard IOPS limit on NVMe is a pure joke. Contabo's page is all "look at our new NVMe drives, 1 million IOPS with a single drive!!!". Providers with basic consumer-grade SSDs outperform that; it's free-tier-class performance. Sequentials are a different story, but it's hard to imagine even 5 GB/s being the limitation with 4 cores and major steal; if the node weren't capped, you wouldn't hit those speeds anyway, because other people would also be using that crazy amount of bandwidth. A single good PCIe 4.0 drive does just 7 GB/s. Of course they RAID them, but how many customers could they fit onto a single box? Not that many. And if they don't put that many on it, then why limit IOPS to 5.8K when a single drive does 1M? They fish with one flashy number that nobody actually needs; that's the truth.
I don't think you wanted to do any harm; you were just unaware of what a long-standing company can do. Remember that not everyone is like seriesn. Not everyone tells the truth.
The thing is... if Contabo didn't explicitly say that this server is 1:1 the same as the product consumers get, you can't really blame them either. They will just say they wanted to test how their node performs and that production differs from dev. I won't defend them though, as their communication is unclear and consumers get a different product than they expected.
I don't trust Contabo because of my past with them, and this time I was right. They didn't change.
Have a great day/night. I hope this situation won't make you angry and that you'll keep doing what you are comfortable with. Just look out for people/companies that want to use your authority.
US East
IF that was true - and I still doubt it - then that review would be the last one I did for Contabo.
Let me quote what I just recently wrote in a PM to another provider:
THAT is my attitude and my basis and I'm clearly communicating that I want to benchmark systems exactly as they are sold to customers.
Any provider cheating me - and hence all of us - will pay a price and have a new enemy.
More evidence of the 5.8K hard IOPS limit and of virt not being enabled.
Can you post the result of the "lscpu" command, please?
Hard to believe, but now I'm sure that's the case. There's no way that every yabs result is wrong and support is wrong too. If someone posts an lscpu output with fewer flags than your server shows, there is no excuse. lscpu is an official Linux tool; if its output differs, then things really are different.
Just for comparison, these are results from the "SSD" product, which should have had its IOPS limit increased (but it seems they didn't do it, and/or they "forgot" to enable it again after migration. Quality support.)
So to recap.
@jsg posts a review for Contabo. Anyone with a little common sense, or anyone who knows Contabo, would immediately know there is no way those numbers could be right in the sense of being what you as a customer would see.
He can't believe Contabo would "prepare" a test server without limits, knowing it would be used for a review. He then proceeds to rubbish other benchmarks as wrong. Which they are not.
Server Review King...
I mean Contabo were a bit sketchy in the past but they are actually quite decent. Even if Oakley Capital have a stake.
No svm flag. No virtualization.
@jsg there's no way even lscpu is wrong. Too much evidence. Contabo clearly unlocked your VPS.
Maybe @0xbkt would run your benchmark script if you give it to him, if you want to be 1000% sure.
I knew there would be an IO limit the moment they announced it, but I hoped it would be at least 8K IOPS; instead it's in Oracle Free Tier territory lol
I think in the future you should do it differently. Instead of getting a VPS from the provider, ask them whether they will instantly refund your order after the tests. Order the VPS, test, and get your money back. It will still be free for you, and it greatly decreases the chance of this kind of situation happening again (they would need to unlock all VPSes or doxx you). I don't know if you're comfortable reviewing VPSes at all after this situation though; just my 5 cents if you want to continue your work.
1.3K is normal for the SSD line. They didn't unlock shit, or they capped you again after some time.
They don't want serious customers with big databases or game servers, and that's the main reason they limit IO so much. They prefer customers running backups or websites, as those don't need that much performance, so they can oversell CPU like crazy and people won't notice. On a game server you notice a small lag instantly; on a web server, +200 ms for 5 minutes is nothing - you won't even notice it because everything is cached.
Also no DDoS Protection = no game servers xD
First I need and want to apologize to @MasonR because I said something wrong about his script when I hinted that his "does the system support running VMs?" flag was false. It is not; it is correct, I verified it in his source. I simply got confused myself (and was tired), although I do know the difference and in fact explained it correctly in another thread.
So: The yabs 'VM-x/AMD-V' info is correct. Again, apologies to MasonR.
I think I've found the answer to the puzzle: I hate to say it (because I actually always liked and valued MasonR) but at least the disk tests in yabs are basically worthless. Let me explain why:
His test is based on fio with 4 tests, each for 4k, 64k, 512k, and 1M block size (which is perfectly fine). Each of those "sub tests" ...
Put together, his way of testing disk performance is gravely lacking in precision and granularity and depends way too much on processor and memory performance. Testing one and the same drive, yabs' disk test will show one set of results on, say, a 1 vCore, 512 MB VM and quite different results on, say, a 4 vCore, 4 GB VM, simply because the latter has the memory and the vCores to actually run 4 threads and 64 "IO units".
Plus, the numbers have little value because they are all in unbuffered mode and mixed read and write and represent something like a completely mangled up "result number" which only formally differentiates between reads and writes and which strongly depends on the type of VM (number of vCores and memory). And to top it off you only get to see random reads and writes.
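For contrast, here's roughly what a run that asks one narrow question could look like - a single synchronous job at queue depth 1, pure random read, unbuffered (parameters are illustrative, not taken from any script):

  fio --name=qd1-randread --filename=./fio.test --size=1G \
      --rw=randread --bs=4k --numjobs=1 --iodepth=1 \
      --ioengine=psync --direct=1 --runtime=30 --time_based

That answers "how fast can one ordinary process read 4k blocks from this disk?" instead of mixing reads, writes, queue depth, and thread count into one number.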
TL;DR yabs' disk tests provide a crude rule-of-thumb guesstimate at best. The very same disk tested on another system will sometimes show very different results.
If you still have the test VPS - could you test yours with yabs? Not because it's a perfect benchmark, but if everyone else's IOPS are limited to 5.8K at 4K block size and your result comes out higher, then there's no way that's just variation or anything like that.
Or just post some command for testing I/O that is good in your opinion, and then we can compare.
@jsg quite a lot to unpack and on mobile… but for a start, what makes you think YABS is doing buffered I/O? It clearly passes --direct=1 to fio for unbuffered I/O.
Oops, sorry, you are right. I somehow wrongly remembered it has --direct=0. Apologies.
Edit: Corrected in my above post.
I can't, because FreeBSD doesn't really support fio. I did, however, do what I could: I ran fio with exactly the yabs parameters on 2 local VirtualBox VMs, a large one (8 vCores, 16 GB) and a "small" one (1 vCore, 1.5 GB), with the same up-to-date MX Linux (with XFCE4) and a virtual disk on the same host NVMe.
First, a "funny" observation: fio couldn't be run on the small VM with 512 MB memory, and not even with 1 GB. I had to increase the memory to 1.5 GB to run fio on the "small" VM at all, because it seems fio needs at least 512 MB of free memory to run with the yabs parameters.
Here's the fio command as copied from the yabs code:
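(Reconstructed here for the 4k case from the parameters discussed in this thread - mixed 50/50 random read/write, --direct=1, 4 jobs, iodepth 64; the test file size is a guess, and the exact invocation is in the yabs source:)

  fio --name=rand_rw_4k --filename=./test.fio --size=2G \
      --rw=randrw --rwmixread=50 --bs=4k \
      --iodepth=64 --numjobs=4 --ioengine=libaio \
      --direct=1 --runtime=30 --time_based --group_reporting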
Note: I only ran the 4k test, and while it ran the system wasn't used for anything else.
and here is the system info:
Machine: amd64, Arch.: x86_64, Model: AMD Ryzen 7 PRO 4750G with Radeon Graphics
OS, version: Linux 5.8.0, Mem.: 15.689 GB [and 1.461 GB]
CPU - Cores: 8 [and 1], Family/Model/Stepping: 23/96/1
Cache: 32K/32K L1d/L1i, 512K L2, 8M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 cflsh mmx fxsr sse sse2 htt sse3 pclmulqdq ssse3 cx16 sse4_1
sse4_2 popcnt aes xsave osxsave avx rdrnd hypervisor
Ext. Flags: syscall nx mmxext fxsr_opt rdtscp lm lahf_lm cmp_legacy cr8_legacy
lzcnt sse4a misalignsse 3dnowprefetch
Result on the large VM:
Result on the small VM:
Note that using the same disk, the same up-to-date Linux distro, and the same command the IOPS are drastically different (about 40%).
So this 24.0k is unbuffered etc., just like yabs? 33k/24k is a lot better than what the people above got.
I can't really say anything more about fio, I'm no expert on this, so you tell us the conclusion: is there a difference between your results and other people's VPSes?
Yes, but this number was achieved on a modern fast Ryzen and a very fast NVMe.
fio is not a problem per se. The problem is (a) its complexity, which risks leading to it being used wrongly or nonsensically, (b) the way yabs uses it, which is related to (c) the fact that benchmarking looks easy but isn't.
I'm convinced that @MasonR meant it well with his "fire from all guns" but at the end of the day his results carry very little practical meaning.
Humans/customers are different and so are their needs. Some look for a fast database server, others look for a small "not too slow" VM, yet others look for a good "write few, read many" server - to which yabs (and others) provide no meaningful answer.
I wrote vpsbench for a good reason: I was and am a user/customer myself and I wanted a tool providing meaningful numbers to me. Relatively few programs use asynchronous disk IO, so my benchmark doesn't use it either but rather uses normal (synchronous) disk IO. Many, probably most, VMs sold are single vCore, at least among the LET clientele (most sub-$5 VMs have a single vCore or 2 vCores at best), and it usually makes no sense to throw more (intense) threads at a disk than one has vCores, especially on VPSs, which only have shared vCores anyway (read: a fraction of a HWT). Plus it's usually smarter to use multi-threading for network IO, which involves orders of magnitude more IO wait than even slow spindles (vpsbench does use multi-threading in the network tests).
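As a rough illustration of that philosophy - not vpsbench's actual code, just plain single-threaded synchronous IO with standard tools:

  # single-threaded synchronous sequential write, bypassing the page cache
  dd if=/dev/zero of=./ddtest bs=1M count=512 oflag=direct
  # single-threaded synchronous sequential read, likewise uncached
  dd if=./ddtest of=/dev/null bs=1M iflag=direct
  rm ./ddtest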
To provide an example of how deep one must go into the matter when designing a good and actually real-world useful benchmark:
Recently a provider (who is on LET and provides the upstream to another LET provider whom I tested) seems to have felt that my network tests weren't doing him justice, because he had put quite some effort into finding (and then using) the best congestion control algorithm and my benchmark seemed not to reflect that.
Sounds reasonable, and I'm always willing to go the extra mile, so I started a relatively big test series with the VM OS using some of the major CCAs (New Reno, Vegas, CUBIC, and one or two even lesser-known ones).
The result? Both Linux and FreeBSD already use sensible CCAs by default (CUBIC and New Reno), plus the potential differences weren't that significant anyway.
How many here even know what a CCA is? Likely not many. How many know how to choose and then configure the best CCA for their situation? Highly likely even fewer. Hell, if you want to learn more about them than just scratching the surface, you'll end up reading academic papers (I did). Plus, except for a few rather rare situations, switching to another CCA will gain you very little in actual connectivity, unless you happen to own global or at least cross-continental fiber, or at least wavelengths.
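For the curious: on Linux the CCA is one sysctl away (FreeBSD keeps the equivalent under net.inet.tcp.cc):

  # show the current and the available congestion control algorithms
  sysctl net.ipv4.tcp_congestion_control
  sysctl net.ipv4.tcp_available_congestion_control
  # switch, e.g. to cubic (root; some CCAs need their kernel module loaded first)
  sysctl -w net.ipv4.tcp_congestion_control=cubic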
TL;DR The art of creating a benchmark and of benchmarking is largely the art of bringing quite extensive and solid knowledge down to a level that is easily understandable, of practical use and versatile enough to be usable for uncommon situations.
I'm not saying that my benchmark is the only true and good one, but it's something that delivers what all the scripts I know of don't deliver.
Btw, as some smirk at it: It wasn't me who coined the tag "Server Review King". @jbiloh did - and I'm absolutely sure that he had the best and friendly intentions. I myself would have chosen a more modest tag. But I'll certainly not complain about a man with friendly and good intentions putting a bit too much cream on top of the cake.