Contabo's new NVMe product line: raw speed benchmark/review
Contabo / @contabo_m have created a new product line, NVMe Epyc VPS - but not with just any NVMes, nope. The NVMes in those VPSes are terrific speed demons, about 3 times the speed of anything I've seen so far.
Before I present the results I need to mention two "BUTs": (a) The benchmark results presented here are based on pre-launch tests, that is, they were largely made under optimal, not real-world, conditions.
And (b) - and here a BIG THANK YOU goes to Contabo: those ideal results are not what you get to see, because who cares about wet dreams? As 'NVMe' is the headline feature of this new product line I wanted to see realistic results, so I asked Contabo for 5 additional VMs on the same node my German test VM was on - and they were friendly enough to comply! Imagine that ... provider: "et voilà, here's your free test VM" (plus one in each of their other locations) ... me: "thanks, but I'd like 5 more, and I want them to torture the living sh_t out of your product" ... them: "torture our test VM??? Sure, here you go, 5 additional VMs have been provided".
That's chutzpah! That's a provider who has confidence in their product! So again, Thanks a lot Contabo and kudos for accepting my torture challenge! (grin).
Originally my idea was to have the 5 - beefy, mind you - extra VMs create the kind of load that is realistic on a node. And yes, it worked fine, but then ... a voice in my head said "boring! My benchmark program has all those nice parameters and everything needed to really torture the disk(s). Let's go amok!" ... and (blush) so I did. While the main benchmark ran, the 5 other VMs - again, all on the same node - were pushing (mumble) millions of single-sector (4k) writes, unbuffered, sync/direct of course (after all, this was about NOT playing nice) ... then more millions of 1k writes ... then tens of millions of 512-byte writes ... and then - because that damn NVMe, while getting slower of course, still refused to go down on its knees - a few (mumble, mumble) tens of millions of 256-byte writes.
Well, I can report success. I finally did manage to push it down to the results I got from the 2 or 3 fastest NVMes benchmarked before, in the 200 MB/s range. The worst I could do was even lower, about 175, but that was just a glitch; I must have mistyped a parameter so as to use 256-byte writes on the main VM too ...
The problem with that, though, is that it's like proudly declaring victory after breaking a family sedan by dropping not 2- but 10-thousand-pound concrete slabs on its roof - while it's driving.
Running the main benchmark in its normal mode - while 5 other VMs on the node were hammering out unbuffered writes, that is, simply simulating an occupied node in normal use ... BANG, I was back in the 600 to 650 MB/s region. Polite reminder: the bloody fastest NVMes tested before these beasts only occasionally (and rather rarely) crossed the 200 boundary. And trust me, those already were awfully fast NVMes.
Here's the processor and system info, plus the processor and memory results:
Machine: amd64, Arch.: amd64, Model: AMD EPYC 7282 16-Core Processor
OS, version: FreeBSD 12.2, Mem.: 15.982 GB
CPU - Cores: 6, Family/Model/Stepping: 23/49/0
Cache: 64K/64K L1d/L1i, 512K L2, 16M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 cflsh mmx fxsr sse sse2 htt sse3 pclmulqdq ssse3 fma
cx16 sse4_1 sse4_2 popcnt aes xsave osxsave avx f16c rdrnd
hypervisor
Ext. Flags: syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm lahf_lm cmp_legacy
cr8_legacy lzcnt sse4a misalignsse 3dnowprefetch osvw perfctr_core
ProcMem SC [MB/s]: avg 343.5 - min 326.0 (94.9 %), max 347.9 (101.3 %)
ProcMem MA [MB/s]: avg 957.8 - min 902.2 (94.2 %), max 1070.0 (111.7 %)
ProcMem MB [MB/s]: avg 1013.9 - min 972.5 (95.9 %), max 1090.0 (107.5 %)
Obviously the meeting in which they decided which flags to pass through was very short and went like this: "Nothing to decide. Just pass all of them. Have a nice day everyone". AES? popcnt? hypervisor? Yes, all available.
Except for the network results, which by their very nature differ per location, I only show the processor/memory and disk results for one location. Simple reason: give or take a percent or two, all nodes in all locations are the same.
The results are decent, typical for the Epyc range, and I like that 6-vCore performance is about 3 times that of a single core. Decent too. The spreads are good as well, but for a final verdict I'll wait until I have gathered about 200 runs after the official product launch.
Now to the main attraction, the NVMe disk results. I'll be frank: what you see here aren't the original benchmark numbers but an artificial - yet realistic - picture created by calculations based on having both the 5 "background" VMs and the test candidate hammer the disks way harder than what I consider realistic in everyday use. Call me mistrusting, but that's exactly how I look at pre-launch benchmarking. After all, we are all interested in what we can expect in full production and not in "I have the whole node to myself" mode. Chances are good that what we'll see in real use after launch will be better than what I present here.
--- Disk - Buffered --- (best case)
Write seq. [MB/s]: avg 3158.01 - min 1759.32 (55.7%), max 3888.48 (123.1%)
Write rnd. [MB/s]: avg 6280.50 - min 5051.66 (80.4%), max 7677.57 (122.2%)
Read seq. [MB/s]: avg 4180.45 - min 2720.73 (65.1%), max 4997.83 (119.6%)
Read rnd. [MB/s]: avg 8104.85 - min 5702.65 (70.4%), max 9407.96 (116.1%)
(worst case, probably with some other tests running)
Write seq. [MB/s]: avg 1965.07 - min 501.16 (25.5%), max 3766.08 (191.7%)
Write rnd. [MB/s]: avg 6015.16 - min 1269.46 (21.1%), max 6946.28 (115.5%)
Read seq. [MB/s]: avg 3710.79 - min 1591.32 (42.9%), max 4728.05 (127.4%)
Read rnd. [MB/s]: avg 7451.19 - min 5755.20 (77.2%), max 9326.81 (125.2%)
Frankly, I put those results here mainly for completeness. Boring - of bloody course NVMes in buffered mode on a modern Unix OS show nice results even when driven hard, in particular on a VM with plenty of memory.
Here's the really interesting stuff:
First, to underline my last point, the buffered-mode results - but with the 5 "background" VMs hammering the NVMe while the main test runs:
[D] Total size per test = 2048.00 MB, Mode: Buf'd
[D] Wr Seq: 2856.96 MB/s
[D] Wr Rnd: 3705.80 MB/s
[D] Rd Seq: 3067.82 MB/s
[D] Rd Rnd: 6724.96 MB/s
Now, on to the unbuffered direct/sync results, because that's where the truth comes to light.
[D] Total size per test 4096.00 MB, Mode: Sync
[D] Wr Seq: 600 MB/s
[D] Wr Rnd: 600 MB/s
[D] Rd Seq: 2400 MB/s
[D] Rd Rnd: 5500 MB/s
First note that the size is 4 GB (instead of the usual 2 GB). Plus, the writes are 512 bytes each (instead of the usual 4096 bytes, that is, a "sector" on modern devices). The 5 "background" VMs also hammer the disk with 512-byte writes, but with a lot more slices, so as to make sure that the drive is really hammered hard while being tested.
Using the parameters I normally use when benchmarking a drive, the sequential write result goes to over 900 MB/s!
So, unless Contabo heavily oversells their nodes (which I assume they do not) you should experience about 3 times the speed of what you get from other providers (some of which use NVMes that are not slow at all either). An AMD Zen 2 server processor helps too, of course, e.g. by providing plenty of PCIe lanes, and fast ones at that.
Drive TL;DR: This is the kind of VM you definitely want for databases, heavily dynamic web sites, and the like!
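My benchmark program isn't public, but the kind of load described above - many small, unbuffered, synchronous writes - can be roughly approximated with GNU dd (a sketch, not the actual tool: oflag=dsync forces every write to stable storage; O_DIRECT would be closer still but is pickier about alignment and filesystem support; the file name and counts are illustrative only):

```shell
# Approximate the "hammering" load: many small synchronous writes.
# bs/count are illustrative; the tests above used 512-byte (and smaller) writes.
bs=512
count=2048
dd if=/dev/zero of=bench.tmp bs=$bs count=$count oflag=dsync conv=fsync 2>/dev/null
written=$((bs * count))
echo "wrote $written bytes in $count synchronous writes"
rm -f bench.tmp
```

Timing a loop of such runs (and multiplying the counts by a few orders of magnitude, spread over several VMs) gives an idea of how the "torture" load was shaped.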
Now, on to the network results, Seattle location first
US LAX lax.download.datapacket.com [F: 0]
DL [Mb/s]: avg 177.8 - min 162.8 (91.5%), max 187.7 (105.5%)
Ping [ms]: avg 33.4 - min 32.9 (98.4%), max 34.2 (102.2%)
Web ping [ms]: avg 34.3 - min 33.0 (96.2%), max 99.3 (289.4%)
NO OSL speedtest.osl01.softlayer.com [F: 8]
DL [Mb/s]: avg 31.2 - min 0.0 (0.0%), max 34.6 (110.9%)
Ping [ms]: avg 179.4 - min 4.2 (2.3%), max 184.2 (102.7%)
Web ping [ms]: avg 184.1 - min 4.2 (2.3%), max 192.4 (104.5%)
US SJC speedtest.sjc01.softlayer.com [F: 0]
DL [Mb/s]: avg 219.7 - min 209.1 (95.2%), max 232.1 (105.7%)
Ping [ms]: avg 23.4 - min 23.3 (99.4%), max 25.9 (110.5%)
Web ping [ms]: avg 42.3 - min 23.3 (55.1%), max 1382.4 (3267.9%)
AU MEL speedtest.c1.mel1.dediserve.com [F: 0]
DL [Mb/s]: avg 29.6 - min 27.4 (92.5%), max 34.1 (115.2%)
Ping [ms]: avg 200.6 - min 181.2 (90.3%), max 204.2 (101.8%)
Web ping [ms]: avg 203.4 - min 183.3 (90.1%), max 208.0 (102.3%)
JP TOK speedtest.tokyo2.linode.com [F: 0]
DL [Mb/s]: avg 54.0 - min 51.3 (95.0%), max 57.8 (107.0%)
Ping [ms]: avg 118.8 - min 117.4 (98.8%), max 147.9 (124.5%)
Web ping [ms]: avg 119.0 - min 117.4 (98.6%), max 150.9 (126.8%)
IT MIL speedtest.mil01.softlayer.com [F: 0]
DL [Mb/s]: avg 38.0 - min 35.5 (93.5%), max 40.6 (107.0%)
Ping [ms]: avg 158.5 - min 158.4 (99.9%), max 159.6 (100.7%)
Web ping [ms]: avg 171.0 - min 158.4 (92.6%), max 1196.6 (699.9%)
TR UNK 185.65.204.169 [F: 0]
DL [Mb/s]: avg 32.6 - min 28.7 (88.0%), max 34.4 (105.5%)
Ping [ms]: avg 199.1 - min 196.9 (98.9%), max 221.8 (111.4%)
Web ping [ms]: avg 199.5 - min 196.9 (98.7%), max 222.3 (111.4%)
FR PAR speedtest.par01.softlayer.com [F: 18]
DL [Mb/s]: avg 37.1 - min 0.0 (0.0%), max 43.4 (117.0%)
Ping [ms]: avg 142.7 - min 142.5 (99.9%), max 148.6 (104.2%)
Web ping [ms]: avg 170.5 - min 142.5 (83.6%), max 1455.7 (853.9%)
SG SGP mirror.sg.leaseweb.net [F: 0]
DL [Mb/s]: avg 32.5 - min 30.7 (94.6%), max 34.4 (106.0%)
Ping [ms]: avg 193.7 - min 193.5 (99.9%), max 211.2 (109.0%)
Web ping [ms]: avg 206.8 - min 193.5 (93.6%), max 1033.7 (499.9%)
BR SAO speedtest.sao01.softlayer.com [F: 0]
DL [Mb/s]: avg 32.8 - min 31.5 (96.0%), max 34.7 (105.7%)
Ping [ms]: avg 184.2 - min 183.9 (99.9%), max 184.5 (100.2%)
Web ping [ms]: avg 191.6 - min 184.0 (96.0%), max 947.5 (494.4%)
IN CHN speedtest.che01.softlayer.com [F: 26]
DL [Mb/s]: avg 21.9 - min 0.0 (0.0%), max 26.7 (121.8%)
Ping [ms]: avg 240.9 - min 240.1 (99.7%), max 257.9 (107.1%)
Web ping [ms]: avg 253.1 - min 240.1 (94.9%), max 1333.5 (526.9%)
GR UNK speedtest.ftp.otenet.gr [F: 56]
DL [Mb/s]: avg 24.3 - min 0.0 (0.0%), max 36.5 (149.8%)
Ping [ms]: avg 123.1 - min 0.0 (0.0%), max 188.2 (152.9%)
Web ping [ms]: avg 135.9 - min 0.0 (0.0%), max 1339.7 (986.0%)
US WDC mirror.wdc1.us.leaseweb.net [F: 0]
DL [Mb/s]: avg 79.0 - min 73.9 (93.5%), max 84.3 (106.6%)
Ping [ms]: avg 75.6 - min 75.4 (99.7%), max 87.0 (115.1%)
Web ping [ms]: avg 75.8 - min 75.4 (99.5%), max 87.0 (114.8%)
RU MOS speedtest.hostkey.ru [F: 0]
DL [Mb/s]: avg 31.6 - min 28.8 (91.1%), max 33.5 (106.1%)
Ping [ms]: avg 196.2 - min 192.9 (98.3%), max 243.0 (123.8%)
Web ping [ms]: avg 196.9 - min 193.1 (98.1%), max 244.8 (124.3%)
US DAL speedtest.dal05.softlayer.com [F: 2]
DL [Mb/s]: avg 95.4 - min 0.0 (0.0%), max 105.4 (110.5%)
Ping [ms]: avg 58.7 - min 57.3 (97.6%), max 147.1 (250.6%)
Web ping [ms]: avg 71.9 - min 57.4 (79.8%), max 1319.2 (1835.0%)
UK LON speedtest.lon02.softlayer.com [F: 4]
DL [Mb/s]: avg 42.1 - min 0.0 (0.0%), max 46.5 (110.6%)
Ping [ms]: avg 138.1 - min 137.9 (99.9%), max 138.4 (100.2%)
Web ping [ms]: avg 155.0 - min 137.9 (88.9%), max 1389.3 (896.1%)
US NYC nyc.download.datapacket.com [F: 0]
DL [Mb/s]: avg 84.7 - min 73.2 (86.5%), max 89.8 (106.0%)
Ping [ms]: avg 74.0 - min 73.0 (98.6%), max 80.8 (109.2%)
Web ping [ms]: avg 74.8 - min 73.1 (97.7%), max 99.8 (133.4%)
RO BUC 185.183.99.8 [F: 0]
DL [Mb/s]: avg 32.5 - min 30.1 (92.4%), max 35.4 (108.7%)
Ping [ms]: avg 189.9 - min 186.2 (98.0%), max 197.6 (104.0%)
Web ping [ms]: avg 192.1 - min 186.2 (97.0%), max 248.3 (129.3%)
NL AMS mirror.nl.leaseweb.net [F: 0]
DL [Mb/s]: avg 41.0 - min 38.1 (92.9%), max 42.4 (103.4%)
Ping [ms]: avg 153.6 - min 153.5 (99.9%), max 154.2 (100.4%)
Web ping [ms]: avg 154.1 - min 153.5 (99.6%), max 157.5 (102.2%)
CN HK mirror.hk.leaseweb.net [F: 0]
DL [Mb/s]: avg 43.5 - min 41.3 (94.9%), max 45.3 (104.1%)
Ping [ms]: avg 141.1 - min 140.8 (99.8%), max 180.5 (128.0%)
Web ping [ms]: avg 141.2 - min 140.8 (99.7%), max 180.8 (128.0%)
DE FRA fra.lg.core-backbone.com [F: 0]
DL [Mb/s]: avg 42.7 - min 41.5 (97.2%), max 44.2 (103.6%)
Ping [ms]: avg 145.4 - min 145.2 (99.9%), max 146.5 (100.8%)
Web ping [ms]: avg 147.1 - min 145.2 (98.7%), max 150.8 (102.5%)
170 - 220 Mb/s to California, nice. About 95 Mb/s to Dallas, OK; 75 - 85 Mb/s across the whole country to the East Coast, decent I guess. About 30 to slightly above 40 Mb/s to Europe, meh ... If at least cross-Pacific were great, but alas it isn't; 32 Mb/s to Singapore is nothing to write home about, but almost 55 Mb/s to Tokyo seems decent.
Comments
-- part 2 --
Now the St. Louis location
About 120 Mb/s to California, 185 Mb/s to Dallas, and 185 to 215 Mb/s to the East Coast - yay, that's decent. Predictably about 10 Mb/s less to Asia, but slightly over 40 Mb/s to Brazil (home of one of the largest IXs worldwide!) is acceptable. And the major European targets at about or even over 50 Mb/s, nice.
If I lived in the USA or wanted a site or service with decent connectivity to the whole country, this is the location I'd choose.
Now on to NYC (which btw for some reason had slightly better disk results than the other DCs)
If I lived in the USA and that was my primary focus, I'd pick the St. Louis DC - but I do not. I look with European eyes, and my focus, besides Europe itself, is on "in which of the Contabo DCs do I get the best balance?", meaning min. 50 Mb/s and preferably more like 70+ Mb/s to both the major US targets and the major European targets. Contabo's NYC DC would be my choice.
80 to 95 Mb/s to California, ca. 130 Mb/s to Dallas, (not surprisingly) 300+ Mb/s to Washington DC - but also 75 to 80 Mb/s to the big 3 in Europe (AMS, FRA, LON) - looks quite attractive. Nice, I like those results.
-- part 3 --
Finally, to the German DC
Obviously, this DC would be my personal choice; 160 ms round-trip latency can be unnerving in an SSH connection. And yes, of course I love to see all major European targets offering 250 to even 350 Mb/s! The East Coast targets are still quite decent (70 - 80 Mb/s), but Asia is a weak point. Maybe Contabo should talk to Tata to at least get decent connectivity to Singapore, from where the rest can be decently reached.
But, well, this isn't really a network review; most of us know Contabo's network and its strong and not-so-strong points. Nor is it a processor performance review; the Epycs, unlike the Ryzens, aren't meant for best desktop performance but for really decent server performance, and for min. 95% of all servers in actual use one would have a very hard time convincing me that a ca. 350 ProcMem score isn't good enough and that a Ryzen was needed.
Nope, this is about Contabo's new NVMe product line. I might be wrong, but the way I see it is "it's Contabo, with its pros and a few cons, but now with frighteningly fast drives" - and that they did achieve. Oh boy, did they!
Very well done, Contabo, and thanks again for your patience with my hardcore torture-testing of your drives.
Final note: I also have a benchmark running in the Singapore location, but as that started a bit later I'll append it later, along with the final real-world results, in about a week or two.
:%s/Epyc/EPYC/g
Test results can be tainted when you specifically ask Contabo for test servers. They could provide you servers on almost empty nodes, or lift IOPS limits. Rather, spend your own money, complete rigorous testing, and cancel with a refund request and the reason behind it. By the way, this would also be a good indication of the provider's billing practices.
You don't see a Michelin inspector come to a restaurant and introduce himself.
P.S. Yes, because of your test results I'm getting a Contabo VPS.
The health inspector of Contabo VPS's!
It's not "could provide you servers in almost empty nodes" - they did provide servers on almost empty nodes. Simple reason: as I clearly said, this review is based on pre-launch testing (another review round, based on production testing, will follow in one or two weeks).
As for the network, I don't expect significant changes between pre-launch and production; after all, we're talking about DCs that are already filled with thousands of nodes. Wrt the processor I do expect a slight decrease in production, that is, on fully occupied nodes.
But for the major factor, the NVMe drives, I don't expect negative surprises, because in that regard my benchmark absolutely did not run on an empty node but on one that had 5 beefy VMs hammering the NVMe really hard.
If you (not you specifically) want anonymous benchmarks, then you'll have to sponsor me, because I can't be expected to do all the work and then even pay for the VMs.
That said, I'm quite confident that Contabo didn't cheat - for different reasons, one of them being that in the long run they'd f_ck themselves, another and more important one being that (a) they are not stupid, and (b) their offering test VMs to me serves marketing only to a very limited degree; it's rather for their own purposes, roughly comparable to a "pre-flight test".
It's probably a bad thing that there aren't per-VM limits to curb abuse and keep things fairer. If there were limits, doing whatever on 5 other VMs wouldn't impact the server under test, and you'd have your upper performance limit. Instead, it'll be affected by the biggest abusers.
Contabo is the champion of overselling.
Your NVMe won't help much when you have 50% to 75% steal and the CPU is the bottleneck.
I am not saying Contabo is bad - I used them for many months without any downtime, their network was alright, SSD speeds were acceptable - but I have always had > 50% steal at all times, whether it was Germany or St. Louis or the new Seattle location. Even moving my VPS didn't help. If you care about CPU performance, look elsewhere, or at their VDS line.
Checking out contabo in other tab.
@jsg .. is this the review that caused you to curse that Cloudf*** shit?
You are probably right but as a benchmarker it's not my job to discuss provider politics (nor could I successfully).
Any tangible credible evidence? Don't get me wrong but @itsnotv saying so isn't good enough, especially when I do not see that on my (private, paid for) VPSs.
Yes. CF made it basically impossible for me to put this review up. I finally succeeded only thanks to @jbiloh helping me.
Great review. Would you like to include your benchmarking script so we can use it in other providers' servers to compare?
Thank you, but: it's not a script but a compiled program (freeware but not open source).
Great contribution to the community, thanks @jsg
Yeah, contabo. Totally not overselling the CPU.
I suggest you learn about the difference between VPS and VDS.
Quick and short explanation: 'steal' reflects the OS's view, which assumes that an OS vCPU is a HWT (hardware thread). With a VDS that is what you get: a full vCore as seen by the OS.
A VPS's vCore, however, is a shared HWT, typically 25% or, if you are lucky, 33% of a HWT (and even less than 25% with some products/providers).
top and similar utilities report from the OS's perspective, not from the VPS perspective - or, to be more precise, they assume that a vCore is a full vCore/HWT. VPSes however are "fair share", that is, a provider packs e.g. 3 VPS vCores onto one HWT and then talks about something like "2 vCPU, fair share 33%" - and if on such a VPS 'steal' shows even 60%, that is normal and to be expected.
You don't like that, you want a full HWT for yourself? No problem, just buy a VDS.
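For anyone who wants to see the raw counter that top summarizes as "st": on a Linux guest, steal time is the 8th value after "cpu" on the aggregate line of /proc/stat (in clock ticks). A minimal sketch - sampling twice and computing steal's share of the interval (assumes Linux; the variable names are mine):

```shell
# Sample the aggregate "cpu" line of /proc/stat twice, one second apart,
# and compute steal's share of the elapsed ticks. With $1 being "cpu",
# $9 is the steal field.
sample() { awk '/^cpu /{t=0; for(i=2;i<=NF;i++) t+=$i; print t, $9}' /proc/stat; }
set -- $(sample); t1=$1; s1=$2
sleep 1
set -- $(sample); t2=$1; s2=$2
dt=$((t2 - t1)); [ "$dt" -gt 0 ] || dt=1   # guard against a zero interval
steal_pct=$((100 * (s2 - s1) / dt))
echo "steal over the last second: ${steal_pct}%"
```

On a "fair share" VPS as described above, a persistently high value here is expected behaviour, not necessarily a broken node.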
I suggest you read what "st" in top means. Can't understand what "fair share" means? No problem. He can.
edit: oops, sorry for shitting in your thread. Will take it outside next time.
Contabo has been amazing since their expansion to the STL region (<15 ms latency to my home). What I like most is their fast and free snapshot for the whole VPS - extremely handy.
Regardless, if one SSHes into their VPS and top reports those numbers, the server is shit and not performing well.
Out of 20+ VMs, the ones with 2, 5, and 7 for those numbers are noticeably shittier than lower-resource VPSes. It's like, "shit, why is this so slow?" checks top "ah, ok".
Also, why even talk about VDS when that isn't the product being reviewed? (You probably should summarize the DUT at the start.) Anyway, it's irrelevant, since nobody is talking about needing 100% CPU 24/7; people just want a responsive server, not a laggy one.
He no longer makes it available to the public because I (and maybe others) question the validity of the results and releasing it publicly would allow others to prove or disprove his results and the reproducibility.
You sure that means server is shit?
Imagine how long it took to log into that server and start top.
Fair share, not used cores for bursting, yeah right
If it takes more than 2 seconds for the bash prompt after sshing in, a keyboard is getting destroyed.
What a pile of BS and lies!
The core algorithms are still the same as in v. 1 for which I made the source code available. Result of all the "we need the source code!!!" virtue signalling? Less than 3 source code downloads and zero feedback, positive or negative.
I didn't mention the source code and the topic was providing your script/program, not the source code. I don't know what the fuck you're responding to. It's like you're deliberately responding to something else, entirely.
You didn't even prove I was lying by providing the program used for this review.
I still have the v1 programs if someone wants to try it on their VPS and see random numbers. It's just a waste of time. Anyone can search your old threads and see my posts and screenshots from the app.
(Also, you 100% got feedback. Lots of negative from me, and another user asked you about the code specifically).
Not only is the binary available but I in fact even created a Windows version for a LET user who asked for it.
And no, I did not get feedback re the source code, zero, none. I got feedback re the benchmarks/reviews I did with my software, some of it, yours, utterly clueless but consistently salty, occasionally negative from some others and mostly positive.
While I'm at it, let me make it clear: I had and have multiple providers - small, mid-size, and large - expressly asking me to benchmark their products, sometimes even just for their own internal use. Also, some authors of other benchmarks (scripts, in their case) have publicly declared interest in me providing a library to them, e.g. for disk testing, because, as at least one of them honestly said, he doesn't really know much about it. Plus, I have been invited to be LET/LEB's official benchmarker and have done plenty of benchmarks as LET benchmarker.
To make it clear, I'm absolutely ready to stop doing and publishing my benchmarks and reviews, if that is what LET wants, if they prefer your clueless trash talking and bashing over what I have done and am doing for LET/LEB.
In fact, I'm tagging @jbiloh and @raindog308 who also happen to know about certain agreements I happened to honour and you happened to break - again.
That's true. I didn't think I was starting anything new, just restating the past, but I'll bow out now.
No.
Taking a big dump on me and my work, fervently bashing them again, and then quickly "bowing out" doesn't cut it anymore, because you have clearly demonstrated that you don't stick to agreements, nor do you treat a hand reached out to you in friendship decently.
Sorry, this time I want a clear and binding solution, because with you, generosity, patience and even friendliness just lead to yet another dump attack sooner or later. As you just demonstrated again.
Time out guys. I for one appreciate what @jsg has done thus far, it is a very positive service. There is no need to continue your disagreements as it's obvious that you don't agree with each other. I'm following this thread, and it has derailed into a personal feud. Enough.
Things have only just begun
Lol,
TL;DR: somebody PMS-ing..
Can someone confirm that nested virtualization (the svm CPU flag on AMD) is indeed enabled on the production systems? @jsg mentioned the
hypervisor
flag is enabled on his test systems, but I'm not sure this is the same as the svm and vmx flags? Contabo support told me nested virtualization is not possible. Maybe they are misinformed?
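Worth noting: the `hypervisor` flag only says the OS is itself running as a guest; it says nothing about nested virtualization. For that, svm (AMD) or vmx (Intel) must be passed through to the guest. On a Linux guest you can check directly (a sketch; the review's test VM ran FreeBSD, where you'd grep the Features lines in dmesg instead):

```shell
# "hypervisor" only means we ARE a guest; running our own hypervisor
# additionally needs svm (AMD) or vmx (Intel) exposed to the guest.
if grep -qEw 'svm|vmx' /proc/cpuinfo; then
    nested=yes
else
    nested=no
fi
echo "svm/vmx exposed to this guest: $nested"
```

The flag list quoted earlier in this thread shows neither svm nor vmx, which is consistent with what Contabo support said.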