Multi-Site NexusBytes Benchmark & Review
I guess NexusBytes needs no introduction here; they are well known. Today's review looks at NexusBytes' location in Germany and at their nodes in Utah, Los Angeles, New York, and Miami.
Note that there are also ongoing benchmarks in the UK and in Singapore; the reviews for those two will be available once the full-month testing is finished, probably in early October.
I'll start at the end, with their support. Frankly, there is not much to say there. Their support is well known to be great. Short reaction times, fast problem solving, and a very customer-focused attitude are what one has come to expect from them, and that is what one gets.
So, let's get to the beef, to the data ...
FYI: All tested nodes are Ryzen, 2 GB memory, 30 GB disk.
Germany
Version 2.0.3b, (c) 2018+ jsg (->lowendtalk.com)
Machine: amd64, Arch.: amd64, Model: AMD Ryzen 7 3700X 8-Core Processor
OS, version: FreeBSD 12.0, Mem.: 1.985 GB
CPU - Cores: 2, Family/Model/Stepping: 23/113/0
Cache: 32K/32K L1d/L1i, 512K L2, 32M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
pse36 cflsh mmx fxsr sse sse2 sse3 pclmulqdq ssse3 fma cx16 sse4_1
sse4_2 popcnt aes xsave osxsave avx f16c rdrnd hypervisor
Ext. Flags: fsgsbase bmi1 avx2 smep bmi2 syscall nx mmxext fxsr_opt pdpe1gb
rdtscp lm lahf_lm cmp_legacy svm cr8_legacy lzcnt sse4a misalignsse
3dnowprefetch osvw
ProcMem SC: avg 441.2 - min 406.7, max 452.2
ProcMem MC: avg 862.0 - min 794.0, max 933.6
--- Disk - Buffered ---
Write seq.: avg 1829.13 - min 1119.05 (61.2%), max 1954.50 (106.9%)
Write rnd.: avg 8059.65 - min 6449.13 (80.0%), max 9288.91 (115.3%)
Read seq.: avg 1674.65 - min 1457.46 (87.0%), max 1802.35 (107.6%)
Read rnd.: avg 6786.44 - min 5972.53 (88.0%), max 7583.30 (111.7%)
--- Disk - Sync/Direct ---
Write seq.: avg 161.29 - min 124.34 (77.1%), max 176.58 (109.5%)
Write rnd.: avg 310.82 - min 275.27 (88.6%), max 345.73 (111.2%)
Read seq.: avg 3345.17 - min 2979.47 (89.1%), max 3716.67 (111.1%)
Read rnd.: avg 835.94 - min 742.24 (88.8%), max 905.85 (108.4%)
US LAX lax.download.datapacket.com
DL [MB/s]: avg 39.80 - min 31.19 (78.4%), max 45.96 (115.5%)
Ping: avg 151.6 - min 148.2 (97.7%), max 158.3 (104.4%)
Web ping: avg 152.7 - min 149.9 (98.2%), max 162.2 (106.2%)
NO OSL speedtest.osl01.softlayer.com
DL [MB/s]: avg 144.74 - min 120.35 (83.1%), max 156.74 (108.3%)
Ping: avg 33.3 - min 33.5 (100.7%), max 40.4 (121.4%)
Web ping: avg 133.2 - min 33.6 (25.2%), max 2623.9 (1970.6%)
US SJC speedtest.sjc01.softlayer.com
DL [MB/s]: avg 34.66 - min 4.22 (12.2%), max 39.14 (112.9%)
Ping: avg 150.2 - min 150.4 (100.1%), max 160.8 (107.1%)
Web ping: avg 166.6 - min 150.8 (90.5%), max 1452.8 (871.9%)
JP TOK speedtest.tokyo2.linode.com
DL [MB/s]: avg 23.78 - min 21.37 (89.9%), max 26.14 (109.9%)
Ping: avg 249.8 - min 230.5 (92.3%), max 279.0 (111.7%)
Web ping: avg 251.6 - min 232.9 (92.6%), max 283.0 (112.5%)
AU MEL speedtest.mel01.softlayer.com
DL [MB/s]: avg 18.46 - min 0.00 (0.0%), max 21.35 (115.6%) - (http error: -10)
Ping: avg 12920.7 - min 273.6 (2.1%), max 923086.5 (7144.2%)
Web ping: avg 12984.5 - min 273.8 (2.1%), max 923086.5 (7109.2%)
IT MIL speedtest.mil01.softlayer.com
DL [MB/s]: avg 183.99 - min 3.69 (2.0%), max 244.71 (133.0%)
Ping: avg 5660.3 - min 21.0 (0.4%), max 653015.3 (11536.8%)
Web ping: avg 5710.4 - min 21.1 (0.4%), max 653015.3 (11435.5%)
FR PAR speedtest.par01.softlayer.com
DL [MB/s]: avg 233.56 - min 198.65 (85.1%), max 248.93 (106.6%)
Ping: avg 21.7 - min 21.1 (97.1%), max 27.2 (125.1%)
Web ping: avg 62.3 - min 21.3 (34.2%), max 2416.7 (3876.6%)
SG SGP mirror.sg.leaseweb.net
DL [MB/s]: avg 36.15 - min 33.15 (91.7%), max 38.87 (107.5%)
Ping: avg 159.1 - min 159.6 (100.3%), max 175.5 (110.3%)
Web ping: avg 159.2 - min 159.6 (100.2%), max 178.4 (112.1%)
BR SAO speedtest.sao01.softlayer.com
DL [MB/s]: avg 27.06 - min 13.51 (49.9%), max 29.36 (108.5%)
Ping: avg 193.2 - min 191.5 (99.1%), max 199.1 (103.0%)
Web ping: avg 221.4 - min 191.8 (86.6%), max 2246.2 (1014.4%)
IN CHN speedtest.che01.softlayer.com
DL [MB/s]: avg 29.65 - min 18.15 (61.2%), max 35.40 (119.4%)
Ping: avg 185.9 - min 155.7 (83.8%), max 320.4 (172.4%)
Web ping: avg 199.2 - min 157.2 (78.9%), max 1204.5 (604.6%)
GR UNK speedtest.ftp.otenet.gr
DL [MB/s]: avg 101.40 - min 0.00 (0.0%), max 127.16 (125.4%) - (http error: -10)
Ping: avg 8275.3 - min 42.1 (0.5%), max 954924.2 (11539.5%)
Web ping: avg 8282.5 - min 42.1 (0.5%), max 954924.2 (11529.4%)
US WDC mirror.wdc1.us.leaseweb.net
DL [MB/s]: avg 53.50 - min 43.17 (80.7%), max 63.18 (118.1%)
Ping: avg 95.6 - min 97.2 (101.7%), max 97.6 (102.1%)
Web ping: avg 97.4 - min 97.2 (99.8%), max 119.4 (122.6%)
DE FRA speedtest.fra02.softlayer.com
DL [MB/s]: avg 328.30 - min 4.91 (1.5%), max 447.75 (136.4%)
Ping: avg 27.8 - min 11.6 (41.7%), max 114.8 (413.0%)
Web ping: avg 87.1 - min 11.9 (13.7%), max 1892.2 (2172.5%)
RU MOS speedtest.hostkey.ru
DL [MB/s]: avg 142.49 - min 58.49 (41.0%), max 163.75 (114.9%)
Ping: avg 41.4 - min 39.8 (96.1%), max 109.6 (264.7%)
Web ping: avg 42.2 - min 39.8 (94.3%), max 109.6 (259.7%)
US DAL speedtest.dal05.softlayer.com
DL [MB/s]: avg 43.56 - min 5.54 (12.7%), max 49.34 (113.3%)
Ping: avg 122.7 - min 121.8 (99.3%), max 127.1 (103.6%)
Web ping: avg 139.0 - min 121.8 (87.6%), max 1270.5 (914.1%)
UK LON speedtest.lon02.softlayer.com
DL [MB/s]: avg 300.23 - min 223.91 (74.6%), max 330.15 (110.0%)
Ping: avg 17.7 - min 17.0 (96.2%), max 23.3 (131.8%)
Web ping: avg 83.5 - min 17.2 (20.6%), max 1607.6 (1925.1%)
US NYC nyc.download.datapacket.com
DL [MB/s]: avg 61.25 - min 33.40 (54.5%), max 77.82 (127.1%)
Ping: avg 87.6 - min 80.3 (91.7%), max 172.2 (196.6%)
Web ping: avg 91.3 - min 80.9 (88.6%), max 338.7 (371.0%)
RO BUC 185.183.99.8
DL [MB/s]: avg 163.15 - min 95.63 (58.6%), max 211.40 (129.6%)
Ping: avg 30.5 - min 28.8 (94.4%), max 36.5 (119.7%)
Web ping: avg 33.9 - min 28.9 (85.2%), max 117.4 (346.1%)
What we see here is a typical higher-end AMD Zen system. Maybe worth mentioning: AES, Hypervisor, Popcnt, all the nice flags are there. Thumbs up!
Not surprisingly, we also see in the numbers what we expect from a modern Ryzen. About 450 single core and 862 multi core is a very decent result. The spread (that is, the difference between min and avg and between avg and max) is very decent for a KVM node. All in all the node is in the top class wrt processor and memory.
Before we go to the disk results, let me first show you the results of a local Ryzen (1700, in my lab) with a decent but not high-end Samsung 860 Evo SSD (not in KVM but in a VirtualBox VM, which shouldn't make much of a difference wrt disks). Far more influential are the host's caches which, keep that in mind, are transparent to the VM.
Buff'd
WS 1101.56
WR 3334.45
RS 3254.95
RR 5182.29
Sync/Direct
WS 93.49
WR 187.90
RS 3149.60
RR 318.36
So, a decent SSD that according to the usually available data (incl. quite a few "benchmarks") is good for about 500 MB/s actually does about 300 MB/s on my host in sync/direct mode, but just about 100 MB/s in a VM. Now, obviously my host is not at all configured for serving as a hosting node, so we can't expect those results to transfer 1:1 to a hosting node presumably optimized for that job, but still: do not trust manufacturer data or quite a few benchmarks which, sorry to spill the beans, try to make products look good (or else they might lose an advertiser ~ income ...).
And again, what's the difference anyway between buffered and sync/direct mode for you as a user? Crudely summarizing, one might say that most applications use buffered mode and only relatively few applications use sync/direct mode; databases are a prominent example. Why? Because data consistency is of high importance to them, and one price to be paid for that is disk speed. Side note: always be sure to have a reasonable set of indices as well as good and plenty of caching on a DB system. A DB can manage its own caches better than the OS, which is why a properly configured DB offers far better speed than my sync/direct mode results suggest, in fact close to or even better than the OS buffered speed.
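You can see the buffered vs. sync gap on any POSIX box yourself. A minimal sketch (my own toy, not vpsbench; `O_SYNC` is POSIX-only and the numbers will of course differ per machine):

```python
import os
import tempfile
import time

def time_writes(flags, label, count=200, size=64 * 1024):
    """Write `count` blocks of `size` bytes; return throughput in MB/s."""
    block = b"x" * size
    path = os.path.join(tempfile.gettempdir(), f"bench_{label}.tmp")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | flags)
    start = time.perf_counter()
    for _ in range(count):
        os.write(fd, block)
    os.close(fd)
    elapsed = time.perf_counter() - start
    os.remove(path)
    mbs = count * size / elapsed / 1e6
    print(f"{label}: {mbs:.1f} MB/s")
    return mbs

# Buffered: the OS page cache absorbs the writes.
buffered = time_writes(0, "buffered")
# Sync: each write must reach stable storage before returning (POSIX O_SYNC).
synced = time_writes(os.O_SYNC, "sync")
```

Expect the sync number to be a small fraction of the buffered one, which is exactly the gap the Buffered vs. Sync/Direct sections above reflect.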
Now back to NexusBytes ...
To cut it short, I'm not enchanted by what I see there. They are OK, those disks, not at all lousy and much better than e.g. the SSD in my lab system, but no matter how one turns it, they are not great. The good news is that @seriesn and I discussed that, and I have reason to believe that NexusBytes is working on pimping up their approach to disks. Also, their spread is quite decent.
As for the network, sorry, I'm not going to write a book (which it would become with all those network tests ...), so look at the numbers yourself. Instead I'll provide a small guide on how to interpret them.
The percent numbers after min and max are in relation to avg. What you want is a low percent value after the max. Simple reason: a low max percent value means that the average performance is quite close to the max performance. In contrast, for min you want high percent values, because that indicates you can expect a consistently decent speed.
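For the avoidance of doubt, those percentages are simply each extreme divided by the average; a two-line sketch (my own helper, not part of vpsbench):

```python
def spread(samples):
    """Return (avg, min%, max%) with the percentages relative to avg,
    matching the 'avg X - min Y (Z%), max ...' format in the tables above."""
    avg = sum(samples) / len(samples)
    return avg, 100 * min(samples) / avg, 100 * max(samples) / avg

# Hypothetical samples, just to show the shape of the output:
avg, lo_pct, hi_pct = spread([406.7, 441.2, 452.2])
print(f"avg {avg:.1f} - min {lo_pct:.1f}%, max {hi_pct:.1f}%")
```

A min% near 100 and a max% not far above 100 means a tight spread, i.e. consistent performance.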
The ping is just what you know as a ping, but the web ping is a different animal. It tells you how fast the target HTTP server reacted. If the web ping is more than a bit higher, say 10% or so, or if it shows a high spread or an avg closer to min than to max, you might want to look for another test target.
But web ping has yet another use: in the context of download speed it helps you see who's the culprit for a slow speed. A fast web ping combined with a slow download speed strongly hints at bad connectivity, because the web ping indicates how quickly the server responds (but not how quickly the data are transferred).
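vpsbench's actual web ping implementation isn't shown here, but a rough do-it-yourself equivalent is timing an HTTP HEAD request (headers only, no payload), which is what makes it a server-responsiveness measure rather than a transfer-speed measure:

```python
import time
from http.client import HTTPConnection

def web_ping(host, port=80, path="/", timeout=5):
    """Time one HTTP HEAD request in milliseconds -- a rough stand-in
    for vpsbench's web ping (the real tool may work differently)."""
    conn = HTTPConnection(host, port, timeout=timeout)
    start = time.perf_counter()
    conn.request("HEAD", path)
    conn.getresponse().read()
    conn.close()
    return (time.perf_counter() - start) * 1000

# Example (needs network access):
# print(f"{web_ping('speedtest.lon02.softlayer.com'):.1f} ms")
```

Run it a handful of times: outliers like the 2623.9 ms web ping max at NO OSL above then point at the test target (or the path to it), not at the VPS.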
Sometimes you see an 'http error: X' message (with X usually -10), which is yet another hint that there is a problem with downloading the data. Note however that the problem may be on either side or anywhere in between.
All in all, relatively decent connectivity, although there are some hiccups and some of them are really ugly (e.g. AU MEL).
Comments
USA, Utah
On this VM the processor is weaker wrt single-core performance than on the German VM, but multi core is just as good. Maybe there is an abuser, plus probably the node is full. But still, any single-core result above 300 is really good, and the multi-core performance is very, very good considering that we are talking about only 2 cores here.
The disk is a bit worse than on the German node too, but still considerably faster than my 860 Evo with just a single user, so I won't complain.
The network is interesting for me as a European because I'm used to seeing much better results than what seems to be normal for the USA. Also, if a VPS here in Europe offered just about 30 MB/s I'd turn away. Come on, LON, FRA, PAR and even most US cities below 100 MB/s, strange. On the other hand, look at the LAX result! Yay, that's what I want to see. Also note the very consistent quality (really low spread). The speeds to East Asia and even to ozzyland are quite nice too.
USA, Los Angeles
Yay, this is probably the node I'd choose if I lived in the USA and wanted reasonable to good network connectivity. The fact that this node is faster to almost all targets than the one in Utah is a bit strange, but hey, the internet has its own geography. Whatever, I like what I see.
Also note that the processor and memory results as well as the disk results are better too. Nice.
USA, New York
Don't get me wrong, guys, it's not that I hate to write reviews, it's just that with a multi-location review of NexusBytes nodes it tends to get boring because, psssh, it seems that @seriesn has some kind of machine that creates nice nodes. All Ryzen, all very decent to great processor performance, all with the same disks it seems, and varying mostly only wrt network. Well, the NYC node is just another example, and the one I'd choose if I lived in the USA and were interested in good connectivity to Europe.
Amazing. (y)
Waiting for the SG review.
USA, Miami
Remember what I just said about NexusBytes seeming to have a machine that generates really nice nodes? Well, this is yet another example. But I seem to notice a pattern: NB has a great node plus another, slightly lesser node in the West (USA); well, they seem to have the same in the East: a really nice node in New York and another node in Miami whose results trail a bit.
Now, considering NB's pricing I certainly won't complain, quite the contrary; I can understand that many here like NB a lot (I myself have one of their VPSs). It's hard to find more processing bang for the buck. And frankly, yeah, we all like high numbers, but the reality is that only very, very few really need more than even just 10 Mb/s. So, triple that number (we want to be on the safe side, right?) and NexusBytes offers that pretty much everywhere on the globe.
If I were asked (and I happen to know that @seriesn always has open ears for feedback) I'd tell him something like this:
Reputation is immensely valuable! And yours is built on providing almost insane bang for the buck and really nice VPSs. Be sure to stick to that and to build on it, first of all in terms of high-quality nodes.
I want to thank @seriesn for the opportunity - and generosity! - of providing 5 nodes for a couple of days and 2 for a full month. Thank you!
Disclosure: I have an NB VPS myself (and am happy with it). I never made a secret of that, and most of you probably know it anyway, but I wanted to disclose it again to make sure I'm playing with open cards.
Hi @jsg,
Thank you for taking your time and sharing your valuable feedback and benchmarks. As anticipated, some things are good, some need to be researched, and some need immediate improvement.
Appreciate the effort and guidance.
And this is why feedback is important. Keep taking notes everyone!
And he does get it, even beyond the benchmark numbers. Simple reason: he once told me "transparency is important to NexusBytes", and he has always lived up to that. I reward that with input/feedback and advice as best I can. Win-win; he can grow his company and we LETters get very nice VPSs at very decent prices. Besides, @seriesn is a decent man and has earned his good reputation.
Benchmarks, the usual load of bollocks that help 99.9% of users in no valuable way.
Search for reviews on Nexus Bytes which will tell you all you need to know, bypass this useless drivel and all the others that @bsdguy will post in the future.
Thanks for your demonstration of useless drivel (plus some personal hate).
I disagree because there are some factors that people consider. For example, some people weigh network performance very highly (particularly to/from different regions). If you want a system that has a highly performant route to Tokyo or Hong Kong, then benchmarks are very relevant.
I agree that they are not decisive...we've all seen hosts that perform well right out of the gate and then overstuff their boxes, or providers that can't keep their systems up, etc. But benchmarks do serve a purpose. Not only are there plenty of tools out there which are regularly used by our community (e.g., ioping, etc.) but there are a lot of other benchmarks in use (e.g., the venerable bench.sh). I personally weigh reputation very highly but I don't ignore benchmarks.
I know this was a popular conspiracy theory back in the day, but having interacted with both bsdguy and @jsg a lot, I am convinced they are entirely separate people. Since I'm the one who banned bsdguy and interacted with him the most, that should count for something.
I realize the facts that enable these suspicions: @jsg appeared after bsdguy was banned, and both work in software development and know a lot about one specialized security-focused area. However, bsdguy couldn't set foot in a thread without starting a flame war, and he was even crazier to deal with as a mod. I found it very difficult, bordering on impossible, to have a sane conversation with him.
On the other hand, jsg is polite, rational, and easy to communicate with. Where bsdguy had zero interest in sharing knowledge (only in endlessly and fatuously boasting about the rarely-demonstrated expanse of his own wisdom), jsg has made a lot of posts where he contributes. bsdguy would never have taken the time to develop a benchmark suite and post results here. There were also various personal facts that bsdguy let slip over his time here that jsg does not echo.
I suppose it's possible that bsdguy just started taking his meds or something...but really, beyond some surface coincidences, they're night and day. So if anyone has some proof beyond coincidence, open my eyes, but I consider this speculation dead.
There is an easy answer to this question: no, and even if, no.
This is a fine benchmark if you are looking for a web crawler/data crunching host.
Due to abuse concerns, publicly sponsored iperf(3) servers still remain the only way to run anonymous tests for outgoing network speeds. (@masonr's YABS can fill that hole for users who want to test outgoing as well. 'Servers' deliver data, after all.)
Comments:
"web ping" is something I've not yet seen in any other benchmark and it's like a ping but using the relevant protocol. Plus it detects tricks that some providers and carriers play with ping (e.g. advantageous routing). Also keep in mind that there is more to network testing than TCP frame size. Pushing 64 bytes (a typical ping packet size) pushing 1400 bytes (a size that most equipment like routers can pass in one step) or pushing 100 MB makes a very significant difference. And even the question how the OS and the local router slice and handle those packets makes a big difference.
As even this first public benchmark shows, my web ping test is useful in more than one regard.
What is vpsbench about anyway? It's about a first but reasonably extensive impression; in other words, it offers what people interested in a provider and/or VPS are interested in.
What vpsbench is not is an extensive benchmark of particular performance characteristics like e.g. floating point performance or AES performance.
However: the most typical tasks of an internet server are about pulling, processing, and pushing large amounts of data, and that is accordingly what vpsbench tests for. The performance of your nginx server, for example, or of your file server is not significantly defined by floating point or GPU performance. So vpsbench concentrates on testing what most internet servers boil down to.
Besides providing a good guideline in terms of "what can I expect as a potential buyer of a server?", vpsbench also focuses on looking at "what's under the hood". It for example allows one to find out whether what seems to be good disk performance is due to massive caching or due to high-quality/fast hardware and smart configuration. This however is less reflected in my published benchmarks, because it's less relevant for a "first impression" ("what do I get for my money?") and it needs well-informed use of vpsbench's parameters; examples are the download window size or the disk round size and number of rounds. I have used vpsbench's full power successfully both at the request of providers who want to find weak spots and spot potential for enhancements, and out of my own curiosity (e.g. to crash through different levels of disk caching).
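The caching point is easy to demonstrate without vpsbench (whose parameters I won't reproduce here). The general trick is simply to grow the working set: a rough sketch of my own, not vpsbench's method, that times reads of increasingly large files. Small files tend to come straight from the host's page cache; once the file outgrows the cache, the real disk shows:

```python
import os
import tempfile
import time

def read_throughput(size_mb):
    """Write a size_mb MB file, read it back, return read speed in MB/s."""
    path = os.path.join(tempfile.gettempdir(), "cache_probe.tmp")
    chunk = os.urandom(1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# Probe a few sizes; a sharp drop at some point hints at a cache boundary.
for mb in (16, 64):
    print(f"{mb} MB: {read_throughput(mb):.0f} MB/s read")
```

On a well-cached node you may need files of several GB (and ideally a cache-dropping step between write and read) before the numbers fall off a cliff.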
I personally see only one point that leaves me somewhat unhappy, and that is the network test target servers. Well noted, vpsbench users can use their own test targets, but doing reviews I'm bound to stay with one target set as far as possible so as to avoid unfairness. After all, we want all tested providers and products tested on a fair and level playing field.