Multi-site, Multi-VPS review -> HostDoc

jsg Member, Resident Benchmarker

First, kudos to @HostDoc, who actually contacted me a while ago and asked for a benchmark & review. Most of my reviews are based on me offering to benchmark & review and a provider accepting. I took HostDoc's invitation as a sign of a solid level of self-confidence, and I found that largely justified during my testing.

Intro

I was given 3 VPSs for benchmarking and reviewing: two "Cure" (KVM) VPSs, one in Europe (France) and the other in Dallas (if I'm not mistaken), plus a third VPS, also in Dallas (or Phoenix?), which is a "Hybrid" model (also KVM) and probably particularly interesting for some as it offers some dedicated resources.

I tested all 3 VPSs for about a week and ran benchmarks at different times, both day and night.

Also note that I have amended my review a bit by looking at both what I call a tight frame and a rather wide frame (each differing depending on the test). This is a nice way to get an idea of a VPS's performance consistency.

The VPSs tested:

Here is the sysinfo of the Paris "Cure" VPS:

Machine: amd64, Arch.: amd64, Model: Intel(R) Xeon(R) E-2136 CPU @ 3.30GHz
Memory.: 1.985 GB
CPU - Cores: 2, Family/Model/Stepping: 6/158/10
Cache: 32K/32K L1d/L1i, 256K L2, 12M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
          pse36 cflsh mmx fxsr sse sse2 ss sse3 pclmulqdq vmx ssse3 fma cx16
          pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline aes xsave osxsave
          avx f16c rdrnd hypervisor
Ext. Flags: fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx
          rdseed adx smap clflushopt syscall nx pdpe1gb rdtscp lm lahf_lm lzcnt

25 GB disk

Here is the sysinfo of the Dallas "Cure" VPS:

Machine: amd64, Arch.: amd64, Model: Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz
Memory.: 1.985 GB
CPU - Cores: 2, Family/Model/Stepping: 6/85/4
Cache: 32K/32K L1d/L1i, 1024K L2, 19M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
          pse36 cflsh mmx fxsr sse sse2 ss sse3 pclmulqdq vmx ssse3 fma cx16
          pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline aes xsave osxsave
          avx f16c rdrnd hypervisor
Ext. Flags: fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx pat
          pse36 rdseed adx smap clflushopt clwb sha pku syscall nx pdpe1gb
          rdtscp lm lahf_lm lzcnt

25 GB disk

And here is the sysinfo of the "Hybrid" VPS:

Machine: amd64, Arch.: amd64, Model: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
Memory.: 2.985 GB
CPU - Cores: 1, Family/Model/Stepping: 6/62/4
Cache: 32K/32K L1d/L1i, 256K L2, 25M L3
Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
          pse36 cflsh mmx fxsr sse sse2 ss sse3 pclmulqdq vmx ssse3 cx16 pcid
          sse4_1 sse4_2 x2apic popcnt tsc_deadline aes xsave osxsave avx f16c
          rdrnd hypervisor
Ext. Flags: fsgsbase tsc_adjust smep erms syscall nx pdpe1gb rdtscp lm lahf_lm

2 disks: 30 GB + 60 GB (NVMe)

Unless otherwise noted, the disks are most likely SSDs.

And here is the data:

Paris "Cure" VPS:

Processor/memory:
64 rounds ~ 1.00 GB -> **Median 468**, Min 420, Max 507, 79% wi 5%, 95% wi 10%
4 times 64 rounds ~ 4.00 GB -> **Median 1150**, Min 937, Max 1250, 89% wi 5%, 95% wi 10%

Disk (MB/s):
Sequential writing -> **Median 248**, Min 203, Max 331, 68% wi 10%, 100% wi 25%
Random writing     -> **Median 183**, Min 155, Max 261, 79% wi 10%, 100% wi 25%
Sequential reading -> **Median 455**, Min 378, Max 880, 79% wi 10%, 100% wi 25%
Random reading     -> **Median 416**, Min 361, Max 707, 79% wi 10%, 100% wi 25%

Net (Mb/s):

UK_LON   **Median 515.2**, Min 416, Max 540.9, 63% wi 10%, 100% wi 25%
AU_MEL   **Median 17.9**, Min 16.7, Max 18.5, 100% wi 10%, 100% wi 25%
IN_CHN   **Median 20.9**, Min **0.5**, Max 27.0, 53% wi 10%, 53% wi 25%
SG_SGP   **Median 22.0**, Min 9.1, Max 22.5, 95% wi 10%, 95% wi 25%
DE_FRA   **Median 343.8**, Min **7**, Max 379.6, 68% wi 10%, 68% wi 25%
IT_MIL   **Median 159.9**, Min 150.4, Max 183.9, 100% wi 10%, 100% wi 25%
FR_PAR   **Median 397.5**, Min 334.6, Max 412.2, 89% wi 10%, 100% wi 25%
RU_MOS   **Median 112.1**, Min 93.2, Max 116.0, 89% wi 10%, 100% wi 25%
BR_SAO   **Median 21.0**, Min 11.7, Max 24.1, 79% wi 10%, 95% wi 25%
US_DAL   **Median 41.7**, Min 30.3, Max 43.4, 95% wi 10%, 95% wi 25%
US_SJC   **Median 31.6**, Min 29, Max 33.7, 100% wi 10%, 100% wi 25%
US_WDC   **Median 49.9**, Min 38.3, Max 55.6, 95% wi 10%, 100% wi 25%
JP_TOK   **Median 22.8**, Min 20.6, Max 24.1, 95% wi 10%, 100% wi 25%
RO_BUC   **Median 110.7**, Min **1.5**, Max 123.8, 95% wi 10%, 95% wi 25%
GR_UNK   **Median 76.9**, Min 31.7, Max 87.4, 95% wi 10%, 95% wi 25%
NO_OSL   **Median 154.8**, Min 151, Max 164.5, 100% wi 10%, 100% wi 25%

("x% wi y%" means "x% of the results are within y% of the median")
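
For illustration, here is a minimal sketch in plain C (not the actual vpsbench code; the sample values are made up) of how such consistency figures can be computed from a series of samples:

#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b) {
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Percentage of samples within +/- frac (e.g. 0.10 = 10%) of the median. */
static double pct_within(const double *s, size_t n, double median, double frac) {
    size_t hits = 0;
    for (size_t i = 0; i < n; i++)
        if (s[i] >= median * (1.0 - frac) && s[i] <= median * (1.0 + frac))
            hits++;
    return 100.0 * hits / n;
}

int main(void) {
    /* Made-up throughput samples in MB/s, one per benchmark run. */
    double s[] = {248, 203, 331, 255, 240, 251, 247, 260, 239, 249};
    size_t n = sizeof s / sizeof s[0];

    qsort(s, n, sizeof s[0], cmp_double);
    double median = (n % 2) ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;

    printf("Median %.1f, Min %.1f, Max %.1f, %.0f%% wi 10%%, %.0f%% wi 25%%\n",
           median, s[0], s[n - 1],
           pct_within(s, n, median, 0.10),
           pct_within(s, n, median, 0.25));
    return 0;
}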

Discussion:

The processor is a decent CPU for a relatively cheap VPS and has all the interesting flags (like aes or avx) available. The performance is pleasantly high and also quite consistent; deviations are well within what is normal for a VPS node and there are no signs of overselling. Obviously the memory is not oversold either and is of decent speed (this part of my benchmark seriously stresses the processor, the memory, and the cache).

The disk is nothing to write home about (I'd call it solid middle class) but its performance is consistent (note that 70% to 80% of all test results are within 10% of the median).

The network is also what I'd call solid middle class. Actually, I would have given it a somewhat better mark if there weren't quite a few rather unstable destinations. Some of those are notorious (e.g. Chennai, India), but Frankfurt, with about a third of the tests falling outside 25% of the median? From Paris? (For American readers: Frankfurt is basically "just around the corner" from Paris, plus it's one of the top 3 European internet exchanges.)

As this is getting long (which is to be expected with a review of 3 VPSs) I'll continue in another post a bit later ...

Disclaimer: I am not a customer of HostDoc, and HostDoc provided nothing to me (other than the VPSs for testing), nor did they ask for anything in particular or promise anything. The software I used is my own benchmark program, v. 1.02e.

Comments

  • jsg Member, Resident Benchmarker

    Part 2, the Dallas "Cure" VPS

    CPU/Mem:
    64 rounds ~ 1.00 GB -> **Median 353**, Min 327, Max 364, 88% wi 5%, 100% wi 10%
    4 times 64 rounds ~ 4.00 GB -> **Median 766**, Min 687, Max 821, 94% wi 5%, 100% wi 10%

    Disk (MB/s):
    Sequential writing -> **Median 321**, Min 291, Max 372, 100% wi 10%, 100% wi 25%
    Random writing     -> **Median 335**, Min 299, Max 408, 94% wi 10%, 100% wi 25%
    Sequential reading -> **Median 975**, Min 666, Max 1262, 76% wi 10%, 88% wi 25%
    Random reading     -> **Median 927**, Min 782, Max 1127, 88% wi 10%, 100% wi 25%
    
    Net (Mb/s):
    UK_LON  **Median 43.4**, Min 31.7, Max 44.9, 100% wi 10%, 100% wi 25%
    AU_MEL  **Median 25.4**, Min 23.4, Max 26.3, 100% wi 10%, 100% wi 25%
    IN_CHN  **Median 19.1**, Min 16.3, Max 19.6, 82% wi 10%, 100% wi 25%
    SG_SGP  **Median 24.6**, Min 13.0, Max 25.7, 88% wi 10%, 100% wi 25%
    DE_FRA  **Median 37.1**, Min 33.0, Max 39.9, 88% wi 10%, 100% wi 25%
    IT_MIL  **Median 34.3**, Min 29.4, Max 37.0, 94% wi 10%, 100% wi 25%
    FR_PAR  **Median 42.1**, Min 39.9, Max 43.4, 100% wi 10%, 100% wi 25%
    RU_MOS  **Median 36.8**, Min 34.2, Max 38.9, 100% wi 10%, 100% wi 25%
    BR_SAO  **Median 32.5**, Min 30.8, Max 33.9, 100% wi 10%, 100% wi 25%
    US_DAL  **Median 787.6**, Min 700.5, Max 835.2, 82% wi 10%, 100% wi 25%
    US_SJC  **Median 111.3**, Min 103.4, Max 122.2, 100% wi 10%, 100% wi 25%
    US_WDC  **Median 136.6**, Min 76.7, Max 160.9, 88% wi 10%, 88% wi 25%
    JP_TOK  **Median 40.6**, Min 24.1, Max 42.3, 94% wi 10%, 94% wi 25%
    RO_BUC  **Median 31.4**, Min 0.5, Max 33.0, 94% wi 10%, 94% wi 25%
    GR_UNK  **Median 24.6**, Min 22.8, Max 27.5, 100% wi 10%, 100% wi 25%
    NO_OSL  **Median 33.8**, Min 32.5, Max 35.0, 100% wi 10%, 100% wi 25%
    

    Discussion:

    Interestingly, this system with a Xeon Gold processor, considerably more cache, and the same number of cores and amount of memory as its French "sibling" is actually slower than the French VPS. But don't worry, performance is still solidly upper middle class.

    Its disk, however, is considerably faster than the French sibling's, and the results are just as consistent, in fact even a tiny bit better.

    Another, and quite significant, difference is in the network, where this VPS is clearly more consistent than its French sibling.

  • Hybrid is indeed in Phoenix, Gold in Dallas ;)

  • jsg Member, Resident Benchmarker
    edited June 2019

    Part 3, the "Hybrid" VPS

    CPU/Mem:
    64 rounds ~ 1.00 GB -> **Median 267.1**, Min 253.4, Max 290.1, 0% wi 5%, 0% wi 10%
    4 times 64 rounds ~ 4.00 GB -> **Median 338.6**, Min 320.6, Max 384.3, 0% wi 5%, 0% wi 10%

    Disk (MB/s):
    Sequential writing -> **Median 464.9**, Min 420.0, Max 528.2, 100% wi 10%, 100% wi 25%
    Random writing     -> **Median 276.1**, Min 252.3, Max 351.7, 100% wi 10%, 100% wi 25%
    Sequential reading -> **Median 839.3**, Min 704.0, Max 1086.0, 67% wi 10%, 100% wi 25%
    Random reading     -> **Median 749.7**, Min 626.1, Max 983.4, 87% wi 10%, 100% wi 25%

    Disk 2:
    Sequential writing -> 1.189 GB/s
    Random writing     -> 1.288 GB/s
    Sequential reading -> 1.297 GB/s
    Random reading     -> 1.260 GB/s

    Net (Mb/s):
    
    UK_LON  **Median 35.5**, Min 32.0, Max 36.9, 100% wi 10%, 100% wi 25%
    AU_MEL  **Median 19.4**, Min 15.2, Max 20.1, 93% wi 10%, 100% wi 25%
    IN_CHN  **Median 22.1**, Min 0.4, Max 23.7, 93% wi 10%, 93% wi 25%
    SG_SGP  **Median 28.1**, Min 10.1, Max 29.4, 93% wi 10%, 93% wi 25%
    DE_FRA  **Median 30.7**, Min 16.8, Max 33.8, 73% wi 10%, 87% wi 25%
    IT_MIL  **Median 32.4**, Min 28.0, Max 33.6, 93% wi 10%, 100% wi 25%
    FR_PAR  **Median 34.1**, Min 33.2, Max 37.0, 100% wi 10%, 100% wi 25%
    RU_MOS  **Median 30.1**, Min 0.3, Max 32.1, 80% wi 10%, 80% wi 25%
    BR_SAO  **Median 28.3**, Min 15.3, Max 29.1, 93% wi 10%, 93% wi 25%
    US_DAL  **Median 147.0**, Min 132.2, Max 167.4, 100% wi 10%, 100% wi 25%
    US_SJC  **Median 241.8**, Min 224.7, Max 280.1, 100% wi 10%, 100% wi 25%
    US_WDC  **Median 69.5**, Min 48.7, Max 78.3, 93% wi 10%, 93% wi 25%
    JP_TOK  **Median 49.8**, Min 47.9, Max 50.6, 100% wi 10%, 100% wi 25%
    RO_BUC  **Median 27.7**, Min 13.3, Max 31.2, 93% wi 10%, 93% wi 25%
    GR_UNK  **Median 23.6**, Min 1.9, Max 25.8, 87% wi 10%, 87% wi 25%
    NO_OSL  **Median 32.9**, Min 31.8, Max 33.5, 100% wi 10%, 100% wi 25%
    

    Discussion:

    I'm a bit bewildered by the second disk which, if I'm not mistaken, is not standard. Maybe I somehow stumbled upon a new HostDoc product. Anyway, I did a quick (single) benchmark of that second disk too and found it to be a quite well-performing NVMe. Nice, really nice, and even double the size of the first and "main" disk.

    Considering that this VPS is single-core it's obviously slower than the dual-vCore VPSs, but its performance is quite nice and it also has all the interesting flags active.

    The (primary) disk is roughly in the same league as the other Dallas VPS, which means it's seriously nice and fast. I guess it's an SSD RAID 10.

    While the network isn't the fastest, it's still in the usual HostDoc ballpark. What I noticed here is how consistent it is. Not a speed demon, but consistent and reliable, it seems.

    Summary and other factors:

    Most of the communication with HostDoc (and all of it after the testing started) was via their support (tickets). Regarding speed I'd put it again in the solid upper middle class; they usually don't react within minutes, but still rather quickly, and certainly much quicker than many providers, particularly in the low end segment. And I found them consistently friendly, constructive, and helpful.

    Speaking of "consistent": if I had to summarize HostDoc, I'd say that their products are upper middle class (keep in mind that we are in the low end segment here!), but what I noticed and valued most is consistency. These VPSs seem to be ones you can rely on.

    I'd also like to note one point I'm always looking out for and rarely find satisfying: the VNC console. Usually, at best, one is provided but set to a single keyboard layout and often causing problems. Not so in this case. HostDoc did their homework, and it shows that they really care about their customers.

    Now, again, we are in the low end segment here, so I do not expect Gb/s connectivity or RAIDed NVMes. Viewed reasonably, I found the tested HostDoc systems quite nice, certainly nicer than average - and everything was consistent and reliable. Nice! One exception maybe: it seems they should have a word with some of their network providers. It's not bad, it's well within what can reasonably be expected in this market segment (and sometimes even better), but it's the one thing that (with some products) is not consistent.

    All in all I'm not jumping up and down excitedly but I'm pleased, really pleased with what I saw and experienced. I think that HostDoc should be on the short list of good providers here at LET.

  • AlwaysSkint Member
    edited June 2019

    Thanks for taking the time to do this. Subjectively, I find your overall analysis to be similar to my experience, though I rate my interaction with @HostDoc higher (upper league).
    I've been running a Dallas "Gold" for a few weeks and the only concern - though not crucial - is the higher-than-"normal" steal compared to what I've seen on other platforms. My Oz VPS is still a work in progress and my hybrid is idling too much to be commented on.

    Similar website traffic, both 2 vCPU:
    OVH Cloud VPS [image]
    HostDoc Gold [image]
    (Sorry, couldn't fathom out markdown embedded pics - life's too short!)

  • jsg Member, Resident Benchmarker

    @AlwaysSkint said:
    Thanks for taking the time to do this. Subjectively, I find your overall analysis to be similar to my experience, though I rate my interaction with @HostDoc higher (upper league).
    I've been running a Dallas "Gold" for a few weeks and the only concern - though not crucial - is the higher-than-"normal" steal compared to what I've seen on other platforms. My Oz VPS is still a work in progress and my hybrid is idling too much to be commented on.

    Oh, that's probably a misunderstanding. My verdict is general, not specific to the low end segment. Within this segment HostDoc is doubtlessly upper class.

  • AlwaysSkint Member
    edited June 2019

    In terms of Live Chat, my assessment doesn't take account of spend; getting pretty sick of canned responses from others, both past & present. ;-)

  • Thanks for the detailed review @jsg :)

    I can say that before I joined HostDoc as a support team member, my experience mirrors what you said. Consistent and reliable are a great combination, and you can tell the Doc puts in the effort.

    Sometimes there are network issues, as you'll find with most providers; this is largely due to the network being out of the Doc's control.

    I'm glad people are starting to take notice of what the Doc's offering, I discovered them a while ago and was very impressed - Doc works very hard at keeping things running smoothly and trying new products that customers will enjoy.

    He works so hard that it inspired me, even while a paying customer, to offer my services for free and join the team to help with the live chat.

    That's just my 2 cents from a somewhat unique and as-unbiased-as-possible perspective.

  • Thanks, @jsg.
    I approached you because I thoroughly enjoy reading your reviews and have enjoyed reading this one just as much.
    I had figured you'd test them for nearly a month or more, so I was quite surprised to see this thread.

    I'd like to clear something up. The hybrid Dual Storage is not NVMe or SSD. It is an HDD for added storage capacity on the dedicated VPS.

    Regarding tickets, that is unfortunately the downside of our service: a response can at times take a few hours. And, as is evident from a previous thread, started because a client could not wait 19 minutes, some consider even that wait time totally unacceptable; hence, the live chat.
    Live chat will see you responded to in a matter of seconds.

    We have just placed 3rd as most responsive host on LET, so despite the extended ticket response times, we make sure a response is always forthcoming that addresses the request individually and uniquely.

    I appreciate the time you have taken to review our products and write this review.

  • At HostDoc, we're not just customers, but family.

  • who are you?

  • ITLabs Member
    edited June 2019

    @dedicados said:
    who are you?

    LET, we come in peace!

  • jsg Member, Resident Benchmarker

    @HostDoc said:
    I had figured you'd test them for nearly a month or more, so I was quite surprised to see this thread.

    Normally I would have run my tests/benchmarks much longer, but in your case it simply made no sense. Reason: consistency. All three VPSs I tested performed consistently (and well).

    So my not testing your VPSs any longer is to be taken as a compliment!

    I'd like to clear something up. The hybrid Dual Storage is not NVMe or SSD. It is an HDD for added storage capacity on the dedicated VPS.

    Well, in my benchmark I got the impression that it's an NVMe; performance figures of consistently about 1.2 GB/s clearly suggested that. And frankly, I don't even care. What I care about is the performance I get. I'm impressed by what you managed to build and configure there, and that means something, because the disk test part of my benchmark is designed to break through smart tricks like the funny caching one often encounters. Seriously, kudos for that, you did it really well.

    Regarding tickets, that is unfortunately the downside of our service: a response can at times take a few hours. And, as is evident from a previous thread, started because a client could not wait 19 minutes, some consider even that wait time totally unacceptable; hence, the live chat.
    Live chat will see you responded to in a matter of seconds.

    It seems we have a misunderstanding. I'm not complaining, and I did not have the impression that your support is slow. My point was that while your support isn't super fast, it is friendly, helpful, and constructive. That, in my eyes, is more important than being even faster.

    But it's good that you reacted, because I didn't try your chat support. Probably I'm getting old and boring, but I only tried your ticket-based support. No matter, your support is on the positive side anyway.

    Another important point is my (hopefully well-working) sense of reality. After all, this is LET - the low end of the VPS universe - and it would seem strange to me to expect first class service and very fast reaction times. On the other hand, I don't want to base myself on "low end testing and low end expectations". Maybe it's just me, but what I'm actually really interested in is finding the select few products that have low end price tags but actually deliver an above-low-end service and experience, so my reference frame is always the full frame, as opposed to "oh well, not much to expect from a cheap product". Keeping that in mind, my review of your VPSs actually is really positive!

    I appreciate the time you have taken to review our products and write this review.

    Thank you for that, because a lot of time and work does indeed go into my reviews, although most of it isn't visible here. The big part isn't even collecting lots of samples but analyzing them and arriving at an honest, real-world result set (especially for me, because I'm not a spreadsheet guy).

    Btw, that "it's not an NVMe" got me a bit triggered and I'd like to change my benchmark software a bit and try to go even harder on the second disk, if you'd be generous enough to keep my access to the "hybrid" VPS open a bit longer (a week or two maybe).

  • @jsg said:

    Btw, that "it's not an NVMe" got me a bit triggered and I'd like to change my benchmark software a bit and try to go even harder on the second disk, if you'd be generous enough to keep my access to the "hybrid" VPS open a bit longer (a week or two maybe).

    All 3 VPSs are still active and so is your account. You are free to stress it as much as you want. It is a dedicated dual storage VPS, so stress test it 24/7 and run that core into the ground for a month non-stop if you want.
    To clarify, the AZ dedicated Dual Storage has a primary SSD (30GB) and a secondary HDD (60GB - you called this NVMe).
    So yes, there is an SSD in the mix, but only as the primary storage on which your OS runs.

    Regarding the rest of your review, I did not misunderstand nor did I take any of it negatively. Sorry if we have both discombobulated each other :smiley:

    I am pointing out that yes, we do respond in a timely fashion appropriate for the low end market, but we also massively improve on this response time if live chat is used.
    The example used was simply outlining that despite our prices and a 19 minute ticket response time, somebody here (a previous client) seemed to want and expect more than that and proceeded to create a trash thread because a response took 19 minutes.

    Once again, thanks for your time and effort in reviewing our service.

  • jsg Member, Resident Benchmarker

    @HostDoc said:
    All 3 VPSs are still active and so is your account. You are free to stress it as much as you want. It is a dedicated dual storage VPS, so stress test it 24/7 and run that core into the ground for a month non-stop if you want.

    Thanks a lot - also for your appreciation!

    But as I'm almost paranoidally correct with my reviews, I want to clearly state that I will do further tests only with the "Hybrid" VPS. Please kindly delete/rebuild/whatever the two "Cure" VPSs.

    ... 19 minute ticket response time, somebody here (a previous client) seemed to want and expect more than that and proceeded to create a trash thread because a response took 19 minutes.

    Sorry, but complaining about a ticket not being responded to within anything under 1 hour is simply idiotic. In fact, I'd consider even a 2-hour response time really good, and anything up to 6 hours perfectly acceptable.

    From what I see so far you have quite a few (well deserved) fans around here and frankly, you shouldn't even care about an obvious idiot complaining about a 19 min. response time.

    Thanks again for the opportunity to do those tests and to dig a bit deeper into the "Hybrid" VPS as well as for your very constructive reaction to my review!

  • jsg Member, Resident Benchmarker

    -- UPDATE --

    @HostDoc just continues to amaze me (positively).

    As they generously (thanks, HostDoc!) provided their "Hybrid" VPS for a bit longer than I had planned, so as to allow me to test that amazing product (that's not courtesy talk, I mean it) a bit more extensively, I changed my benchmark software and it now knows some new tricks (v. 1.03b).

    Before I present the result I first need to digress a bit and explain some things about my benchmark program, so that the result can be understood properly.

    Vpsbench was written because I missed some things that are important to me, namely:

    • good VPS netizenship. Explanation: we are not alone on a VPS node, and if we run meaningful benchmark tests (with any of the other benchmarks I know of) we basically behave ignorantly and damage the experience of the other users on the tested node.
      So I designed vpsbench in a way that it works in small (short) "slices" and waits a bit (some milliseconds) in between.
      That's also where I made the first change. Until now the waiting time between slices was 20 ms, which is a very reasonable period, more or less reflecting an approach of "I work only during my time slot on the node so as to not disturb the other users".
      Unfortunately, though, sometimes friendliness just doesn't cut it, e.g. with dedicated vCores, but also when the node has aggressive caching in place.

    • Accuracy and precision. On a modern Unix system, especially on VPS nodes, very many things happen in fractions of seconds. So running, for example, dd as a "disk benchmark" is somewhere between unsatisfying and idiotic, as the results mean very little and can change from run to run. At the same time, modern Unices not only have high-resolution timers, they also have very cheap variants which basically just read a processor register. So I built a microsecond-resolution timer (and tested it extensively) and used it in my benchmark. This allows me to use small test slices and still get far more accurate and reliable timings than most benchmarks.

    • and now I also have more flexibility due to some additional functionality, in particular a way to use different test sizes for disk tests instead of only one (16,000 slices/reads/writes of 16 KB each). This adds quite some possibilities, one of which is to see where a cache overflows/collapses, as well as how the OS reacts (e.g. just giving up, or stupidly rewriting again and again...). A simplified sketch of the sliced approach follows right below.
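
    To make that concrete, here is a rough C sketch of the sliced approach (heavily simplified, not the actual vpsbench code; a real tool would also have to fight the page cache harder, e.g. via O_DIRECT):

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define SLICE_SIZE (16 * 1024)   /* 16 KB per slice             */
    #define NUM_SLICES 16000         /* 16,000 slices -> 256 MB     */

    /* Microsecond-resolution timestamp from the monotonic clock. */
    static double now_us(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    int main(void) {
        static char buf[SLICE_SIZE];
        memset(buf, 0xA5, sizeof buf);

        FILE *f = fopen("bench.tmp", "wb");
        if (!f) { perror("fopen"); return 1; }

        struct timespec pause = {0, 20 * 1000000L}; /* 20 ms between slices */
        double busy_us = 0.0;                       /* time spent writing   */

        for (int i = 0; i < NUM_SLICES; i++) {
            double t0 = now_us();
            fwrite(buf, 1, sizeof buf, f);
            busy_us += now_us() - t0;
            nanosleep(&pause, NULL);    /* be a good VPS netizen */
        }
        fflush(f);
        fsync(fileno(f));   /* push the data out of the OS buffers */
        fclose(f);
        remove("bench.tmp");

        double mb = (double)SLICE_SIZE * NUM_SLICES / (1024.0 * 1024.0);
        printf("wrote %.0f MB at %.1f MB/s (busy time only)\n",
               mb, mb / (busy_us / 1e6));
        return 0;
    }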

    And, BANG, I brought the "Hybrid" VPS's second disk to its knees or, more precisely, down to read and write speeds of around 100 MB/s (which, btw, while not great, still isn't really bad).

    And it happens at 256k slices/reads/writes, which equates to 4 GB per test (there are four tests, so a total of 16 GB was written/read). Also, I completely disabled the waiting in between slices, which is relevant because those periods can also serve as recovery (or write-back, or ...) time for a very well set up system.

    Guys, that's hefty! That's far beyond any RAID controller cache. While at it, I can also exclude a controller cache because the performance stays within a very tight frame from 1 B up to 4 GB; with a controller cache there would almost always be a "staircase" pattern.

    So, what's the secret? I guess HostDoc has provided ample (and not slow!) RAM, and either their sysadmin or the OS itself allocated 4 GB of RAM as a cache at the node level (my VPS has only 3 GB anyway, and I tested carefully around 3 GB).
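
    For the curious, a cache cliff like this can be located roughly as follows (again a simplified sketch, not the actual vpsbench; it assumes enough free disk space for the largest test size): keep doubling the amount written and watch where the throughput collapses.

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define SLICE (16 * 1024)

    static double now_us(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
    }

    /* Write `total` bytes in 16 KB slices with no pauses, return MB/s. */
    static double write_rate(long long total) {
        static char buf[SLICE];
        memset(buf, 0x5A, sizeof buf);
        FILE *f = fopen("probe.tmp", "wb");
        if (!f) { perror("fopen"); exit(1); }
        double t0 = now_us();
        for (long long done = 0; done < total; done += SLICE)
            fwrite(buf, 1, SLICE, f);
        fflush(f);
        fsync(fileno(f));   /* include flush-to-disk in the measurement */
        double secs = (now_us() - t0) / 1e6;
        fclose(f);
        remove("probe.tmp");
        return total / (1024.0 * 1024.0) / secs;
    }

    int main(void) {
        /* Doubling sizes: one sharp drop between two sizes brackets a big
           (RAM-level) cache, while a controller cache would typically show
           a staircase. Needs up to 8 GB of free disk space. */
        for (long long mb = 64; mb <= 8192; mb *= 2)
            printf("%5lld MB -> %8.1f MB/s\n",
                   mb, write_rate(mb * 1024LL * 1024LL));
        return 0;
    }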

    TL;DR -> There is some interesting story behind it, but the result is that the 2nd disk basically feels as if it were an NVMe, as long as you don't write files larger than 4 GB.

    Amazing. Very well done, HostDoc! Kudos.

  • @jsg said:
    -- UPDATE -- [the full update is quoted above]

    Thanks jsg

    I was slightly busy when this was posted yesterday.

    100MB/s sounds about right for our spinning rust, and yes, although maybe not the best, it gets the job done as secondary storage for the VPS.

    I am glad you find it performant; that is always the aim here at HostDoc: to deliver performance and stability at budget-friendly prices.

    The secret, you ask? Well, for one, and as you noticed, the RAM is wholly dedicated to your instance and that instance alone. It is never shared or overallocated, and neither is the thread.
    The rest is down to the manner in which the node is set up and prepared.

    I am very happy you are pleased with your extended results and once again, I am very shocked that your testing has already finished.
    At least this time, your attempt brought your VPS to its knees without a single impact on the host node or disturbance to any of your neighbours. Most of whom are either mining ( :grey_question: ) or doing something intensive as their instances have been locked at 100% since purchase without wavering.

    Once again, thanks so much for your insight which I find extremely valuable.
    If it is of interest to you, you can also run your test on our Xeon Gold dual storage, which is extremely close to being full. That would be a very intriguing test to see, especially for the clients who are already on it, as there are a lot.

    As I have not seen a ticket or had a live chat request to pull the VPS down, I am guessing you are still testing and this was maybe an interim observation?

    Regards.

  • @HostDoc said:
    any of your neighbours. Most of whom are either mining ( :grey_question: ) or doing something intensive as their instances have been locked at 100% since purchase without wavering.

    Aren't your KVMs locked to a certain percentage of a real CPU thread to limit neighbour abuse?
    I wish people would learn that this is simply a terrible place to mine their pigeonpoopcoin.

  • @vimalware said:

    @HostDoc said:
    any of your neighbours. Most of whom are either mining ( :grey_question: ) or doing something intensive as their instances have been locked at 100% since purchase without wavering.

    Aren't your KVMs locked to a certain percentage of a real CPU thread to limit neighbour abuse?
    I wish people would learn that this is simply a terrible place to mine their pigeonpoopcoin.

    Yes, the majority of our nodes have CPU limitations, but the particular node being tested hosts dedicated VPSs, meaning that clients are free to use their entire thread and sustain that usage 24/7 without intervention.

  • jsg Member, Resident Benchmarker

    @HostDoc said:
    I am very happy you are pleased with your extended results and once again, I am very shocked that your testing has already finished.
    At least this time, your attempt brought your VPS to its knees without a single impact on the host node nor disturbance to any of your neighbours. Most of whom are either mining ( :grey_question: ) or doing something intensive as their instances have been locked at 100% since purchase without wavering.

    Keep in mind (as I said above) that not disturbing neighbours while still getting solid and reliable results was one of my major goals when designing my benchmark software!
    But as can also be seen, I've extended my software a bit to go really hard if the VPS (dedicated) and the provider allow for it, as in your case.

    Once again, thanks so much for your insight which I find extremely valuable.

    You are very welcome. Due to the quality of your product as well as your positive attitude and interest it was a pleasure to do those tests.

    If it is of interest to you, you can also run your test on our Xeon Gold dual storage, which is extremely close to being full. That would be a very intriguing test to see, especially for the clients who are already on it, as there are a lot.

    You bet! Of course I'll take that opportunity.

    As I have not seen a ticket or had a live chat request to pull the VPS down, I am guessing you are still testing and this was maybe an interim observation?

    Yes, I'm indeed not yet done with testing the "Hybrid" VPS but frankly, my "hot" question is answered and I don't expect any significant fluctuations. If you are OK with it, I'd like to keep and test that "Hybrid" for a couple more days.

    Thank you.

  • Sure, @jsg, I will have it set up for you shortly.
    Please bear in mind the Gold is fair share, so the vCores will not be dedicated.

  • jsg Member, Resident Benchmarker

    --- New benchmark and review ---

    Meanwhile I benchmarked yet another @HostDoc VPS. This time it's a "DSX112 Dual Storage" in Austin, TX (according to the map in the panel) with an Intel Xeon Gold 6128 processor @ 3.40GHz, 4 GB memory, and 2 disks (30 GB and 60 GB).

    CPU - Cores: 2, Family/Model/Stepping: 6/85/4
    Cache: 32K/32K L1d/L1i, 1024K L2, 19M L3
    Std. Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
              pse36 cflsh mmx fxsr sse sse2 ss sse3 pclmulqdq vmx ssse3 fma cx16
              pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline aes xsave osxsave
              avx f16c rdrnd hypervisor
    Ext. Flags: fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx pat
              pse36 rdseed adx smap clflushopt clwb sha pku syscall nx pdpe1gb
              rdtscp lm lahf_lm lzcnt
    
    proc/mem/performance test single core ->  351.55 MB/s
    proc/mem/performance test multi-core  ->  755.36 MB/s
    

    The performance relation (the multi-core result is a bit more than twice the single-core one) suggests that those are either dedicated vCores or vCores on a very well managed node. And that's nice performance; not as good as some other HostDoc systems, but certainly on the good side, although I would have expected more from a Xeon Gold.

    Now, to the disks.

    Drive 1 (in MB/s)

                        Median    Min       Max     Dev/Min (%)  Dev/Max (%)
    Sequential writing  317.26    289.36    372.11     8.79        17.29
    Random writing      236.34    154.64    252.21    34.57         6.71
    Sequential reading  771.51    627.76    887.79    18.63        15.07
    Random reading      762.04    494.03    866.43    35.17        13.70
    

    Seems to be SSD-based or a fast spindle array. The relatively high Dev/Min value for random writing is normal; the high Dev/Min value for random reading, though, is a bit strange.

    Drive 2 (in GB/s)

                        Median   Min    Max    Dev/Min (%)  Dev/Max (%)
    Sequential writing  1.83     1.70   1.95     7.14        6.92
    Random writing      1.77     1.38   1.85    22.22        4.12
    Sequential reading  1.87     1.61   1.93    13.83        3.08
    Random reading      1.77     1.67   1.86     5.65        5.31
    

    Wow! Impressive result, probably an NVMe, and quite stable performance, too. As expected, performance drops when working with 32 KB slices, but only to just under 1 GB/s, and even with a really large slice size of 64 KB it drops to a still very tolerable 450 - 500 MB/s. Keep in mind that 64 KB is 16 times the 4 KB sector size of modern drives, so it's not at all surprising that drives get slower beyond 4 KB (most drives are optimized for 512 B and 4 KB reads/writes).

    What I find quite strange, though, is that the 2nd and bigger disk (the "data/storage drive") is much faster than the primary one (the "OS disk"). Normally, and sensibly, it's the other way around: you get a smaller but faster disk for the OS and a larger but slower one for data storage.

    Finally, to the network.

    AU_MEL: -> 24.7 Mb/s
    IN_CHN: -> 19.6 Mb/s
    SG_SGP: -> 25.1 Mb/s
    DE_FRA: -> 39.6 Mb/s
    IT_MIL: -> 34.2 Mb/s
    FR_PAR: -> 40.1 Mb/s
    RU_MOS: -> 36.5 Mb/s
    BR_SAO: -> 33.0 Mb/s
    US_DAL: -> 430.2 Mb/s
    US_SJC: -> 131.0 Mb/s
    US_WDC: -> 147.4 Mb/s
    JP_TOK: -> 40.3 Mb/s
    RO_BUC: -> 32.6 Mb/s
    GR_UNK: -> 26.9 Mb/s
    NO_OSL: -> 31.6 Mb/s
    

    Those are quite respectable results. The big 3 in Europe are around 40 Mb/s and, maybe even more remarkably, "exotic" targets like Chennai and Tokyo show really acceptable results for a relatively low-priced VPS.

    Summary: very well done, HostDoc, but you might want to have a look at your disk config; a fast OS disk plus a slower storage disk seems more reasonable than the other way round.

    There is still one benchmark/review in the works, but I want to say a big thank you to HostDoc, who didn't simply provide me with access to a test system but almost threw more and more systems at me. What an experience, thank you very much!

    My summary so far: I would still wait to call HostDoc a premium provider because, frankly, LET is a place for low-end providers and not for premium ones. But my experience with HostDoc was really positive and there is very little I dislike. In fact, I wouldn't hesitate to get a HostDoc VPS even for professional/business use - and that's a strong statement coming from me (at LET I expect mediocre to "not bad" systems and, with some luck, acceptable support at best). So the bang-for-the-buck ratio with HostDoc is excellent.

    (P.S. Sorry for the poorly formatted disk test results but I don't have the patience to fight the Vanilla software)

  • @jsg I said it in my initial message inviting you to test our VPSs and I will say it again: I absolutely love reading your reviews!

    Regarding the Gold, it is currently nearing full capacity (or what we call full).
    You had fair share vCores and not dedicated ones.
    All of our dual storage plans will always have either an SSD or NVMe primary drive with an HDD secondary drive; in this case, it was NVMe and HDD.
    Your OS is on the NVMe while the secondary storage is HDD. I will need to check your build but doubt an error was made during the build.

    Thank you very much for your extended review of our Gold, which is by no means an empty node. It is great that it can still feel dedicated with the number of neighbours you currently have there.

  • jsg Member, Resident Benchmarker
    edited August 2019

    @HostDoc said:
    @jsg I said it in my initial message inviting you to test our VPSs and I will say it again: I absolutely love reading your reviews!

    Regarding the Gold, it is currently nearing full capacity (or what we call full).
    You had fair share vCores and not dedicated ones.
    All of our dual storage plans will always have either an SSD or NVMe primary drive with an HDD secondary drive; in this case, it was NVMe and HDD.
    Your OS is on the NVMe while the secondary storage is HDD. I will need to check your build but doubt an error was made during the build.

    Thank you very much for your extended review of our Gold, which is by no means an empty node. It is great that it can still feel dedicated with the number of neighbours you currently have there.

    First, thanks for your kind words. It feels good to see the work I invested, both in designing and implementing what I consider useful benchmark software and in writing my reviews (behind which is quite a bit of work), being appreciated.
    Frankly, I was wondering a bit why on earth my work got next to no attention while some small sh_t got lots of attention, so it's really nice to finally see some interest and reaction here.

    As for that node: yes, it feels at least quite close to one with dedicated vCores, and now that I've been informed that it's actually quite "full" I'm even more impressed.

    Re the disks, however, it seems you really got it mixed up. My primary disk was considerably slower than the secondary and larger one. Probably a technician's error; things like that can happen. What I find more important anyway is how fast and well you correct errors. I experienced that on another test system that was officially dual-disk but actually had just one: within about half an hour of informing your support via a ticket, I had the second disk attached. I mention that because I think many of us are too focused on benchmark numbers only. Experience, however, has taught me that good support and certain details (like a properly working remote console in the panel) can make the difference between a poor and a good experience.

    Thanks again for appreciating my work; the next benchmark and review will be online soon ...

  • @jsg said:
    First, thanks for your kind words. It feels good to see the work I invested, both in designing and implementing what I consider useful benchmark software and in writing my reviews (behind which is quite a bit of work), being appreciated.

    Thanks for the time you put into this. Your reviews are always detailed and very well written.

  • @jsg I have checked your build and can confirm with 100% certainty that your VPS has been configured with NVMe as the primary disk.
    The HDD performance on that node has always been above par for an HDD and still is, but it is not faster than the NVMe or anywhere close. Are you sure you did not mix up the disk results?

    Our main niche has always been support. It is what we are good at. It may not always be swift, but it will be efficient.

    Regarding your work, it has been noticed for sure, and by more than just me, no doubt.

  • jsg Member, Resident Benchmarker

    @HostDoc

    I checked again and I still find a 30 GB spindle (or SSD) as the primary drive and a 60 GB NVMe as the secondary drive. No big deal, but I found it strange and noteworthy.

  • 30GB
    60GB???

    The Gold has
    40GB
    80GB

    So, I checked, and the only config with a 30GB SSD and a 60GB HDD is the Arizona node with hostname "Testing-dual-storage".
    If this is indeed the node you have been testing then it is not the Gold but rather the E5-2680v2, which is in fact fully dedicated.
    The Gold's hostname is "Gold-Testing", for clarification.
    But your build still shows SSD as your primary drive and HDD (60GB) as the secondary.

    Please double-check your hostnames and feel free to PM me to confirm the nodes they are on, because this could be quite confusing if you are calling this a review of the Gold when it is in fact our Arizona E5.

  • jsg Member, Resident Benchmarker

    @HostDoc

    No need to worry. It's possible that I mixed up something with the disks, but it's not possible that I reviewed the wrong system, because my software provides the data, and it clearly says "Xeon Gold 6128 CPU @ 3.40GHz". I apologize for any confusion I might have created.

  • I recently grabbed one from HostDoc for my Nextcloud. To be honest, it's the best speed and stability I have received from a server, compared to others like Hetzner's 32G i7. Their support is also great, and I'm really pleased and so happy to have purchased servers from them. I also have their DE and SG NVMe servers, which I use for VPN.
