Comments
The NVMe unfortunately shows issues at 64k and 256k block sizes. The IOPS drop off way too fast. Other solid providers I've seen maintain NVMe IOPS around 20-30k at 64k and 7-10k at 256k. Definitely abnormal.
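For anyone wondering what those figures mean in practice, here's a quick back-of-the-envelope conversion from IOPS to throughput. The fio invocation in the comment is just an assumption about how such a test is typically run, not the actual benchmark script used in this thread:

```shell
# To reproduce a benchmark like this, fio is the usual tool, e.g.:
#   fio --name=randread-64k --filename=/tmp/fio.test --size=256M \
#       --rw=randread --bs=64k --direct=1 --iodepth=32 --runtime=30 \
#       --time_based --group_reporting
# (flags above are illustrative, not the tester's exact settings)

# Implied throughput from the "healthy" numbers quoted above:
iops_64k=20000    # low end of the 20-30k range at 64k
iops_256k=7000    # low end of the 7-10k range at 256k
echo "$(( iops_64k * 64 / 1024 )) MB/s at 64k"     # 1250 MB/s
echo "$(( iops_256k * 256 / 1024 )) MB/s at 256k"  # 1750 MB/s
```

Note that even the lower 256k IOPS figure still implies more raw bandwidth than the 64k one, so what falls off under load is small-block parallel I/O, not sequential throughput.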
The SSD is normal, on par with a top-grade consumer SSD like my own Samsung 850 Evo.
Hmm, I wonder what could be causing that? In all the testing and usage I've done, I haven't once felt like there were any performance issues. Booting is quick, and apt updates/installs all fly. I don't try to hammer the disks, though, so perhaps this wouldn't suit some workloads, but for what I use it for it's insanely perfect.
I must qualify that benches are benches, but there are some situations where it may be a concern. For example, if you have a busy website that does a lot of reading and writing to a database per second, these numbers could impact your site's performance negatively. For personal use, you probably won't notice a thing.
It could also be that the numbers reflect a full node that is being heavily used, and in that case, the numbers are probably acceptable. However, in such a case, I worry about the NVMe drive wearing out much more quickly than expected because there are obviously some people hammering the disks. I just hope people are not doing, ahem, Linux ISOs on the disks.
I really hope nobody's crazy enough to be torrenting on these, it's almost certainly not going to end well if that's happening lol
You're right though, when the drive was stress tested it buckled under pressure, but seems fine otherwise. I don't notice anything wrong as such, everything is fast and the problems mentioned by Daniel, specifically the apt updates taking forever, just don't exist for me. Maybe if I really hammer the drive I could find a problem, but if I'm hammering the drive, I feel like I'm the problem. Same goes for CPU, network, etc imo :-)
It's kinda like trying to redline my car whenever I drive, something is bound to shake loose.
Heh.. I have repeatedly stress tested a couple of other servers and they were fine. Not to say that Flow is bad, because I think the disk performance is actually very good for most real-world cases, but there are better choices for NVMe performance if that is a mission-critical factor. Plus, there are a ton of other factors to consider, e.g. support, location, etc., which Flow passes with flying colours.
Well put sir, you certainly are the man when it comes to the certainty of statistical probabilities :-) Your research methodology is something I would never feel qualified to question! :-)
Channeling my inner @uptime..
Maybe sir @trewq of qwert can share a few words on the NVMe situation. It's not impossible that he's aware of this, or that it's some known limitation (or unknown, if that's how the potassium leans) that could explain, or at least contribute to, the pondering of some potassium enthusiasts gathered in their place of worship. Margarine hat.
I did extensive testing with the NVMe drives before putting them in production and compared them to other providers in the space. Unfortunately I couldn't match the results from other providers while still keeping a level of redundancy I'm satisfied with, but I think the current levels are more than acceptable for a production system.
The issue @Daniel15 is talking about was due to pushing the E3 nodes too hard, and it took me longer than I'd like to admit to find it. As is sometimes the case, bugs that don't show up in testing do in production. It's mostly sorted now, and we're moving to higher-density nodes so issues like this won't happen again.
Thank you everyone who's enjoying this offer, means a lot!
Thanks for the explanation and thanks again for the offer, it's purring along beautifully for me and it's far from idling lol :-)
In my case I have a site that can spike to a few hundred requests per second during peak days of the year. The backend handles it fine (I've load tested to ~4k RPS on one server), but DB reads (in cases where the data isn't cached yet) and writing access/error logs to disk can slow things down if disk I/O is slow. NVMe drives help a lot there.
Anyways, it sounds like it's not an issue any more (thanks for confirming @trewq), which is great! I might try FlowVPS again once my current Australian VPS plan expires.
Just noticed us lucky preorderers got a month bonus as per the email.
You beauty.
That's because @trewq is prem.
Would love the guide
Which OS are you using and are you using google drive?
Ubuntu and Yes I'm using Google Drive
Awesome. Have you got Plex installed yet?
Yes I have, do you maybe want to continue it in PM?
Install rclone https://rclone.org/drive/ and connect it to your gdrive, then add that as a library in Plex.
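For anyone following along without a PM, a minimal sketch of those steps under my own assumptions (the remote name `gdrive` and the mount point are placeholders, not the poster's actual config):

```shell
# Install rclone (official install script from rclone.org)
curl -s https://rclone.org/install.sh | sudo bash

# Interactively create a Google Drive remote; name it e.g. "gdrive"
rclone config

# Mount the remote so Plex can see it as a normal directory
mkdir -p ~/media
rclone mount gdrive: ~/media --daemon --vfs-cache-mode full

# Then add ~/media as a library in the Plex web UI
```

`--vfs-cache-mode full` makes seeking within files behave sanely for streaming, at the cost of some local cache disk usage.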
PM me if you prefer