Is anyone on Hosthatch's London NVMe nodes experiencing problems?
I have 2 of the smaller NVMe plans (the 1 GB and 2 GB RAM ones, after the free RAM doubling) on Hosthatch's London nodes, and today they have both become very slow even though they are mostly idle.
Is anyone else with that type of node in London experiencing problems?
These are the specs I'm referring to:
https://www.lowendtalk.com/discussion/160118/london-chicago-offers-nvme-and-storage/p1
NVMe plans:

1 CPU core (15% dedicated)
512 MB RAM
8 GB NVMe disk (RAID-10)
1 TB bandwidth
$15 per year
Free upgrade with double RAM, double disk, and 5x bandwidth
Free 40 Gbps DDoS protection available on request

1 CPU core (30% dedicated)
1 GB RAM
10 GB NVMe disk (RAID-10)
1 TB bandwidth
$30 per year
Free upgrade with double RAM, double disk, and 5x bandwidth
Free 40 Gbps DDoS protection available on request
Comments
I assume you have opened a ticket and have not gotten a timely response before using LET as a helpdesk?
Basically I prefer to run my own tests first to ensure it is not something in my configuration, before I open a ticket.
The thing is, I have 2 VMs having problems, each configured differently, and when I see the same problem on both, I assume it must be on the host node.
Then I boot a recovery disk to ensure that the problem persists and when it does I know it is not due to stuff I've installed.
It is also 4 am here, and I am not ready for back-and-forths with technical support at this hour. It may not be so late if you are in the US or well east of GMT, but I have already spent over 2 hours taking backups and running tests before coming here to check whether anyone else is having a problem.
If later in the morning the problem is still present and is not due to abusers on the nodes, I will put in a ticket.
I have HH nodes at other locations and they are all fine. These two are hanging on the Geekbench disk test.
I just ran 5 dd tests: the first 3 averaged about 20 MB/s, the 4th went back up to 128 MB/s, and the 5th dropped to 12 MB/s. These dd runs were done from the recovery disk, not the normal installation. I have to go to bed now, but I will put in a ticket so you can check which host it is and see if there is some fault there.
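For anyone wanting to reproduce this, a typical sequential-write dd benchmark of the kind described above looks like the sketch below. The file name and sizes are arbitrary choices, not taken from the thread; `conv=fdatasync` makes dd flush to disk before reporting, so the number reflects the disk rather than the page cache.

```shell
# Write 1 GiB of zeros and report throughput (MB/s) when done.
# conv=fdatasync forces a sync before dd prints its summary,
# so cached-in-RAM writes do not inflate the result.
dd if=/dev/zero of=ddtest.img bs=1M count=1024 conv=fdatasync

# Remove the test file afterwards to free the space.
rm -f ddtest.img
```

Running this a few times in a row, as the poster did, helps distinguish a consistently slow disk from intermittent contention on a shared host node.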
If in the morning the problem is still present, we can carry on.
Did you just reboot one of your host nodes? They've gone offline.
Please open a ticket if you need info. This is not our helpdesk.
Seeing as you rebooted the node, you must have noticed the problem.
It is morning now and everything seems to be fine, except the noVNC console.
You could check that as well.
I think it actually worked!
I think in this case it did; I also received an email about the matter. Another reboot and some more work are required in the coming days. So thanks to both @rchurch for flagging and @Abdullah for taking action.
edit:
From Abdullah's message below: I do not mean to imply that the OP's message triggered the corrective action.
That said, I think a ticket followed by helpdesk would be my personal choice... :-)
I think the intention of this thread was misunderstood... from what I see @rchurch did not come here to complain nor seek support.
Seems more likely he just wanted to make sure that he is not the only one with problems before he even involves support and uses their resources (or tries to fix his unmanaged service himself).
I consider this very reasonable ;-)
Exactly as Falzo says.
No, sorry, but we did not start fixing this after this forum post. This unfortunate assumption is the reason I chose to simply write ‘you should open a ticket if you are seeing a problem’ earlier, because that would be the right way of doing this.
We detected an issue last night and have been working on it since, migrating everyone away from a faulty node in small batches to keep downtime and the performance impact per VM to a minimum. We did not start doing this just now.
No node was rebooted. If your particular VM was rebooted, you would have received an email from us. Anyone who opened a ticket inquiring about this was responded to within minutes.
I do not know who the OP is in our system, so I can’t fix any specific issues that he may have unless a ticket is opened by him.
@Abdullah I still thank you and the folks at Hosthatch for the prompt action, but I am surprised that you chose to single out my post and get defensive when my post actually complimented you. Not the best way to treat a customer who gives you a pat on the back.
Nowhere did I mention that you acted as a result of this forum post by the OP, but I have updated my post with the clarification. My first line ("I think in this case it did") refers to the following conversation:
which could be in response to
Confirming that your email did not mention anything about rebooting the node, only the VMs.
Below is the email I received at 10:49 AM my time (GMT +5.5).
I think people saying "it worked" meant that this forum post was the reason we got around to resolving this, which was not the case. Very sorry for misunderstanding your words.
If you were aware of problems on the node earlier that were likely to affect users, then you should have sent a notice earlier; that would have saved me the time spent diagnosing the problem.
My problem with technical support, and I have noticed this with many providers, is that tickets are rarely handled by knowledgeable technicians the first time around, and you waste time arguing with people who know less than you. That is why I make sure the problem is not on my end before contacting tech support.
The time I'm going to spend on back and forths is better spent by verifying that the problem is not in my configuration, then when I'm sure I put in a ticket, and even then they will still ask you to go through some steps.
You may not be that kind of provider, and TBH I haven't had any problems with your service, other than once or twice when some servers failed to boot, and those were promptly resolved.
The simple fact is our actions are coloured by our experiences.