HostDoc AU down and out

corbpie Member
edited January 2019 in Outages

Anyone else had their HostDoc AU VPS drop out? They are meant to be doing work on their Germany range, but not AU.

It has been a troubled start for a provider claiming 99.9% uptime and reliable VPS hosting.

Ever since getting services with HostDoc a couple of months back, I have seen more downtime and "node upgrades" than other hosts have had over 4 years. Growing pains, or a poor provider?

Thanked by blackbird

Comments

  • FAT32 Administrator, Deal Compiler Extraordinaire

    Oh crap, the server status page is down too:
    https://status.hostdoc.co.uk/

  • deank Member, Troll
    edited January 2019

    I am very tempted to yell "the end is nigh".

    But it's probably nothing. If OP was having big PMS, I would have yelled "THE END IS NIGH" though.

  • Now they tell me I was using 300% CPU; I had PHP and MySQL running. Hmmm, no warnings or graphs.

    Thanked by blackbird, pluush
  • FAT32 Administrator, Deal Compiler Extraordinaire

    Seems like the status page was fixed, thanks @HostDoc!

    My Singapore VPS is running fine; perhaps @corbpie should drop them a ticket or live chat before posting a thread on LET.

    Thanked by HostDoc
  • andrew1995 Member
    edited January 2019

    Their SG nodes are having problems too.

  • Yeah, I have been in contact with them, but the whole thing is very, very strange.

  • deank Member, Troll

    All can be explained by PMS. It happens once in a while, be it creatures or machines. No one can avoid it.

    Thanked by pluush
  • eol Member
    edited January 2019

    https://status.hostdoc.co.uk/
    100% straight.

    EDIT2:
    502 Bad Gateway now.

  • SplitIce Member, Host Rep

    Status page is up with no outage reported, hmm.

    Possibly getting DDoS'ed? I see status.hostdock.co.uk is going through Voxility now.

  • andrew1995 said: Their SG nodes are having problems too.

    What problems? Mine is just fine.

    Thanked by 1HostDoc
  • @deank said:
    All can be explained by PMS. It happens once in a while, be it creatures or machines. No one can avoid it.

    I installed PMS on HostDoc Singapore. I was scanning libraries from Google Drive for a few days, movies and TV series, and suddenly used up all my bandwidth allocation. I haven't even started watching anything yet.

    Wait, we are talking about the same PMS, right?

  • I've had two instances where the AU server became unresponsive or very slow, but it lasted no longer than half an hour each time.

    Even during those two incidents the VPSs (I currently have 3 on the same node) all responded to ping.

    Definitely give live chat a try though; they're highly responsive and will definitely be upfront with you without any bs. That's my experience anyway.

    Can't comment on other nodes as I've only deployed on the AU node; the deal I got for a resource pool was too good to pass up :) I use other providers for other locations. HostDoc is home to my AU stuff (I live in Australia, so most of my overall services are here).

    As far as I know they have a single node in Australia but two in SGP.

    Thanked by eol, HostDoc
  • Just seen this thread.

    No, AU is not down and has never actually gone down since inception. @corbpie you were very quick to run here when we answered your ticket within 10 minutes of you opening it.

    Yes, we killed your instance, and for good reason. You are in a shared environment where users expect fluidity in their use of the service. We had been monitoring your usage, and though we allow spikes in usage, as explained to you in the ticket, we do not allow outright abuse, and a single instance using 300% CPU with 3 vcores crosses that line by quite a margin in my books.

    We did not kill your instance until it caused the load of the host node to shoot from a mid 2 to a high 10. At that point your instance was killed: no notification, no warning.

    We had watched your usage bouncing for some time and hoped you would lower it without intervention; that never happened, and our hands were forced.

    Whatever downtime you have seen would have been caused by yourself and not us; if it were our fault, we would announce it.

    I am quite intrigued by how quickly you ran here following a ticket that, by its timestamps, was open for about 6 minutes, yet you only found bad points about our service.
    How about you unethically obtaining a purchase link from a fellow user and attempting to purchase their special plan? We contacted you, explained the issue and immediately reimbursed your funds, which were used for future payments.
    How about mentioning that you have never opened a ticket that was not responded to in less than 20 minutes?
    How about the fact that, unless you are causing it yourself, you have never experienced a problem with our servers?

    I am very happy to have you and delighted to provide a service I doubt you can replicate elsewhere in our price bracket, but have we been so bad that your first move was to run here and attempt to tarnish our name for your own actions?

    You opened a ticket. Maybe next time, await the response before running here for no valid reason other than something you caused yourself.

  • @yokowasis said:

    @deank said:
    All can be explained by PMS. It happens once in a while, be it creatures or machines. No one can avoid it.

    I installed PMS on HostDoc Singapore. I was scanning libraries from Google Drive for a few days, movies and TV series, and suddenly used up all my bandwidth allocation. I haven't even started watching anything yet.

    Wait, we are talking about the same PMS, right?

    Would you like some extra BW for free?
    We do mention in our terms that, should you need it when BW is running low, you can contact support for a free additional 50GB boost.
    Try it out some time. :wink:

    Thanked by vimalware
  • jackb Member, Host Rep
    edited January 2019

    before running here for no valid reason other than something you caused yourself.

    I think that's a little on the harsh side. It sounds like you didn't attempt any other host-level mitigations (e.g. renice, cgroup throttling, or, as mentioned below by @eol, cpulimit; or rebooting the VM) before just killing the VM. And you did that without any communication.

    My view is you should take this as an experience to learn from rather than just blaming your customer.
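
    To illustrate the cgroup route, here is a minimal sketch in Python, assuming a cgroup v2 host with the cpu controller enabled. The PID and group name are made up, and I'm not suggesting this is what any particular host runs:

    #!/usr/bin/env python3
    """Sketch: cap a runaway VM's CPU with cgroup v2 instead of killing it."""
    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")  # cgroup v2 unified hierarchy

    def throttle_pid(pid: int, group: str, cores: float) -> None:
        """Move `pid` into a dedicated cgroup limited to `cores` CPUs."""
        cg = CGROUP_ROOT / group
        cg.mkdir(exist_ok=True)            # needs root on the host node
        period_us = 100_000                # standard 100 ms scheduling period
        quota_us = int(cores * period_us)  # e.g. 1.5 cores -> 150000
        (cg / "cpu.max").write_text(f"{quota_us} {period_us}\n")
        (cg / "cgroup.procs").write_text(f"{pid}\n")  # move the process in

    # Hypothetical example: cap an abusive guest's QEMU process (PID 12345)
    # to 1.5 cores rather than destroying the instance outright.
    # throttle_pid(12345, "throttled-vm", cores=1.5)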

  • eol Member

    @jackb said:

    before running here for no valid reason other than something you caused yourself.

    I think that's a little on the harsh side. It sounds like you didn't attempt any other host-level mitigations (e.g. renice, cgroup throttling, cpulimit) before just killing the VM. And you did that without any communication.

    My view is you should take this as an experience to learn from rather than just blaming your customer.

    Thanked by pluush
  • @jackb said:

    before running here for no valid reason other than something you caused yourself.

    I think that's a little on the harsh side. It sounds like you didn't attempt any other host-level mitigations (e.g. renice, cgroup throttling) before just killing the VM. And you did that without any communication.

    My view is you should take this as an experience to learn from rather than just blaming your customer.

    Maybe a little harsh, but my point is that this could have been handled at our help desk rather than brought here.

    He had not even allowed us to respond before rushing here, which in itself was a bit harsh.

    I think different hosts provide their services in a manner that fits them best. As a budget host, we have a fair number of clients on our nodes, so when constant abuse is detected, the kill button is pushed to prevent the abuse from affecting other clients.

    Thanked by coreflux
  • jackb Member, Host Rep
    edited January 2019

    @HostDoc said:
    I think different hosts provide their services in a manner that fits them best. As a budget host, we have a fair number of clients on our nodes, so when constant abuse is detected, the kill button is pushed to prevent the abuse from affecting other clients.

    Ok, but at least tell your clients when you do that then. Two clicks to send an email would have prevented this thread.

    Thanked by uptime
  • @jackb said:

    @HostDoc said:
    I think different hosts provide their services in a manner that fits them best. As a budget host, we have a fair number of clients on our nodes, so when constant abuse is detected, the kill button is pushed to prevent the abuse from affecting other clients.

    Ok, but at least tell your clients when you do that then. Two clicks to send an email would have prevented this thread.

    Very true. We will improve this for sure.

    Said point would also be more valid if he had opened this thread prior to opening a ticket that was responded to immediately.

    Patience is a virtue.

  • @HostDoc

    Let's go through it, shall we?

    when we answered your ticket within 10 minutes of you opening it

    No, you didn't.

    Yes, we killed your instance, and for good reason. You are in a shared environment where users expect fluidity in their use of the service. We had been monitoring your usage, and though we allow spikes in usage, as explained to you in the ticket, we do not allow outright abuse, and a single instance using 300% CPU with 3 vcores crosses that line by quite a margin in my books.

    As I told you in the ticket, I use the VPS for PHP and MySQL. I got no warnings of mass CPU use, and since my server's return I have not seen anything close to 300% usage. If it did in fact spike to 300% as you claim (with no graphs), then why no warning for me to check for a rogue service?

    If you manually shut it down, why did it take so long to reply? Why didn't you send me a service-suspended email?

    Also, you dropped my other AU VPS too, and it could not be started either.

    VPS in question: [screenshot]

    Other VPS: [screenshot]
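
    As an aside, a quick check like this inside the guest will show whether anything is actually chewing CPU (Python with psutil; a generic sketch, nothing specific to HostDoc):

    #!/usr/bin/env python3
    """Quick guest-side check: which processes are eating CPU right now?"""
    import time
    import psutil

    procs = list(psutil.process_iter(["name"]))
    for p in procs:
        try:
            p.cpu_percent(None)            # prime the per-process counters
        except psutil.Error:
            pass

    time.sleep(5)                          # sample window

    usage = []
    for p in procs:
        try:
            usage.append((p.cpu_percent(None), p.pid, p.info["name"]))
        except psutil.Error:               # process exited during the sample
            pass

    # Top 10 consumers; anything over 100% is using more than one core.
    for cpu, pid, name in sorted(usage, reverse=True)[:10]:
        print(f"{cpu:6.1f}%  pid={pid:<7}  {name}")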

  • @corbpie said:
    @HostDoc

    Let's go through it, shall we?

    when we answered your ticket within 10 minutes of you opening it

    No, you didn't.

    Apologies, responded to within 30 minutes then.

    Yes, we killed your instance, and for good reason. You are in a shared environment where users expect fluidity in their use of the service. We had been monitoring your usage, and though we allow spikes in usage, as explained to you in the ticket, we do not allow outright abuse, and a single instance using 300% CPU with 3 vcores crosses that line by quite a margin in my books.

    As I told you in the ticket, I use the VPS for PHP and MySQL. I got no warnings of mass CPU use, and since my server's return I have not seen anything close to 300% usage. If it did in fact spike to 300% as you claim (with no graphs), then why no warning for me to check for a rogue service?

    As mentioned to @jackb, this is something we can definitely look into improving, but I can assure you your instance was hogging resources. We have never had reason to kill your instance before.

    If you manually shut it down, why did it take so long to reply? Why didn't you send me a service-suspended email?

    Because your service was not suspended; we just killed it.
    You used an ISO, which stays live on our server for booting for 4 hours, after which you have to load it again. At the time your instance was killed, your ISO had already been deleted, so when you tried to restart, it could not be found.
    This too was mentioned in the ticket, and you managed to boot.

    Also, you dropped my other AU VPS too, and it could not be started either.

    VPS in question: [screenshot]

    Other VPS: [screenshot]

    We have noticed that instance has been off for a while, but no intervention from us caused it.
    You never opened a ticket asking us to look into a possible issue with it, so it was assumed you simply switched it off.

    You only have 3 VPS with us, and 2 are up; the one that is down is the one you have never contacted us about. We are here to help. If you do not inform us, how can we help?

    Thanked by Daniel15, kkrajk
  • If OP is on the same resource pool plan I'm on, then I can definitely understand why HostDoc doesn't take any bs. The VMs are super fast, and you can tell that each VM isn't throttled in any way. I also know that HostDoc doesn't bs around; if he detects abuse, it gets shut down, not reniced. It forces the abusing VM's owner to come forward and try to resolve the issue. If one of my VMs ran away, I'd be thankful for the kill, tbh. I guess we all have different expectations and experience. In my case I'm happy.

    Thanked by HostDoc
  • I have no issue with a server being shut down for CPU abuse (intentional abuse or not).

    But you handled it badly.

    So you can manually shut down my server for CPU abuse, but give me no notice of that fact?

    Manually shut it down, but can't send a quick email or reply to the ticket within a matter of minutes? Had you done that, this thread wouldn't exist.

    I was within my rights to create a thread here because my VPS was shut down and I got no warning or notice of this shutdown, before or after.

    I don't have anything against you @HostDoc, but see it from my side: shutting down a customer's VPS without an explanation isn't good.

    Especially when what I am using it for should not see 300% use, and doesn't.

    Thanked by blackbird
  • @corbpie said:
    I have no issue with a server being shut down for CPU abuse (intentional abuse or not).

    But you handled it badly.

    So you can manually shut down my server for CPU abuse, but give me no notice of that fact?

    Manually shut it down, but can't send a quick email or reply to the ticket within a matter of minutes? Had you done that, this thread wouldn't exist.

    I was within my rights to create a thread here because my VPS was shut down and I got no warning or notice of this shutdown, before or after.

    I don't have anything against you @HostDoc, but see it from my side: shutting down a customer's VPS without an explanation isn't good.

    Especially when what I am using it for should not see 300% use, and doesn't.

    Move past the "what it should, shouldn't do" part. The fact is they are protecting their node from abuse to keep other paying customers happy. Killing the VM and making you contact the host is the perfect way to make sure you're not deliberately being abusive.

    Opening a thread right after having the issue sorted by ticket, or better still their live chat, is a dick move. Gotta be honest there, mate.

    Thanked by HostDoc
  • dahartigan said: Opening a thread right after having the issue sorted by ticket, or better still their live chat, is a dick move. Gotta be honest there, mate.

    Please do yourself a favour and read my replies; you're looking silly.

    It took them 37 minutes to respond to my ticket; this thread was opened 19 minutes after I sent the ticket.

    Thanked by blackbird
  • jackb Member, Host Rep
    edited January 2019

    @dahartigan said:
    Killing the VM and making you contact the host is the perfect way to make sure you're not deliberately being abusive.

    Killing the VM with no notification is the perfect way to antagonise customers.

    As HostDoc has agreed, they should have made contact, and this is something they will improve. Killing VMs without notifying is not a way of handling the issue that you should be suggesting is good.

    Thanked by corbpie, pluush
  • AnthonySmith Member, Patron Provider
    edited January 2019

    jackb said: I think that's a little on the harsh side. It sounds like you didn't attempt any other host-level mitigations (e.g. renice, cgroup throttling, or, as mentioned below by @eol, cpulimit; or rebooting the VM) before just killing the VM. And you did that without any communication.

    My view is you should take this as an experience to learn from rather than just blaming your customer.

    That is not really a practical option at scale, as well you should know.

    If it is a self-managed service, it is a self-monitored one as well; if you can't manage your own service, don't be surprised when intervention is quick and decisive.

    On an occasion when it is not impacting the quality of experience for others, sure, ping a warning over; when it is impacting other customers, then no, I believe triggering a clean shutdown is the right move. It is not a managed service, after all.

    Notification after? Yes, that would be fair to expect in most cases.

    I have written my own monitoring and action scripts that will auto-kill instances that exceed the FUP limits by 500%; that is the only time any shutdown occurs without notification.

    Will that script lose customers that constantly abuse the bejesus out of the service? Yes. Is that a problem? Not for me, no; it is a bonus.

    Manually fucking around with cgroups and CPU limits etc. is a ridiculous solution when you are dealing with tens of thousands of instances; it is just not manageable at scale, and you introduce human error over time.
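
    The idea, reduced to a sketch (not the actual script; it assumes libvirt/KVM guests and psutil on the host, and the names and thresholds are illustrative):

    #!/usr/bin/env python3
    """Sketch: auto-shutdown guests that sustain usage far beyond the FUP."""
    import subprocess
    import time

    import psutil

    FUP_CORES = 1.0      # cores the plan fairly allows (hypothetical figure)
    KILL_FACTOR = 5.0    # only act at 500% of the fair-use limit
    SAMPLE_SECONDS = 60  # sustained abuse, not a momentary spike

    def monitor(qemu_pids: dict[str, int]) -> None:
        """qemu_pids maps libvirt domain name -> QEMU process PID."""
        procs = {dom: psutil.Process(pid) for dom, pid in qemu_pids.items()}
        for p in procs.values():
            p.cpu_percent(None)  # prime the counters
        time.sleep(SAMPLE_SECONDS)
        for dom, p in procs.items():
            cores_used = p.cpu_percent(None) / 100.0
            if cores_used > FUP_CORES * KILL_FACTOR:
                # clean ACPI shutdown rather than a hard destroy
                subprocess.run(["virsh", "shutdown", dom], check=False)
                print(f"{dom}: {cores_used:.1f} cores sustained, shutdown sent")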

  • dahartigan Member
    edited January 2019

    @AnthonySmith said:

    jackb said: I think that's a little on the harsh side. It sounds like you didn't attempt any other host-level mitigations (e.g. renice, cgroup throttling, or, as mentioned below by @eol, cpulimit; or rebooting the VM) before just killing the VM. And you did that without any communication.

    My view is you should take this as an experience to learn from rather than just blaming your customer.

    That is not really a practical option at scale, as well you should know.

    If it is a self-managed service, it is a self-monitored one as well; if you can't manage your own service, don't be surprised when intervention is quick and decisive.

    On an occasion when it is not impacting the quality of experience for others, sure, ping a warning over; when it is impacting other customers, then no, I believe triggering a clean shutdown is the right move. It is not a managed service, after all.

    Notification after? Yes, that would be fair to expect in most cases.

    I have written my own monitoring and action scripts that will auto-kill instances that exceed the FUP limits by 500%; that is the only time any shutdown occurs without notification.

    Will that script lose customers that constantly abuse the bejesus out of the service? Yes. Is that a problem? Not for me, no; it is a bonus.

    Manually fucking around with cgroups and CPU limits etc. is a ridiculous solution when you are dealing with tens of thousands of instances; it is just not manageable at scale, and you introduce human error over time.

    Another no-bs approach to dealing with abusers.

    @jackb said:

    @dahartigan said:
    Killing the VM and making you contact the host is the perfect way to make sure you're not deliberately being abusive.

    Killing the VM with no notification is the perfect way to antagonise customers.

    As HostDoc has agreed, they should have made contact, and this is something they will improve. Killing VMs without notifying is not a way of handling the issue that you should be suggesting is good.

    Honestly, if a VM runs away and takes down a node, and you allow that, you're going to piss off a lot more than just 1 customer.

  • @corbpie said:
    I have no issue with a server being shut down for CPU abuse (intentional abuse or not).

    But you handled it badly.

    So you can manually shut down my server for CPU abuse, but give me no notice of that fact?

    Manually shut it down, but can't send a quick email or reply to the ticket within a matter of minutes? Had you done that, this thread wouldn't exist.

    I was within my rights to create a thread here because my VPS was shut down and I got no warning or notice of this shutdown, before or after.

    I don't have anything against you @HostDoc, but see it from my side: shutting down a customer's VPS without an explanation isn't good.

    Especially when what I am using it for should not see 300% use, and doesn't.

    I agree there, @corbpie, and we will try to improve that aspect of our service by informing the client promptly. I take that on board.

    You always have the right to open a thread whenever you want, I agree, but had you opened it before the ticket, it would make more sense to me.
    Opening it after opening a ticket has a taste of malice to it.

    Your instance booted and spiked to 170% CPU usage before finally settling, and it has stayed settled since.
    We do not target or try to antagonise anyone deliberately; we are simply looking after our environment with swift, immediate action. I promise you that, from today, a notice will be sent within 10 minutes of killing an instance.

  • jackb Member, Host Rep
    edited January 2019

    @dahartigan said:
    Honestly, if a VM runs away and takes down a node, and you allow that, you're going to piss off a lot more than just 1 customer.

    That isn't really how it works. One VM running away shouldn't take down a node; some customers might see some steal time, depending on current utilisation, node spec, etc. Anyway, my main point is that zero contact when killing a VM just serves to piss people off. I can understand some not wanting to tamper with cgroups as, yes, it can be a bit fiddly, and if you get a lot of abusers it doesn't scale very well; but there are other options (reboot with notification, shutdown with notification) that don't leave the customer wondering what happened.

    If people want to default to shutting down problematic VMs, fair enough, but they need to at least send a notification to the impacted customer. HostDoc has now committed to doing this, which is a move in the right direction.
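
    The notification half costs almost nothing to automate; something along these lines would do, where every address and hostname is a made-up placeholder:

    #!/usr/bin/env python3
    """Sketch: pair any forced shutdown with an automatic customer email."""
    import smtplib
    from email.message import EmailMessage

    def notify_shutdown(customer_email: str, domain: str, reason: str) -> None:
        msg = EmailMessage()
        msg["From"] = "noc@example-host.test"
        msg["To"] = customer_email
        msg["Subject"] = f"Your VPS {domain} was shut down"
        msg.set_content(
            f"Your instance '{domain}' was shut down by our monitoring.\n"
            f"Reason: {reason}\n"
            "Reply to this ticket if you believe this is in error."
        )
        with smtplib.SMTP("mail.example-host.test") as smtp:
            smtp.send_message(msg)

    # notify_shutdown("user@example.test", "vm-au-01", "sustained 300% CPU")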

    Thanked by corbpie
This discussion has been closed.