48 hours of service disruptions [ceph storage zxhost]

ramonwap Member
edited May 2017 in Outages

I got an email from zxhost.


Hello ***


We continue to receive attacks directed towards our Proxmox environment; these attacks are only affecting our Proxmox cluster services.


Services hosted on our Virtualizor KVM + Hetzner OpenVZ nodes are not affected and continue to operate. Due to the ongoing inconvenience, we can offer the following.


1/ Set up a fresh KVM VM on our Virtualizor environment and help support your data transfer across to the new VM (Non IP Change)


or


2/ Set up a fresh OpenVZ VM on our Hetzner Virtualizor servers and help support your data transfer across to the new VM (IP Change) - Limited Stock


Both will come with a 1 month extension on your current renewal date.


Please let us know which option you require and the OS you would like installed on your new VM.


Again, we apologise for the past 48 hours of service disruptions.

Thanks,
ZXHost

Comments

  • yomero Member

    So, waiting for the attacks to cease is not an option? A little bit weird.

  • Maybe they don't know how to handle / mitigate the attacks?

  • Hi @yomero :)

    How are you?

  • Francisco Top Host, Host Rep, Veteran

    At least they got your data! Could've gone much much worse.

    Francisco

  • yomero Member
    edited May 2017

    @ErawanArifNugroho said:
    Hi @yomero :)

    How are you?

    Hello! Fine, and you? I keep lurking this place every day ;) despite not posting as much as before.

    Thanked by 1 ErawanArifNugroho
  • trewq Administrator, Patron Provider

    @KeKe said:
    Maybe they don't know how to handle / mitigate the attacks?

    Or maybe it's not worth the money for them on razor thin margins.

  • This has only been affecting our Proxmox infra; we have sent emails to all affected clients to give them the option to migrate to standard KVM.

    Thanked by 2 Falzo, ramonwap
  • Falzo Member

    @AshleyUk, wishing you the best of luck getting through this. Sad to see that working with Proxmox didn't work out :(

    Thanked by 1 Ympker
  • It seems they have the issue under control now, as per recent emails. Hope it will be back to normal soon :-)

    Thanked by 1 AshleyUk
  • @AshleyUk said:
    This has only been affecting our Proxmox infra; we have sent emails to all affected clients to give them the option to migrate to standard KVM.

    So are you discontinuing the product? No more Ceph-based replicated storage?

  • Falzo Member

    @Junkless said:

    @AshleyUk said:
    This has only been affecting our Proxmox infra; we have sent emails to all affected clients to give them the option to migrate to standard KVM.

    So are you discontinuing the product? No more Ceph-based replicated storage?

    For the moment it seems he was able to fix the issues against all odds, so it seems reasonable not to rush into migrations if they are no longer needed. If it stays this way, we can keep Proxmox+Ceph :-)

  • William Member

    Well, not owning hardware + not owning IPs + no DDoS protection on host = issues...

  • @Falzo said:

    For the moment it seems he was able to fix the issues against all odds, so it seems reasonable not to rush into migrations if they are no longer needed. If it stays this way, we can keep Proxmox+Ceph :-)

    That is exactly what I wanted to hear :P not using it at the moment, so I can wait till it gets resolved.

    Thanked by 1 Falzo
  • willie Member

    I've been able to reach mine over the past few days with no issues. Must be an intermittent thing. What dipwad is attacking that cluster anyway?

  • Falzo Member

    @willie said:
    I've been able to reach mine over the past few days with no issues. Must be an intermittent thing. What dipwad is attacking that cluster anyway?

    Seems like the nodes were impacted differently; I have some stats available, of course ;-)

    This one was less affected, and mostly during the night (CET):

    2017-05-17 06:25:14        Device rebooted: after 1h 28m 47s
    2017-05-17 06:25:14     Device status changed to Up
    2017-05-17 06:15:14     Device status changed to Down (ping)
    2017-05-17 04:45:13     Device rebooted: after 3h 42m 56s
    2017-05-17 04:45:13     Device status changed to Up
    2017-05-17 04:35:16     Device status changed to Down (ping)
    2017-05-17 00:50:19     Device rebooted: after 4h 29m 27s
    2017-05-17 00:50:18     Device status changed to Up
    2017-05-16 23:45:21     Device status changed to Down (ping)
    2017-05-16 19:00:20     Device rebooted: after 10h 45m 11s
    2017-05-16 19:00:20     Device status changed to Up
    2017-05-16 18:50:19     Device status changed to Down (ping)
    2017-05-16 08:05:19     Device rebooted: after 5h 51m 33s
    2017-05-16 08:05:18     Device status changed to Up
    2017-05-16 07:55:20     Device status changed to Down (ping)
    2017-05-16 02:00:16     Device rebooted: after 17 days, 14h 9m 56s
    2017-05-16 02:00:15     Device status changed to Up
    2017-05-16 01:20:16     Device status changed to Down (ping)
    

    Another one had more and longer downtimes:

    2017-05-17 12:10:19        Device rebooted: after 3h 9m 2s
    2017-05-17 12:10:19     Device status changed to Up
    2017-05-17 11:55:21     Device status changed to Down (ping)
    2017-05-17 08:45:20     Device rebooted: after 2h 20m 23s
    2017-05-17 08:45:20     Device status changed to Up
    2017-05-17 05:45:21     Device status changed to Down (ping)
    2017-05-17 03:25:19     Device rebooted: after 2h 28m 54s
    2017-05-17 03:25:19     Device status changed to Up
    2017-05-17 03:15:18     Device status changed to Down (ping)
    2017-05-17 00:45:21     Device rebooted: after 4h 59m 19s
    2017-05-17 00:45:20     Device status changed to Up
    2017-05-16 23:50:22     Device status changed to Down (ping)
    2017-05-16 18:35:16     Device rebooted: after 9h 31m 52s
    2017-05-16 18:35:16     Device status changed to Up
    2017-05-16 18:25:21     Device status changed to Down (ping)
    2017-05-16 08:50:15     Device rebooted: after 3h 3m 8s
    2017-05-16 08:50:15     Device status changed to Up
    2017-05-16 08:35:17     Device status changed to Down (ping)
    2017-05-16 05:30:15     Device rebooted: after 2h 29m 17s
    2017-05-16 05:30:15     Device status changed to Up
    2017-05-16 05:10:18     Device status changed to Down (ping)
    2017-05-16 02:40:16     Device rebooted: after 11 days, 20h 15m 35s
    2017-05-16 02:40:15     Device status changed to Up
    2017-05-16 02:30:17     Device status changed to Down (ping)
    

    Glad to see everything is running stable again, for now ;-)

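The Down/Up pairs in monitoring logs like the ones above can be totaled mechanically. A short sketch (the log format matches the excerpt; the script itself is illustrative, not the monitoring tool Falzo used):

```python
from datetime import datetime

# A few Down/Up pairs taken from the first log excerpt.
LOG = """\
2017-05-17 06:25:14 Device status changed to Up
2017-05-17 06:15:14 Device status changed to Down (ping)
2017-05-17 04:45:13 Device status changed to Up
2017-05-17 04:35:16 Device status changed to Down (ping)
"""

def total_downtime(log: str) -> float:
    """Sum the seconds between each Down event and the following Up event."""
    events = []
    for line in log.strip().splitlines():
        ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
        state = "Up" if "to Up" in line else "Down"
        events.append((ts, state))
    events.sort()  # the forum excerpt lists newest events first
    total, down_since = 0.0, None
    for ts, state in events:
        if state == "Down":
            down_since = ts
        elif down_since is not None:
            total += (ts - down_since).total_seconds()
            down_since = None
    return total

print(total_downtime(LOG) / 60)  # total downtime in minutes
```

For the two pairs above this adds the 06:15→06:25 and 04:35→04:45 outage windows; feeding in a full excerpt gives the per-node downtime the posts compare.
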
  • Yura Member
    edited May 2017

    I really don't want to migrate to standard KVM or (gasps) OpenVZ as mentioned in the email. The Ceph storage type is the exact reason I jumped in for 3 years; without it, the offer is an entirely different product. I really hope Ashley will overcome these difficulties and all will be fine. Godspeed.

    Thanked by 1 Junkless
  • @Yura said:
    I really don't want to migrate to standard KVM or (gasps) OpenVZ as mentioned in the email. The Ceph storage type is the exact reason I jumped in for 3 years; without it, the offer is an entirely different product. I really hope Ashley will overcome these difficulties and all will be fine. Godspeed.

    This.

  • I asked Ashley about this earlier and he said migrating was only an option. My understanding (and correct me if I am wrong, @AshleyUk) is that you can remain on the existing platform.

  • emptyPD Member

    Can someone please explain to me the advantages of Ceph storage over RAID 10?

  • seanho Member

    @emptyPD said:
    Can someone please explain to me the advantages of Ceph storage over RAID 10?

    I believe their Virtualizor nodes are hardware RAID 50. I'm curious too about the pros/cons; their Ceph array was quite decent for cold storage, but I can't see how local RAID50 would be worse.

    I'm rooting for them, though; I'm confident they'll find a solution.

  • AshleyUk Member
    edited May 2017

    @michaels said:
    I asked Ashley about this earlier and he said migrating was only an option. My understanding (and correct me if I am wrong, @AshleyUk) is that you can remain on the existing platform.

    Correct, the environment has been stable overnight and we continue to monitor.

  • emptyPD Member

    @AshleyUk said:

    Correct, the environment has been stable overnight and we continue to monitor.

    Yeah, I can confirm all is stable now. Thanks for the excellent support, @AshleyUk.

    Thanked by 1 Falzo
  • William Member
    edited May 2017

    seanho said: I'm curious too about the pros/cons; their Ceph array was quite decent for cold storage, but I can't see how local RAID50 would be worse.

    If you can configure it and have a decent number of nodes (this does not mean 4 nodes with a bunch of HDDs) with a decent interconnect (10GbE at least, better Infiniband/40GbE), Ceph is very efficient at spreading out data and redundancy.

    If you have 4 nodes and 1GbE between them... it is entirely pointless, especially once you actually lose an entire node and need to rebuild/add another one.

    If it breaks it is hard to repair, which is why I would not trust a random ISP with it; you'll possibly end up like the first editions of OnApp storage.

    Thanked by 1 seanho
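William's point about interconnect speed can be put into rough numbers: when a node is lost, Ceph has to re-replicate that node's data across the cluster network. A back-of-the-envelope sketch (all figures are illustrative assumptions, not zxhost's actual setup):

```python
def rebuild_hours(data_tb: float, link_gbit: float, usable_fraction: float = 0.7) -> float:
    """Rough time to re-replicate one lost node's data over the cluster link.

    data_tb: data held on the lost node, in terabytes (assumed figure)
    link_gbit: per-node network speed in gigabits per second
    usable_fraction: share of the link realistically available for recovery
    """
    data_bits = data_tb * 8e12                 # TB -> bits
    rate = link_gbit * 1e9 * usable_fraction   # usable bits per second
    return data_bits / rate / 3600             # seconds -> hours

# Hypothetical 20 TB on the lost node: 1GbE vs 10GbE
print(rebuild_hours(20, 1))   # roughly 63 hours on gigabit
print(rebuild_hours(20, 10))  # roughly 6.3 hours on 10GbE
```

Under these assumptions a gigabit interconnect leaves the cluster degraded for days, which is why a small node count on 1GbE undercuts the redundancy Ceph is supposed to provide.
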