Down - OVH - SBG - Lots and lots of tears.


Comments

  • @WebProject said:

    @serv_ee said:

    @WebProject said: Had experience with a fire as a customer and it took 15-20 mins to evacuate the entire store with more than 100 people inside.

    So, servers can walk now? Apples to oranges at its best.

    Haha, no, just according to OVH's official news it took them 2 hours and 7 mins to evacuate the building - slow-walking tortoises?

    Let's see how they compensate their clients for this mess.

    SLA compensation at best I hope. If someone forgot to take their own backups in case this happened...well, good. They'll know better next time.

  • Jason18Jason18 Member
    edited March 2021

    @Falzo said:

    @Jason18 said: No shit, Captain Obvious.

    how does it help you in any way to know where your service in SBG was, after all?

    what decision could you make based upon that? genuinely interested in an answer to that.

    Assuming the data was not critical, it helps you manage your time and your IT team's project assignments, knowing whether the data can be assumed lost or there's a chance of recovery.

    If you know it's 100% lost (SBG2), you could move your IT staff to slowly recreate the lost servers and start a "recovery project", which can take XXX man-hours. This is money that could be spent on something else. You could do it or not. There's no reason to spend money if there's a chance the situation will solve itself.

    If you know there's a chance it's going to come back online and the data is not so important, you just move your IT staff to deal with other projects.

    In a case where you only get the information after a "few days" that your servers are gone, you might not have spare man-hours at that later date, as your employees might be absorbed with other operations that were planned in advance. For example, IT companies lend their employees to other companies when they don't have assigned projects. This is done to maximize profits.

    There's no point in recreating non-critical infrastructure if there's a chance it's going to come back online and the company's human resources might be put to better use.

    Thanked by 1that_guy
  • DataGizmosDataGizmos Member
    edited March 2021

    @WebProject said:

    @serv_ee said:

    @WebProject said: Had experience with a fire as a customer and it took 15-20 mins to evacuate the entire store with more than 100 people inside.

    So, servers can walk now? Apples to oranges at its best.

    Haha, no, just according to OVH's official news it took them 2 hours and 7 mins to evacuate the building - slow-walking tortoises?

    Let's see how they compensate their clients for this mess.

    Read the contract/TOS - I doubt they are liable for anything. The people with backups are moving on and restoring elsewhere to get operational. Those without backups - why would you compensate them or even want them as customers? Rewarding ignorance would mean the cycle repeats itself.

    Thanked by 1that_guy
  • @deank said: Not talking about OVH. They are a bottom barrel host.

    Not sure if..

  • RazzaRazza Member
    edited March 2021

    @Falzo said:

    I doubt they already know the real cause, and even if they did, that they would rush into such maintenance decisions, esp. in totally different locations, right now...

    The replacement of power cables is not a new thing - if you look at travaux you'll see a number of them pre-fire, e.g. http://travaux.ovh.net/?do=details&id=49044&;

    Thanked by 1boernd
  • @jason18 - assume the data is unimportant, as you say. Then manage time by assuming a full loss; no need to recreate, just move on. If it's important and there are no backups - talk to your IT director about other emerging employment opportunities. If it's important and you have backups - rebuild it elsewhere. Not rocket science. And if the corporate schedule can't be modified due to other projects - then IT is too impacted and not built for emerging changes, and an all-hands-on-deck corporate disaster meeting is necessary to sort things out.

    If the data is production, mission-critical, and you don't even know where it's housed and don't have a disaster plan - you can go on and on about what OVH needs to tell you right now - but the answer is to pull out your disaster plan and execute. You do have a plan and a backup for corporate-critical data, I presume?

    Thanked by 2bulbasaur Falzo
  • deankdeank Member, Troll

    Pretty sure OVH will offer some form of compensation, but it will be nothing fancy.

    The fire will likely be classified as "an act of God" unless it's proven otherwise. That will shield them from everything, pretty much.

    Thanked by 2lentro bulbasaur
  • xaocxaoc Member

    @DataGizmos you're not wrong but you're being a c*nt. Keep up the good work. ;)

  • Jason18Jason18 Member
    edited March 2021

    @DataGizmos said:
    @jason18 - assume the data is unimportant, as you say. Then manage time by assuming a full loss; no need to recreate, just move on.

    If the data is production, mission-critical

    Jeez, it's not all black and white.

    Unimportant does not mean it can be lost and forgotten.
    You might have servers that accelerate certain things; without them your service is still fine, just a bit slower.

    I'm done discussing this. I don't need advice from internet people, never asked for any, just expressed my opinion.

    You can keep defending OVH for not letting people find out their server's sublocation post-accident.
    If you find no value in knowing whether your infra is 100% certainly destroyed or whether it might come back online after a period of time, then that is about as pseudo-intelligent as it gets.

  • @deank said:
    Pretty sure OVH will offer some form of compensation, but it will be nothing fancy.

    The fire will likely be classified as "an act of God" unless it's proven otherwise. That will shield them from everything, pretty much.

    Again, would you like to comment on your "bottom barrel host" stuff? Maybe I'm missing the point in English, but looking up the meaning - "worst, lowest quality"... ehm. Sure, their support sucks, but come the fuck on, man, get a grip.

  • deankdeank Member, Troll

    They've cut the support part off completely in order to strengthen other aspects.

    That is what being a bottom barrel host is like - having to cut corners severely to raise standards in other areas.

    P.S. I have an OVH VPS. (Not affected by the fire tho.)

  • Their support works fine, by telephone. Surely it isn't the best thing for everyone but support is there.

    Thanked by 1kalimov622
  • deankdeank Member, Troll

    Of course, it is there. It has to be, or they can be sued.

    I've used it, or tried to, for the past 6 years or so. Might as well just forget it. Having to phone in this era - well, they are French.

  • FalzoFalzo Member

    @Jason18 said: If you know it's 100% lost (SBG2), you could move your IT staff to slowly recreate the lost servers and start a "recovery project", which can take XXX man-hours. This is money that could be spent on something else. You could do it or not. There's no reason to spend money if there's a chance the situation will solve itself.

    If you know there's a chance it's going to come back online and the data is not so important, you just move your IT staff to deal with other projects.

    I agree that the decision about manpower to rebuild something involves costs and time, however this again comes down to the importance of your data. So either it's important and you need to recreate right now anyway, because you can't wait and hope, or it's not and you could eventually wait for it to 'solve itself' - but then it also wouldn't matter if it takes a day or two more to make that decision at all.

    @Jason18 said: In a case where you only get the information after a "few days" that your servers are gone, you might not have spare man-hours at that later date, as your employees might be absorbed with other operations that were planned in advance.

    in case of sudden/random catastrophic events something like that is not really plannable at all. Look at it the other way around: if your planning were that narrow, you couldn't have the needed man-power right now anyway, because you could not have accounted for all this happening 🤷‍♂️

    TL;DR: I still don't really see how being able to pinpoint the location of your missing VPS would help you right now - but that's just me, and we probably plan business and decisions in a different way.

  • TheLinuxBugTheLinuxBug Member
    edited March 2021

    @WebProject said: I personally doubt that OVH has any fire prevention systems or staff at DCs overnight.

    By law this is very likely not the case. To have the amount of power equipment and batteries they retain at each site for power and UPS back-ups, there is no way they didn't have a proper fire suppression system in place that was inspected by the government there -- the real questions are:
    A. Was there an explosion?
    B. Did the systems work as they were supposed to?

    Most likely, based on what that building looks like, you had some power-related explosion, whether from the cooling systems or the battery systems, who knows... but something that produced enough heat in such a short period of time as to overwhelm their safety systems and literally engulf the whole building very quickly. Not even the best fire suppression system in the world can deal with something catastrophic like an explosion -- they are only meant to deal with putting out a fire, not an explosion.

    I am also pretty sure they will have had to carry insurance to cover the possibility of such an event as well. So I am sure they will be able to rebuild; however, most likely the customers won't be so lucky (as far as getting their data back), especially those without backups. Still, I don't have any doubts that, given enough time, all servers will be replaced and customers compensated.

    my 2 cents.

    Cheers!

  • deankdeank Member, Troll

    It seems they used water to suppress the fire. Data could survive, but I wouldn't count on OVH staff to manually salvage it in time, if they try at all. The servers themselves are fried for sure.

    So, yeah, if your server is in SBG2, say sayonara to your data.

    Thanked by 1yoursunny
  • SBG2 - Rack: 73B48.
    Never lucky :/

    I wonder whether they will just refund the server or have a new one allocated somewhere else. I understand data is lost, but I'd like to rebuild :)

  • WebProjectWebProject Host Rep, Veteran

    @deank said:
    It seems they used water to suppress the fire. Data could survive, but I wouldn't count on OVH staff to manually salvage it in time, if they try at all. The servers themselves are fried for sure.

    So, yeah, if your server is in SBG2, say sayonara to your data.

    I don't think they will salvage the data, as it will take a long time to identify each server's owner and do hard drive tests.

    A lot of people will learn the lesson of backing up data the hard way: not just a backup to a second server in the same DC, but off-site, off-network, and cloud-based, so it's replicated more than once.
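    As a rough illustration (the host, paths and schedule below are made up, nothing OVH-specific), even a tiny cron-driven script that pushes an archive to a box at a different provider every night would have been enough to survive a whole-DC loss. A minimal sketch in Python:

    ```python
    #!/usr/bin/env python3
    """Minimal nightly off-site backup sketch (illustrative only)."""
    import datetime
    import subprocess
    import tarfile
    from pathlib import Path

    SOURCE = Path("/var/www")                   # data worth keeping (example path)
    REMOTE = "backups@backup-host.example.com"  # hypothetical box at another provider
    ARCHIVE = Path(f"/tmp/site-{datetime.date.today().isoformat()}.tar.gz")

    # 1. Pack the source directory into a compressed archive locally.
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)

    # 2. Push it off-site over SSH; check=True raises if scp fails,
    #    so a cron run will surface the error instead of silently "succeeding".
    subprocess.run(["scp", str(ARCHIVE), f"{REMOTE}:archives/"], check=True)

    # 3. Drop the local copy only once the off-site copy is confirmed.
    ARCHIVE.unlink()
    ```

    Restoring is just copying the archive back and unpacking it - the important part is that the copy lives outside the building that burns down.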

  • We aren't seeing any complainers come over here though.

  • @deank said: I don't know why the concept of having a backup is so hard to conceive.

    This.

    I mean, this kind of disaster doesn't happen every day, but sometimes it's simply impossible to predict such things, and no one is really safe from disasters, which is why backups are so important. Some people learn it the hard way, unfortunately.

    For some reason I always avoided their SBG datacenter; hopefully they can recover quickly from this mess.

    Thanked by 1lentro
  • WebProjectWebProject Host Rep, Veteran

    @stevewatson301 said:
    We aren't seeing any complainers come over here though.

    It seems to me all of them are on Twitter

  • deankdeank Member, Troll

    Twitter, AKA the cesspool of humanity.

    Thanked by 2nitro93 that_guy
  • xaocxaoc Member

    @deank said:
    Twitter, AKA the cesspool of humanity.

    I call it "twatter". ;)

    Thanked by 1WebProject
  • WebProjectWebProject Host Rep, Veteran
    edited March 2021

    @serv_ee said:

    @WebProject said:

    @serv_ee said:

    @WebProject said: Had experience with a fire as a customer and it took 15-20 mins to evacuate the entire store with more than 100 people inside.

    So, servers can walk now? Apples to oranges at its best.

    Haha, no, just according to OVH's official news it took them 2 hours and 7 mins to evacuate the building - slow-walking tortoises?

    Let's see how they compensate their clients for this mess.

    SLA compensation at best I hope. If someone forgot to take their own backups in case this happened...well, good. They'll know better next time.

    Poor OVH cloud customers who are affected, as there is no compensation or SLA as per OVH's terms and conditions; copy of a screenshot from a Twitter page:
    https://pbs.twimg.com/media/EwGvVqkWQAIPyis?format=jpg&name=900x900

    Thanked by 1Falzo
  • deankdeank Member, Troll

    Well, yeah, technically no customers will be entitled to any form of compensation.

    But I don't think people will care about the technicality and will continue to cry. OVH will offer, like, a month or two of free service just to shut them up.

    That's just my guess.

  • @deank said:
    It's the end users I am talking about. If data is supposedly important to you, keep at least a copy somewhere else.

    People in general are lazy, cheap, and bad at assessing (or even being aware of) risk.

    Thanked by 1that_guy
  • @WebProject said:

    @serv_ee said:

    @WebProject said:

    @serv_ee said:

    @WebProject said: Had experience with a fire as a customer and it took 15-20 mins to evacuate the entire store with more than 100 people inside.

    So, servers can walk now? Apples to oranges at its best.

    Haha, no, just according to OVH's official news it took them 2 hours and 7 mins to evacuate the building - slow-walking tortoises?

    Let's see how they compensate their clients for this mess.

    SLA compensation at best I hope. If someone forgot to take their own backups in case this happened...well, good. They'll know better next time.

    Poor OVH cloud customers who are affected, as there is no compensation or SLA as per OVH's terms and conditions; copy of a screenshot from a Twitter page:
    https://pbs.twimg.com/media/EwGvVqkWQAIPyis?format=jpg&name=900x900

    Let's not all make out that OVH are the bad guys here with these kinds of posts - the vast majority of providers' SLAs protect them from liability due to this kind of incident, including your own.

  • lentrolentro Member, Host Rep

    OVH must be shitting themselves right now.

    a) bad press
    b) the loss of all those servers. If that was 50,000 servers destroyed with each worth $200, that's a $10m write-off
    c) cost to rebuild the DC, buy new servers (like another $25m)

    (they are trying to IPO. Guess a good fire suppression system would have cost a fraction of that!)

  • deankdeank Member, Troll

    Well, how did the fire start anyway?

    Did it begin within SBG2 or elsewhere?

  • pikepike Veteran
    edited March 2021

    @serv_ee said:
    @Hetzner_OL In light of this, can you maybe shed some light on what measures you have in place for such a thing?
