New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
SLA compensation at best I hope. If someone forgot to take their own backups in case this happened...well, good. They'll know better next time.
Assuming the data was not critical, it helps you manage your time and IT team project assignments, knowing whether the data can be assumed lost or there's a chance of recovery.
If you know it's 100% lost (SBG2), you could move your IT staff to slowly recreate the lost servers and start a "recovery project", which can take XXX man-hours. That's money that could be spent on something else. You could do it or not; there's no reason to spend money if there's a chance the situation will resolve itself.
If you know there's a chance it will come back online and the data is not that important, you just move your IT staff to other projects.
If you only get word after "a few days" that your servers are gone, you might not have spare man-hours by then, as your employees might be absorbed by other operations that were planned in advance. For example, IT companies lend their employees to other companies when they don't have assigned projects. This is done to maximize profits.
There's no point in recreating non-critical infrastructure if there's a chance it will come back online and the company's human resources could be put to better use.
Read the contract/TOS - I doubt they are liable for anything. The people with backups are moving on and restoring elsewhere to get operational. Those without backups - why would you compensate them, or even want them as customers? Rewarding ignorance would mean the cycle repeats itself.
Not sure if..
Replacing power cables is not a new thing; if you look at travaux, you'll see a number of them pre-fire, e.g. http://travaux.ovh.net/?do=details&id=49044&
@jason18 - assume the data is unimportant, as you say. Then manage your time by assuming full loss: no need to recreate, just move on. If it's important and there are no backups - talk to your IT director about other emerging employment opportunities. If it's important and you have backups - rebuild elsewhere. Not rocket science. And if the corporate schedule can't be modified due to other projects, then IT is too impacted and not built for emergent changes, and an all-hands-on-deck corporate disaster meeting is necessary to sort things out.
If the data is production, mission-critical, and you don't even know where it's housed and don't have a disaster plan - you can go on and on about what OVH needs to tell you right now, but the answer is: pull out your disaster plan and execute. You do have a plan and a backup for corporate-critical data, I presume?
Pretty sure OVH will offer some form of compensation, but it will be nothing fancy.
The fire will likely be classified as "an act of God" unless it's proven otherwise. That will shield them from everything, pretty much.
@DataGizmos you're not wrong but you're being a c*nt. Keep up the good work.
Jeez, there's not only black or white.
Unimportant does not mean it can be lost and forgotten.
You might have servers that accelerate certain things but without them your service is still fine but a bit slower.
I'm done discussing this thing. I don't need advice from internet people; I never asked for any, just expressed my opinion.
You can keep defending OVH for not letting people find out their servers' sub-location post-accident.
If you find no value in knowing whether your infra is 100% certainly destroyed or might come back online after a period of time, then that is as pseudointelligent as it gets.
Again, would you like to comment on your "bottom barrel host" stuff? Maybe I'm missing the point in English but looking up the meaning "worst - lowest quality"...ehm. Sure, their support sucks but come the fuck on man, get a grip.
They've cut the support part off completely in order to strengthen other aspects.
That is what being a bottom-barrel host is like: having to cut corners severely in one area to raise standards in others.
P.S. I have an OVH VPS. (Not affected by the fire tho.)
Their support works fine, by telephone. Surely it isn't the best thing for everyone but support is there.
Of course, it is there. It has to be, or they can be sued.
I've used it or tried to, for the past 6 years or so. Might as well just forget it. Having to phone in this era - Well, they are French.
I agree that the decision about manpower to rebuild something involves costs and time, but this again comes down to the importance of your data. So either it's important and you need to recreate it right now anyway, because you can't wait and hope,
or it's not, and you could eventually wait for it to 'solve itself' - but then it also wouldn't matter if it takes a day or two more to make that decision at all.
In the case of sudden/random catastrophic events, something like that isn't really plannable at all. Look at it the other way around: if your planning were that tight, you couldn't have the needed manpower right now anyway, because you couldn't have accounted for all of this happening 🤷♂️
TL;DR: I still don't really see how being able to pinpoint the location of your missing VPS would help you right now - but that's just me, and we're probably doing/planning business and decisions in a different way.
By law this is very likely not the case. To have the amount of power equipment and batteries they retain at each site for power and UPS back-ups, there is no way they didn't have a proper fire suppression system in place that was inspected by the government there -- the real questions are:
A. Was there an explosion?
B. Did the systems work as they were supposed to?
Most likely based on what that building looks like, you had some power related explosion, whether it be from the cooling systems or battery systems, who knows... but something that produced enough heat in such a short period of time as to have overwhelmed their safety systems and literally engulfed the whole building in a very short amount of time. Not even the best fire suppression system in the world can deal with something catastrophic like an explosion -- they are only meant to deal with putting out a fire, not an explosion.
I am also pretty sure they will have had to have insurance to cover the possibility of such an event as well. So I am sure they will be able to rebuild, however, most likely the customers won't be so lucky (as far as getting their data back). Especially those without backups. However, I don't have any doubts, given enough time, that all servers will be replaced and customers compensated.
my 2 cents.
Cheers!
It seems they used water to suppress the fire. The data could survive, but I wouldn't count on OVH staff to manually salvage it in time, if they try at all. The servers themselves are fried for sure.
So, yeah, if your server is in SBG2, say sayonara to your data.
SBG2 - Rack: 73B48.
Never lucky
I wonder whether they will just refund the server or allocate a new one somewhere else. I understand the data is lost, but I'd like to rebuild.
I don't think they will salvage the data, as it will take a long time to identify each server's owner and run hard drive tests.
A lot of people will learn the lesson of backing up data the hard way: not just backing up to a second server in the same DC, but off-site, off-network, and cloud-based, so it's replicated more than once.
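To make the off-site point concrete, here's a minimal sketch of one backup step: copy a file to a second location and verify the copy by checksum before trusting it. The paths and the "offsite" directory are hypothetical placeholders for illustration; in a real setup the destination would be another provider or region, not the same disk, let alone the same DC.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(src: Path, dest_dir: Path) -> Path:
    """Copy src into dest_dir and confirm the checksums match."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # preserves metadata along with contents
    if sha256sum(src) != sha256sum(dest):
        raise IOError(f"backup of {src} is corrupt")
    return dest

# Demo with temp files standing in for "primary" and "off-site" storage.
tmp = Path(tempfile.mkdtemp())
primary = tmp / "db_dump.sql"
primary.write_text("-- pretend this is a database dump\n")
copy = backup_and_verify(primary, tmp / "offsite")
print(copy.name)  # prints "db_dump.sql"
```

The verification step matters: a copy that silently failed or got truncated is exactly the kind of "backup" people discover is useless on a day like this.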
We aren't seeing any complainers come over here though.
This.
I mean this kind of disaster doesn't happen every day, but sometimes it's simply impossible to predict such things and no one is really safe from disasters which is why backups are so important. Some people learn it the hard way unfortunately.
For some reason I always avoided their SBG datacenter; hopefully they can recover fast from this mess.
It seems to me all of them are on Twitter
Twitter, AKA the cesspool of humanity.
I call it "twatter".
Poor OVH cloud customers who are affected: no compensation or SLA, per OVH's terms and conditions. Screenshot from their Twitter page:
https://pbs.twimg.com/media/EwGvVqkWQAIPyis?format=jpg&name=900x900
Well, yeah, technically no customers will be entitled to any form of compensation.
But I don't think people will care for technicality and will continue to cry. OVH will offer, like, a month or two of free service just to shut them up.
That's just my guess.
People in general are lazy, cheap, and bad at assessing (or even being aware of) risk.
Let's not all make out that OVH are the bad guys here with these kinds of posts - the vast majority of providers' SLAs protect them from liability for this kind of incident, including your own provider's.
OVH must be shitting themselves right now.
a) bad press
b) the loss of all those servers. If that was 50,000 servers destroyed with each worth $200, that's a $10m write off
c) cost to rebuild the DC, buy new servers (like another $25m)
(They are trying to IPO. Guess a good fire suppression system would have cost a fraction of that!)
Well, how did the fire start anyway?
Did it begin within SBG2 or elsewhere?