Comments
Thanks for using ServerStatus.
Never heard about such an issue. We run hundreds of HPE blades for dedi and cloud services, and during the last 3 months we upgraded all our servers and just finished upgrades for a whole pallet of new servers. We have quite a range of Gen7-Gen10 servers. If there is such an open issue, any information is appreciated; however, I cannot find anything in the HPE customer advisories.
https://www.google.com/amp/s/www.theregister.co.uk/AMP/2017/10/05/hpe_server_firmware_update_bricked_network_adapters/
https://community.hpe.com/t5/ProLiant-Servers-ML-DL-SL/How-to-update-DL360p-G8-BIOS-via-iLO/td-p/6964357
They said the issue is related to power controller firmware. We just updated a full pallet of Gen8 blades to ver 3.3, plus some Gen7 blades. They all had a full shutdown and started diagnostics a day later. No issue at all...
However, there was an issue when HPE offered the wrong firmware for Gen8 LOM adapters. After the update the adapter was not seen by the system, but reflashing it helped. That's what your link relates to.
Do a small number of upgrades, then check all is OK. Don't do 1500 and then find 100+ don't work. They should also have sent out a notice that upgrades were happening, which would at the very least require a reboot.
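The canary approach suggested above can be sketched in a few lines: flash a small batch, health-check each host, and halt the whole rollout on the first failure instead of flashing all 1500 at once. The `flash` and `healthy` callables here are hypothetical placeholders for a vendor flash wrapper and a post-reboot check, not anything Delimiter or HPE actually ships.

```python
from itertools import islice

def staged_upgrade(hosts, flash, healthy, batch_size=10):
    """Flash hosts in small batches; abort the rollout on the first
    host that fails its post-flash health check."""
    done = []
    it = iter(hosts)
    while batch := list(islice(it, batch_size)):
        for h in batch:
            flash(h)              # hypothetical: run the vendor flash tool
            if not healthy(h):    # hypothetical: ping + service check after reboot
                raise RuntimeError(
                    f"{h} failed post-flash check; halting with {len(done)} done"
                )
            done.append(h)
    return done
```

With this shape, a bad firmware image costs you at most one batch of servers rather than the whole fleet.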
Delimiter has some of the worst support I've ever experienced from any company. Their lack of support and professionalism is completely maddening.
Still down. And now down for 4 days
I guess that might be the true reason:
https://www.theregister.co.uk/2017/10/05/hpe_server_firmware_update_bricked_network_adapters/
I doubt it, that issue is from October 2017.
Well hopefully this is the end
I opened up a ticket (per support) to merge/link my account back in January. They closed the ticket and never did link it. So, my "new" portal account has no services linked to it.
I opened up a ticket anyway, as I am one of the servers down for the count. Not holding my breath, though.
Definitely cancelling my server once the renewal period runs out.
Still down for me. Looks like I'll probably just be going back to SoYouStart when those new prices start.
they are down once again after doing another check
Rightfully so, they're getting raked over the coals on WHT. If you get your server back up and still stick around, you have issues.
The servers that are down are more like G6s and maybe G5s - they are not G7-G10s.
all mine are G6
Most of their clearance servers are 54xx and 55xx so HPE may not even be publicly releasing patches on units that age.
G6 servers are ancient history... unsupported for a very, very long time. HPE only released a Meltdown/Spectre-fixed BIOS and TLS-fixed iLO firmware for them. The power management firmware has been the very same version since 2009, so I'm doubting a little bit what they are saying. And I don't think HPE would ever help anyone with gear this old...
And actually the price for such a server is 10€ apiece... eBay is full of offers. If someone is interested, we have a whole pallet of them available.
I don't know how a company can suck this much. They've broken records as the most sucking company ever.
So... the trace to my server has changed. Like reported on WHT, and Hostballs. It is possible Delimiter did a quick move to Nashville on our servers.
corrected that for you.
If Delimiter has a hardware problem, it might be quicker/easier to carry disks to another DC than to freight pallets of servers to Atlanta. Some clarity on WTF is going on would be nice, though. Yet again, silence today.
Problem is, Nashville isn't a Delimiter datacenter at the moment. It doesn't exist on their website.
they released an update
https://cc.delimiter.com/news/view/2/update-----atlanta---blades-in-a01-05--b01-05--c01-05--j11-j15/
For those who can't access the message (or just curious as to what shenanigans they're stating):
It looks like they are not even a little sorry for a week of downtime...
The last message was 3 days prior. 30 man-hours in 3 days? Do they just have one employee working on this?
Their Obj.Space S3-compatible object storage service is also down. I guess it was also run on the same old equipment that had a bad firmware update?
Sucks for me. I kept my daily backups on there and just replicated weekly backups offsite. Lesson learned... Going to use B2 from now on for backup storage.
And blade chassis A01-A05, B01-B05, C01-C05, J11-J15 are a total of 20 chassis. With hardware this old, I'm assuming the blades are in HP c7000 chassis, which means that 320 blades are down.
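The arithmetic behind that figure is easy to sanity-check, assuming (as the post does) fully populated c7000 enclosures with half-height blades at 16 per enclosure:

```python
# Rough capacity estimate for the affected Atlanta blade enclosures.
# Assumption (not stated by Delimiter): each enclosure is an HP c7000
# fully populated with half-height blades, 16 per enclosure.
ranges = ["A01-A05", "B01-B05", "C01-C05", "J11-J15"]  # 4 ranges of 5 chassis
num_chassis = len(ranges) * 5
blades_per_c7000 = 16  # half-height; full-height blades would halve this
print(num_chassis * blades_per_c7000)  # -> 320
```

If any enclosures hold full-height blades (8 per c7000) or are only partly populated, the real count would be lower.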
I'm still confused about whether the individual blades had the firmware problems and are toast, or whether it's the chassis that had a firmware problem and is down. If they just need to replace the chassis, that's relatively simple, and you can get replacement c7000 chassis from any number of business surplus warehouses. They're pretty old commodities and generally inexpensive.
If it's the blades themselves, they're going to have to pull the drives and memory from each blade and pop them into replacement blades in a new chassis. That's a ton of work, and if they just have one guy in the DC putting in 10-hour days, it'll take forever.
If they have to replace the individual blades, they should just get rid of the BL260c (Gen 5) and BL280c (Gen 6) blades and move to BL460c (Gen 6 or above) blades. The market has plenty of inexpensive used BL460s, the power savings would be substantial, they have a newer iLO (the iLO2 firmware upgrade is what caused this mess in the first place), and the upgrade might help go towards making amends for the affected customers.
Either 3 x 10 hrs or 2 x 15 hrs, not 1 x 3 days, I assume. Still indicates days more downtime, though.
The outage isn't just affecting blades. All my servers are SL260.
Oh, that's very interesting and goes against their narrative. I'm assuming you mean one of the SL160Z systems.
They've said that this is the result of trying to upgrade the ILO firmware on the older blade servers and something went horribly wrong... but a SL160Z is usually paired with another in a z6000 chassis... and the SL160Z doesn't even have an ILO, they have a "Lights Out 100" option, which has more limited functionality. The LO100 is a traditional IPMI BMC, and does not share hardware or firmware with iLO.
Picking the wrong ILO firmware wouldn't even matter to a SL160Z because the upgrader wouldn't even find an ILO to update.
Anyone else notice that they did that huge push last month to lease every last discounted and old server saying they're going to a new business model and won't be doing budget servers anymore?
Then all the budget servers go offline and routing now goes to Nashville? Is it possible they were padding out their "Low End" customer portfolio so they could sell it off and move them somewhere else? Would anyone even think they could get away with this?
That's likely just my paranoia flaring up. They wouldn't have stopped their enterprise-focused Obj.Space service with this scenario, and that's been down too.