Delimiter Server down [Atlanta]

Comments

  • @Bruce said:

    How long has it been?

    still down!

    http://prntscr.com/jucnc8

    Thanks for using ServerStatus. <3

  • wavecomas Member, Host Rep

    Never heard of such an issue. We're running hundreds of HPE blades for dedi and cloud services, and during the last 3 months we upgraded all our servers and just finished upgrading a whole pallet of new servers. And we have quite a range of Gen7-Gen10 servers. If there is such an open issue, any information is appreciated. However, I can't find anything in the HPE customer advisories.

  • wavecomas Member, Host Rep
    edited June 2018

    They said the issue is related to the power controller firmware. We just updated a full pallet of Gen8 blades to ver 3.3, plus some Gen7 blades. They all had a full shutdown and started diagnostics a day later. No issue at all...

    However, there was an issue when HPE offered the wrong firmware for Gen8 LOM adapters. After the update the adapter was not seen by the system, but reflashing it helped. That's what your link relates to.

  • Bruce Member

    Do a small number of upgrades, then check all is OK; don't do 1500 and then find 100+ don't work. They should also have sent out a notice that upgrades were happening, which at the very least would require a reboot.
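
    (In script form, that staged approach might look something like the sketch below -- hosts.txt, flash_firmware.sh, and the batch size are all made-up names for illustration, not anything Delimiter actually runs:)

    #!/bin/sh
    # Flash firmware in batches of 10 and halt at the first batch that
    # leaves a host unreachable, rather than flashing all 1500 at once.
    split -l 10 hosts.txt batch_
    for batch in batch_*; do
        while read -r host; do
            ./flash_firmware.sh "$host"    # hypothetical flash wrapper
        done < "$batch"
        sleep 300                          # give the batch time to reboot
        while read -r host; do
            ping -c 3 -W 2 "$host" >/dev/null || {
                echo "$host did not come back - halting rollout"
                exit 1
            }
        done < "$batch"
    done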

  • Delimiter has some of the worst support I've ever experienced from any company. Their lack of support and professionalism is completely maddening.

    Thanked by corbpie
  • Still down. And now down for 4 days

    I guess that might be the true reason:
    https://www.theregister.co.uk/2017/10/05/hpe_server_firmware_update_bricked_network_adapters/

  • xavcon Member

    @hostingtalking said:
    Still down. And now down for 4 days

    I guess that might be the true reason:
    https://www.theregister.co.uk/2017/10/05/hpe_server_firmware_update_bricked_network_adapters/

    I doubt it, that issue is from October 2017.

  • Well hopefully this is the end

  • I opened up a ticket (per support) to merge/link my account back in January. They closed the ticket and never did link it. So, my "new" portal account has no services linked to it.

    I opened up a ticket anyway, as mine is one of the servers down for the count. Not holding my breath, though.

    Definitely cancelling my server once the renewal period runs out.

  • zneth Member

    Still down for me. Looks like I'll probably just be going back to SoYouStart when those new prices start.

  • They are down once again after another check.

  • Rightfully so, they're getting raked over the coals on WHT. If you get your server back up and still stick around, you have issues.

  • the servers that are down are more like g6s and maybe g5s - they are not g7-g10s

  • Bruce Member

    @seaeagle said:
    the servers that are down are more like g6s and maybe g5s - they are not g7-g10s

    all mine are G6

  • Most of their clearance servers are 54xx and 55xx, so HPE may not even be publicly releasing patches for units of that age.

  • wavecomas Member, Host Rep
    edited June 2018

    G6 servers are very old history... unsupported for a very, very long time. HPE only released Meltdown/Spectre-fixed BIOS and iLO TLS-fixed firmwares. The power management firmware has been the very same version since 2009, so I'm doubting a little bit what they are saying. And I don't think HPE would ever help anyone with gear that old..

    And actually the price for such a server is 10€ each... eBay is full of offers. If someone is interested, we have a whole pallet of them available :D

    Thanked by rdes
  • xavcon Member

    I don't know how a company can suck this much. They've broken the record for the most sucking company ever.

  • So... the trace to my server has changed, as reported on WHT and Hostballs. It is possible Delimiter did a quick move to Nashville on our servers.

    mtr  -4 --report
    Start: Thu Jun 14 09:46:14 2018
    HOST:         Loss%   Snt   Last   Avg  Best  Wrst StDev
      1.|-- ???                         0.0%    10    0.9   1.1   0.9   1.4   0.0
      2.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
      3.|-- 144.168.41.1               0.0%    10    0.8   6.2   0.7  55.7  17.4
      4.|-- 144.168.32.1               0.0%    10    0.6   2.4   0.5  18.6   5.7
      5.|-- ae32.cr11-dal3.ip4.gtt.ne  0.0%    10    0.6   2.9   0.6   8.9   3.4
      6.|-- et-0-0-67.cr10-dal3.ip4.g  0.0%    10    0.5   1.7   0.5   9.7   2.8
      7.|-- dls-b21-link.telia.net     0.0%    10    0.9   4.9   0.8  24.8   7.4
      8.|-- atl-b22-link.telia.net     0.0%    10   18.5  18.5  18.5  18.6   0.0
      9.|-- nash-b1-link.telia.net.13  0.0%    10   23.6  23.7  23.6  23.8   0.0
     10.|-- yomura-ic-332699-nash-b1.  0.0%    10   29.0  29.0  28.9  29.0   0.0
     11.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
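
    (If you want to keep an eye on whether the path still transits Nashville, a small sketch -- 203.0.113.10 is a placeholder for your server's IP, which the report above omits:)

    #!/bin/sh
    # Re-run the trace every 10 minutes and log whether a Nashville
    # (nash-*.telia.net) hop appears in the path.
    TARGET="203.0.113.10"    # placeholder - substitute your server's IP
    while true; do
        if mtr -4 --report --report-cycles 5 "$TARGET" | grep -qi 'nash-'; then
            echo "$(date -u) route still goes via Nashville"
        else
            echo "$(date -u) no Nashville hop seen"
        fi
        sleep 600
    done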
    
    Thanked by Falzo
  • Falzo Member

    AlyssaD said: It is possible Delimiter did a qudick move to Nashville on our servers.

    corrected that for you.

    Thanked by PieHasBeenEaten
  • Bruce Member

    If Delimiter have a problem with hardware, it might be quicker/easier to carry disks to another DC than to freight pallets of servers to Atlanta. Some clarity as to WTF is going on would be nice though. Yet again, silence today.

  • @Bruce said:
    If Delimiter have a problem with hardware, it might be quicker/easier to carry disks to another DC than to freight pallets of servers to Atlanta. Some clarity as to WTF is going on would be nice though. Yet again, silence today.

    Problem is, Nashville isn't a Delimiter datacenter at the moment. It doesn't exist on their website.

  • For those who can't access the message (or just curious as to what shenanigans they're stating):

    We have stopped our manual efforts to recover existing motherboards; whilst it has helped a few customers that have been affected, the recovery action is not yielding any further tangible results. Today we have recovered 1 system in 30 man hours; the systems that were recoverable we believe are now recovered. The others will need hardware replacement.

    As previously announced, we have shipped equipment from New York and Denver and will be moving customers' disks to the new hardware. In some cases you will need to use the iLO to change the MAC address setting in your OS, but support will be on hand to take care of this for you if you provide them your administrator/root password.

    When your system is back online, we will contact you to advise you.

    Please do not flood our helpdesk with requests for updates. Some customers have resorted to opening hundreds of tickets over the course of each hour, preventing us from providing timely support to customers not affected by this issue.
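
    (For context on that MAC address note: on 2018-era Linux installs the old board's MAC is typically pinned in a udev rule and in the interface config, so a board swap leaves the NIC unconfigured. A sketch of the fix on a CentOS-style system -- both MAC addresses here are hypothetical:)

    # The old motherboard's NIC is pinned to eth0 in the persistent-net
    # rules; drop the stale line so the new NIC can claim eth0 again.
    sed -i '/00:1e:0b:aa:bb:cc/d' /etc/udev/rules.d/70-persistent-net.rules
    # Update the MAC the interface config is bound to, then reboot.
    sed -i 's/^HWADDR=.*/HWADDR=00:1e:0b:dd:ee:ff/' \
        /etc/sysconfig/network-scripts/ifcfg-eth0
    reboot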

  • rdes Member

    It looks like they are not even a little sorry for one week of downtime... ;)

  • DeftNerd Member
    edited June 2018

    We have stopped our manual efforts to recover existing motherboards; whilst it has helped a few customers that have been affected, the recovery action is not yielding any further tangible results. Today we have recovered 1 system in 30 man hours; the systems that were recoverable we believe are now recovered. The others will need hardware replacement.

    The last message was 3 days prior. 30 man hours in 3 days? Do they just have one employee working on this?

    Their Obj.Space S3-compatible object storage service is also down. I guess it was also run on the same old equipment that had a bad firmware update?

    Sucks for me. I kept my daily backups on there and just replicated weekly backups offsite. Lesson learned... Going to use B2 from now on for backup storage.
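
    (If anyone wants that setup, a minimal sketch using rclone with a Backblaze B2 remote -- "b2remote" and the bucket/path names are made up:)

    # One-time: interactively add a B2 remote called "b2remote".
    rclone config
    # Nightly cron entry (crontab -e): mirror the local backups to B2.
    0 3 * * * rclone sync /backups b2remote:my-backup-bucket --transfers 8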

    And blade chassis A01-A05, B01-B05, C01-C05, J11-J15 is a total of 20 chassis. With old hardware, I'm assuming the blades are in HP C7000 chassis, which hold 16 half-height blades each, so 20 chassis means 320 blades are down.

    I'm still confused as to whether the individual blades had the firmware problems and are toast, or whether it's the chassis that had a firmware problem and is down. If they just need to replace the chassis, that's relatively simple; you can get replacement C7000 chassis from any number of business surplus warehouses. They're pretty old commodities and generally inexpensive.

    If it's the blades themselves, they're going to have to pull the drives and memory from each blade and pop them into replacement blades in a new chassis. That's a ton of work, and if they just have one guy in the DC putting in 10-hour days, it'll take forever.

    If they have to replace the individual blades, they should just get rid of the BL260C (Gen 5) and BL280C (Gen 6) blades and move to BL460C (Gen 6 or above) blades. The market has plenty of used, inexpensive BL460s; the power savings would be substantial; they have a newer iLO (the iLO2 firmware upgrade is what caused this mess in the first place); and the upgrade might help go toward making amends with the affected customers.

  • Bruce Member

    Today we have recovered 1 system in 30 man hours

    Either 3 x 10 hrs or 2 x 15 hrs, not 1 x 3 days, I assume. Still indicates days more downtime though :(

    The outage isn't just affecting blades. All my servers are SL260.

  • @Bruce said:

    Today we have recovered 1 system in 30 man hours

    Either 3 x 10 hrs or 2 x 15 hrs, not 1 x 3 days, I assume. Still indicates days more downtime though :(

    The outage isn't just affecting blades. All my servers are SL260.

    Oh, that's very interesting and goes against their narrative. I'm assuming you mean one of the SL160Z systems.

    They've said that this is the result of trying to upgrade the iLO firmware on the older blade servers and something going horribly wrong... but an SL160Z is usually paired with another in a z6000 chassis... and the SL160Z doesn't even have an iLO; it has a "Lights Out 100" option, which has more limited functionality. The LO100 is a traditional IPMI BMC, and does not share hardware or firmware with iLO.

    Picking the wrong iLO firmware wouldn't even matter to an SL160Z, because the updater wouldn't find an iLO to update.
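
    (That distinction is easy to check from the outside: an LO100 answers to plain IPMI, so generic ipmitool works against it with no HPE tooling required. A sketch -- the BMC address and credentials are made up:)

    # Query the LO100 like any standard IPMI BMC over the network.
    # 10.0.0.50 / admin / secret are placeholders.
    ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret mc info       # BMC firmware info
    ipmitool -I lanplus -H 10.0.0.50 -U admin -P secret power status  # chassis power state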

    Anyone else notice that they did that huge push last month to lease out every last discounted and old server, saying they're moving to a new business model and won't be doing budget servers anymore?

    Then all the budget servers go offline and routing now goes to Nashville? Is it possible they were padding out their "Low End" customer portfolio so they could sell it off and move them somewhere else? Would anyone even think they could get away with this?

    That's likely just my paranoia flaring up. Under that scenario they wouldn't have taken down their enterprise-focused Obj.Space service, and that's been down too.
