
Quickweb Drive Failure

zaplus Member
edited February 2012 in General

How long will it take to replace a RAID drive? One RAID drive in my QuickWeb German VPS failed yesterday. For the first few hours it was online but very slow; then they took it offline to replace the drive. Now they're telling me fsck is still running... more than 24 hours now....
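For anyone in the same position who does have shell access to the box doing the work: on Linux software RAID, /proc/mdstat reports rebuild progress with an estimated finish time, and e2fsck can print a progress bar. A minimal sketch, assuming an mdadm array at /dev/md0 with ext3/ext4 on it (the thread never says what QuickWeb actually runs):

    # Rebuild progress and estimated finish time for an mdadm array:
    cat /proc/mdstat
    # The recovery line looks something like:
    #   [=>...................]  recovery =  8.5% (...)  finish=137.2min

    # e2fsck's -C 0 flag prints a completion bar while it checks:
    fsck.ext4 -C 0 /dev/md0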

Comments

  • @DotVPS said: Depends what size the disk is

    +1 on that.

    Additionally, some RAID controllers let you set a "priority" for rebuilds: how much bandwidth the rebuild gets versus concurrent I/O. A rebuild in progress will make things slow, too; see the sketch below.
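    For what it's worth, the Linux software-RAID equivalent of that priority knob is a pair of sysctls; hardware controllers have their own vendor tools for the same thing. A sketch, assuming mdadm (not necessarily what QuickWeb runs):

        # Rebuild throttle for mdadm arrays, in KB/s per device:
        sysctl dev.raid.speed_limit_min   # floor: the rebuild always gets at least this
        sysctl dev.raid.speed_limit_max   # ceiling: the rebuild never takes more

        # Favour live guest I/O (slower rebuild) by lowering the ceiling:
        sysctl -w dev.raid.speed_limit_max=10000
        # Or finish the rebuild fast at the cost of guest latency:
        sysctl -w dev.raid.speed_limit_min=100000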

  • zaplus Member
    edited February 2012

    @DotVPS and @Damian

    Thank you... anyway, I had an rsync copy on another VPS, so I moved my site over yesterday.

    My current DNS TTL is 4 hours, so propagation took some time. Is it OK if I change it to 1 hour? Will that put a heavy load on the DNS server (BIND on a 512MB Xen VPS)?
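    On the load question: caching resolvers re-query roughly in proportion to 1/TTL, so dropping from 4 hours to 1 hour means roughly 4x the repeat queries for cached names, which is still trivial for BIND on a 512MB box. The change itself is one line in the zone file; a sketch with placeholder data (example.com and the address are not from this thread):

        ; drop the zone's default TTL from 4 hours to 1 hour
        $TTL 3600                  ; was 14400 (4 hours)
        @    IN  SOA ns1.example.com. hostmaster.example.com. (
                     2012021401    ; serial -- bump on every edit
                     7200          ; refresh
                     900           ; retry
                     1209600       ; expire
                     3600 )        ; negative-caching TTL
        www  IN  A   203.0.113.10

    Reload with rndc reload afterwards, and remember the old 4-hour TTL still has to age out of resolver caches before the faster propagation takes effect.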

  • Francisco Top Host, Host Rep, Veteran

    Rebuilds really shouldn't require reboots. That isn't always 100% true: we've had cases where a bad drive wasn't thrown out of the array until a complete hard reset happened.

    Normally, though, you can just drop a drive in and it rebuilds onto it. The only time I can see it not playing out like this is if they're using some odd software RAID setup (AHCI supports hot-swap, so even software RAID shouldn't need downtime for the swap itself).

    Francisco
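    The drop-in-and-rebuild flow described above looks roughly like this with mdadm; a sketch assuming the failed disk is /dev/sdb in array /dev/md0 (device names are placeholders, not anything from this thread):

        # Mark the dying disk failed and pull it from the array:
        mdadm --manage /dev/md0 --fail /dev/sdb1
        mdadm --manage /dev/md0 --remove /dev/sdb1

        # Physically swap the disk (AHCI hot-swap allows this without a reboot),
        # copy the partition table from a surviving disk, and re-add:
        sfdisk -d /dev/sda | sfdisk /dev/sdb
        mdadm --manage /dev/md0 --add /dev/sdb1

        # Watch the rebuild run:
        watch cat /proc/mdstat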

  • marrco Member
    edited February 2012

    QW Germany has really slow disks. I doubt those are RAID 10.

    This is what I get on my DE QuickWeb VPS (384MB OpenVZ):

        ioping -R .
        --- . (simfs /dev/simfs) ioping statistics ---
        249 requests completed in 3020.7 ms, 90 iops, 0.4 mb/s
        min/avg/max/mdev = 0.1/11.1/308.0/22.6 ms

    And this is what I get on the smallest VPS I have (a 96MB OpenVZ from SecureDragon); it's way faster:

        ioping -R .
        --- . (simfs /dev/simfs) ioping statistics ---
        2600 requests completed in 3000.7 ms, 7038 iops, 27.5 mb/s
        min/avg/max/mdev = 0.1/0.1/84.1/2.3 ms

    But at least my node is still up. How can I tell whether they are using software or hardware RAID, or just an old SATA disk?

    (Of course, before writing here I sent a mail to QW, so they do know about their slow I/O performance, but it's something they consider OK on LEBs.)
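    From inside an OpenVZ container (simfs) you generally can't see the host's storage at all, so asking the provider is the only real option. On a box where you do have host-level access, a rough first check might look like this (standard Linux tools, nothing QuickWeb-specific):

        cat /proc/mdstat        # active mdX arrays => Linux software RAID
        lspci | grep -i raid    # lists hardware RAID controllers, if any
        smartctl -i /dev/sda    # a bare SATA disk reports its real model here;
                                # behind hardware RAID you typically see the
                                # controller's logical volume instead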

  • Now two days are over... they tell me they are setting up a new VPS and trying to recover data from the old one... 48 hours of downtime... never expected this from QuickWeb....

  • Francisco Top Host, Host Rep, Veteran

    @zaplus said: Now two days are over... they tell me they are setting up a new VPS and trying to recover data from the old one... 48 hours of downtime... never expected this from QuickWeb....

    I'll be 100% honest: it doesn't sound like a RAID 10.

    It's possible they lost two drives in the RAID 10, but if they keep saying "drive" in the singular, then I'm gonna o_O at it.

    I could be wrong; I hope I'm wrong.

    Francisco

  • @zaplus, why don't you just ask them to give you a new VPS so you can reinstall from backups?

    Btw, in the last 48 hrs my ioping stats have been getting worse; I hope they are not moving all your customers onto the node I'm on. Disk latency is already a problem on their German VPSes; I had to move a few websites away for this exact reason.
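    If you want hard numbers on "getting worse" rather than an impression, something like this logs a latency summary every five minutes (a quick sketch; ~/ioping.log is an arbitrary path):

        # Append a timestamped ioping summary to a log every 5 minutes:
        while true; do
            date -u +%FT%TZ >> ~/ioping.log
            ioping -c 5 -q . >> ~/ioping.log   # -q prints only the summary lines
            sleep 300
        done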

  • Finally they provided a new server with the data restored from the old drive... anyway, the new server is Sandy Bridge...
