
Ramnode down?

sonic Member
edited March 2013 in General

It's not just you! http://ramnode.com looks down from here.

My VPS, the RamNode site, and the client area are all down.

P.S. That was fast, everything is up now :) (~20 mins of downtime)

Comments

  • HalfEatenPie Member
    edited March 2013

    RamNode.com is up for me. So is their client area. And their SolusVM Login page.

    Try checking from a VPN?

    Catalyst Host - Pie Approved!
  • perennate Member
    edited March 2013

    Yes, they posted on Twitter, but for some reason they never sent emails (until later): https://twitter.com/ramnode

  • This page (http://ramnode.com/) is currently offline. However, because the site uses CloudFlare's Always Online™ technology you can continue to surf a snapshot of the site. We will keep checking in the background and, as soon as the site comes back, you will automatically be served the live version.

  • perennate Member
    edited March 2013

    The website seems to have just come back online, at least for me. They said they were switching to nLayer so maybe it takes some time.

    VPS still down.

    Edit@one-minute-later: coming back online now, ~20 minutes downtime.

  • Oh. Yeah that makes sense then.

  • sonic Member

    So fast, everything is up now :)

  • Nevermind, it looks like some areas are still unable to connect. Maybe it's related to the carrier switch.

  • nLayer config was missing something, so the failover never happened correctly. Normally it would not have gone like that at all. We're still investigating.

    RamNode: High Performance SSD and SSD-Cached VPS
    New York - Atlanta - Seattle - Netherlands - IPv6 - DDoS Protection - AS3842
  • Also, I think 13 minutes was the most I saw on any of our 1-min interval monitoring. Most were 4-5 minutes. Not good either way, but that needed to be clarified.

  • Check your upstream hold-down timers in BGP; mine is 3 minutes before it will let go and traffic starts coming in the other way.
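The 3-minute figure above matches common BGP defaults: a keepalive every 60 seconds and a hold time of three keepalives. A minimal sketch of that arithmetic, assuming those defaults (actual timers are negotiated per session, so your values may differ):

```python
# Common BGP timer defaults (assumed; real values are negotiated per session)
keepalive_interval = 60             # seconds between KEEPALIVE messages
hold_time = 3 * keepalive_interval  # peer declared dead after 180 s of silence

# Worst case before routes are withdrawn and traffic fails over:
# the link dies just after a keepalive arrived, so the full hold
# time must elapse before the peer tears the session down.
worst_case_failover = hold_time
print(worst_case_failover)  # 180 seconds, i.e. the "3 minutes" mentioned above
```

With a lower hold time the failover is faster, at the cost of flapping the session on brief congestion.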

  • At the new E5 node there are some I/O issues. Is it related to the array rebuild?

  • This might be a helpful page for the OP in the future: http://status.ramnode.com/

  • shovenose Member
    edited March 2013

    It's reassuring that Nick_A is a trustworthy, well-regarded host, so we know it's not a deadpool. Thanks for keeping everyone, even non-customers like me, so well updated! @Nick_A it's communication during a crisis, not the presence or absence of a crisis, that really determines how good a host is.

  • @shovenose said: Everything is down :(

    Nothing is down as of my previous posts. That status page checks every 5 minutes because of Pingdom false positives.

    @yomero said: At the new E5 node there are some I/O issues. Is it related to the array rebuild?

    Yes. I was working on it when all this network stuff happened.

  • @Nick_A Ninja'd

  • Some ISPs are having trouble getting in over nLayer. Working on it.

  • Good luck @Nick_A.

  • Seems to have cleared up now. Please open a ticket if not.

  • An announcement regarding the network drop has been posted in the Client Area. Thanks for your patience and continued support.

  • Jar Member

    I was wondering why munin texted me about 5% disk usage in Lenoir :P

  • Neo Banned

    I got an outage, but only for 12 minutes, and I have 60 days of uptime, so I don't care ;D

  • perennate Member
    edited March 2013

    @Nick_A said: Also, I think 13 minutes was the most I saw on any of our 1-min interval monitoring. Most were 4-5 minutes. Not good either way, but that needed to be clarified.

    It was fluctuating on and off for five minutes (first issues at 12:46 am EDT) before going offline for thirteen minutes (at 12:51 am EDT). So the total downtime is something like eighteen minutes. Thirteen minutes may be the longest single interval it was down, but really it was having issues for eighteen minutes (~20). I count that as downtime because the service is useless to me if it drops for even twenty seconds, even if that only happens once an hour.

  • Hi Nick,

    Is ATLCVZE5-1 still down? I can't seem to power up my VPS. I hope another disk didn't crash while the RAID array was still rebuilding.

  • yomero Member
    edited March 2013

    @wshyang said: Is ATLCVZE5-1 still down?

    I don't know, but my VPS got moved to an E3 node. The server didn't auto-boot (LOL), so I did it manually =/ It's working fine now. The node just seems a little crowded, but maybe that's only until Nick recovers the E5 node.

  • Corey Member

    @wshyang said: Hi Nick,

    Is ATLCVZE5-1 still down? I can't seem to power up my VPS. I hope another disk didn't crash while the RAID array was still rebuilding.

    Those types of things always seem to happen at the worst time :(

    BitAccel - OpenVZ VPS / IRC,VPN,Anything Legal & Unrivaled Support!
  • Noticed this too; it seems to have stabilised, however. It started about 12 hours ago.

    Systems Administrator | IWFHosting

    Comments expressed are solely my own opinion and not of that of the companies, unless stated.

  • Yeah ATLCVZE5-1 died overnight. I am manually restoring every VPS that was backed up yesterday (almost all of them) to other nodes right now.

  • @Nick_A a bit off topic, but do you know of any issues with ping lately? I've got a monitoring agent on my VPS and it times out all the time. This is a VPS in Las Vegas, but I've had issues with Dallas and NY lately too. (5-min intervals, so blank spots are where nothing was gathered due to not being able to reach it.)

    http://share.talkingtontech.com/2013-03-20_1603.png

  • If you'll please open a support ticket, I can assist you. I can't do much from here.

  • @Nick_A yep, figured as much; just wanted to see if there were any known issues, as it's been pretty frequent lately. Ticket created.

  • @Nick_A We're running the Solus panel from you; can you tell me the point of data loss, so I can manually recreate entries in Solus for clients, etc.?


  • @eastonch said: We're running the Solus panel from you; can you tell me the point of data loss, so I can manually recreate entries in Solus for clients, etc.?

    I believe it was around 6 AM EDT.

  • Noxter Member
    edited March 2013

    The 2 nodes I am on had 231s of downtime, pinged every 5 seconds.

    The real downtime was minimal... Pingdom is not a good way to judge real uptime/downtime. It was really ~4 minutes.

  • @Noxter said: The real downtime was minimal... Pingdom is not a good way to judge real uptime/downtime. It was really ~4 minutes.

    Depends on which node you're on.

  • Noxter Member
    edited March 2013

    @perennate said: Depends on which node you're on.

    I assume we're talking about the network dropping out and nLayer not kicking over? How does that depend on which node you're on?

  • Maounique Member
    edited March 2013

    He means some VMs were on a node that died and experienced extended downtime while they were moved to a new home; therefore downtime depends on the node too, though that is unrelated to the network issue.

    Who's General Failure, and why is he reading my drive A: ?

  • Right. I checked their Twitter; I wasn't aware of the failed array earlier today. I was speaking about the network drop, which a few people made seem longer than it really was.

  • perennate Member
    edited March 2013

    @Noxter said: Right. I checked their Twitter; I wasn't aware of the failed array earlier today. I was speaking about the network drop, which a few people made seem longer than it really was.

    All I can say is that all of my RamNode VPSs were having network issues for five minutes and then were down for thirteen, so that's eighteen minutes in total. I don't know what happened with the node(s) your VPSs are on. I'm not making anything seem longer than it really was; maybe you just didn't have the same issue. I'm not sure why you're trying to say my VPS was inaccessible for four minutes when I know it was offline for longer than that.

    This is based on being SSH'd into the VPS at the time (freezes of thirty seconds or so during the first five minutes, and then it went down for thirteen), on watching connections drop from the running services at the same time as the SSH problems (meaning it wasn't just me), and on uptime reports from one of the VPSs showing it was unable to ping every external location: on and off for the first five minutes, then continuously through the thirteen.

    (Not saying that this is bad or anything; this is pretty rare for RamNode. Just saying that it was more than eighteen minutes, not four or thirteen; and it's irritating that "a few people" are claiming this is incorrect without full information.)

    @Noxter said: Pingdom is not a good way to judge real uptime/downtime. It was really ~4 minutes.

    Irrelevant; my eighteen minutes wasn't based on Pingdom.

    @Noxter said: How does that depend on which node you're on?

    How am I supposed to know?

    Edit: oh, and I submitted a ticket at 0:46, and the service didn't come back until approximately 1:03 (seventeen minutes). So unless you're saying I predicted the service disruptions...

  • Noxter Member
    edited March 2013

    Negative.

    I have 2 VPSs with Nick, each on a different node. Both display the same amount of downtime... and I wonder why... a main network issue in a rack is going to affect the rack as a whole.

    All times below are EDT and in the AM:

    12:44:42 - 12:45:37 (timeout)
    12:46:03 - 12:46:22 (timeout)
    12:48:03 - 12:48:08 (timeout)
    12:48:22 - 12:48:28 (timeout)
    12:49:37 - 12:49:44 (timeout)
    12:50:07 - 12:51:12 (timeout)
    12:58:38 - 01:00:47 (timeout)

    There you go, down to the seconds. About 4 minutes of real downtime.
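For anyone checking the arithmetic, the timeout windows above can be totalled with a short script (a sketch; the interval endpoints are taken verbatim from the post, with "12:xx" treated as just after midnight):

```python
# Timeout windows reported above (EDT, all a.m.; "12:xx" is just after midnight)
windows = [
    ("12:44:42", "12:45:37"),
    ("12:46:03", "12:46:22"),
    ("12:48:03", "12:48:08"),
    ("12:48:22", "12:48:28"),
    ("12:49:37", "12:49:44"),
    ("12:50:07", "12:51:12"),
    ("12:58:38", "01:00:47"),
]

def to_seconds(t):
    """Convert h:m:s to seconds since midnight (12 a.m. maps to hour 0)."""
    h, m, s = map(int, t.split(":"))
    return (h % 12) * 3600 + m * 60 + s

total = sum(to_seconds(end) - to_seconds(start) for start, end in windows)
print(total)  # 286 seconds of hard timeouts in total
```

Summed this way, the hard-timeout windows come to 286 seconds (just under five minutes), in the same ballpark as the 231 s and ~4 minute figures quoted in the thread.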

  • Another thread where a user reports downtime on a forum rather than to the host's support, meh.

    Relax brah
  • perennate Member
    edited March 2013

    That's sixteen minutes of network issues. Anyway, as I already stated, I had eighteen minutes of network issues. I don't know why you had less downtime, but I have three different sources of information (SSH, an uptime tracker, and services running on the VPS itself) that all say the same thing.

    Edit: saying that sixteen minutes of on-and-off connectivity is only four minutes of downtime ignores the fact that some services require continuous connectivity. I already said that I was counting the entire period of network issues as downtime.

    @perennate said: I count that as downtime because the service is useless to me if it drops for even twenty seconds, even if that only happens once an hour.

    There's no such thing as "real downtime"; it depends on what you're running. Anyway, your measurement of downtime is completely useless to me.

    So, like I said, there was approximately twenty minutes of downtime.

    Edit2: oh, and Wikipedia:

    Downtime or outage duration refers to a period of time that a system fails to provide or perform its primary function.

    My VPS failed to "provide its primary function" for the entire eighteen minutes, not just the four to six minutes when it was completely inaccessible. (Not to mention that, I assume because of the switchover, some locations still couldn't connect for another ten-plus minutes afterwards.)

  • Lol... what compiled you to bold your statement? I'm curious.

  • perennate Member
    edited March 2013

    I didn't compile anything, I just used Markdown ;)

    Edit: I don't think I was compiled either.

  • Err, compelled*

    The fact of the matter is it was 16 minutes of network instability/packet loss and 4 minutes of real downtime. Don't piss on Nick's company because you think 16 minutes of network inconsistency was downtime.

  • perennate Member
    edited March 2013

    a) I never pissed on RamNode; b) you clearly still have a misconception about the definition of downtime.

  • Can someone close this thread now? :) The network has been stable for a while, and the RAID problem is unrelated and only affects 30 people (all of whom have been restored or given new VPSs).
