New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Sorry, as long as you see that 'Could not create PVE2_API object' message in the client panel for your server, there is also no way to access the Proxmox GUI itself, because of the missing session...
That is so bad. I am also having the same issue. Any further updates? How do we get this resolved? I have opened a ticket, but I'm not sure that will help.
The last statement was from yesterday around noon; see @Mr_Tom's comment above:
I doubt there will be much to see soon while they are working on it. Because of the huge amount of data, anything they try will take time, and you'll probably only see whether it really helped afterwards.
Tickets won't help or get you to a result any sooner; the backlog is probably as long as the way to the moon right now, and Ashley most likely still has another (real) life to handle as well (eating, sleeping, working, etc.).
The cluster is probably still rebuilding ....
I salvaged the kvm command in the error message from the events history before I, as well, reverted to the 'Could not create PVE2_API object' message.
From this it can be seen that the 1TB disk is erasure coded (something like RAID 5 or 6).
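For anyone unfamiliar with the term: erasure coding splits an object into k data chunks plus m parity chunks, so any m chunks can be lost and rebuilt from the survivors (RAID 5 is essentially k data chunks plus one XOR parity chunk). A toy single-parity sketch in Python, purely illustrative; real Ceph EC profiles use Reed-Solomon-style codes with configurable k and m, not this:

```python
# Toy single-parity erasure coding (RAID-5-like): k data chunks + 1 XOR
# parity chunk. Losing any ONE chunk is recoverable. Illustration only;
# this is NOT how Ceph actually implements erasure coding.

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal chunks and append one XOR parity chunk."""
    data = data + b"\x00" * ((-len(data)) % k)   # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks + [bytes(parity)]

def recover(chunks: list, missing: int) -> bytes:
    """Rebuild the single missing chunk by XORing all surviving chunks."""
    size = len(next(c for c in chunks if c is not None))
    out = bytearray(size)
    for idx, chunk in enumerate(chunks):
        if idx == missing:
            continue
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

chunks = encode(b"hello ceph!!", 3)   # 3 data chunks + 1 parity chunk
lost = chunks[1]
chunks[1] = None                      # simulate one failed disk
assert recover(chunks, 1) == lost     # rebuilt from the other three
```

With real Reed-Solomon codes (m parity chunks) you can lose up to m chunks at once, which is why an EC pool survives disk failures until a bug, rather than hardware, takes out the redundancy.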
BTW mine is number 5xx but was running on node cn02 - so @Falzo might be quite close with his earlier estimate of the cluster size.
My guess is the storage VMs are provisioned on the storage machines, which are now busy rebuilding the cluster. It might make sense to repurpose a 10Gbit uplink to boost the storage network while this is going on, and also to stop any machines hanging in the bootloader and burning CPU in the meantime. A machine hanging in GRUB can max out a CPU.
Now a daily update to all would have been nice - I didn't get the last one either - you might say:
"An update a day makes the customer stay"
and might also prevent a stream of tickets asking for updates etc.
To those considering power-shopping on BF: keep multiple copies, distributed over multiple datacenters and also multiple providers.
That being said, I see no reason not to continue with Ashley as one of them. The hardware is there, and the storage could be reconfigured to a different form of network storage, or maybe an older, more stable release of Ceph.
It will be interesting to see what Ashley comes up with.
Yeah I've had an update this afternoon saying the OS pool is 100% but the storage pool is still working, with no 100% ETA.
Didn't get that either ... :-(
OK, so you are the lucky one currently receiving updates - please do keep us posted ;-)
Thanks !
There have been updates I haven't received though for some reason. Group effort going on in here lol.
+1
I got the first mails but stopped receiving anything over the last two days...
@AshleyUK could simply post updates on the network status page, but he stopped doing that a week ago. I hope I'll be able to recover my data and move to more reliable hosting.
While I agree it's annoying to sit and wait, would it change the situation in any way if he posted 'still working on it' once an hour?
If you want to move on, what has stopped you from getting a server somewhere else and restoring from your backups there? You could have had your stuff up and running again days ago...
Register zxhost.faith and use time(now) as an update for "In progress.". I'm not wasting $2 on that. :P
Well, the keyword is "backup". Most don't have one, since their data isn't worth enough to bother.
Well, if we end up with all the data in the cluster recovered, I would call it reliable storage.
If I had to choose between reliable ("it might take us a couple of days to recover from this disaster") and available ("just pray nothing goes wrong, or your backups are toast"), I'd take reliable any day. Available I can do locally with a single drive.
Anyway, this is a bug in the latest stable version of Ceph, and Proxmox is pushing that release...
... so I would say Ashley is not the one to blame!
In one of the e-mails I did get he wrote:
"By nature CEPH is built to be redundant, however it seem's due to a lack of QA on a recent release this has caused us to loose the redundant protection due to a software wide bug."
That being said it might be wise to have your second backup on something different than ceph.
could you post the message received this afternoon please ?
I don't need that. I understand Ashley is really busy right now, but I just want daily official updates about this problem somewhere, like "today we did x, tried y, recovered z". No details, just short info about the current status, taking five minutes of his time. If I didn't know about this forum, I still wouldn't know that all data will (probably?) be recovered. And I am sure there are customers with pending support tickets who don't know about this thread.
I don't care how long it takes to fix; I just have some data there I really don't want to lose. Lesson learned, though: it is not going to happen again, I'm just gonna rsync everything next time.
I used zx's storage plan for all my data. A couple of days before this issue happened, I suddenly reconsidered the risk of backing up to a single node and instantly moved everything I had to Google Drive... then this happened...
The most annoying thing is that the last mail is dated 2017/11/20, and there has been no news since then.
No answer on the ticket, nothing.
Is there anyone on Zxhost with a VPS still online?
The 20th is also the last mail I got. However, there have been later mails received on the 21st and the 22nd. I didn't get these either... So it seems like there was a switch of mailing list at some point...
So we assume the cluster is still rebuilding - see earlier posts...
Damn... I am curious whether cashflow wasn't a problem from the very beginning.
I don't really care about the pro-rata refund. I just want to know when I will get my data back.
Looking at this mail I don't think it will be possible...
My understanding from the latest email is that the data has now gone. Is that the correct assumption?
Surely if 23media have just cut access as a non-payment measure, the stuff is still there for now?
I don't know if they colocate there, whether that means the servers are zx's or 23media's, etc. But don't providers usually just restrict access as a first step?
I think unless your VM is currently running, the data is gone. Fine by me, as the data I held there wasn't vital. And as everyone else has said, you would be stupid to hold your only copy on a dirt-cheap deal.
I don't quite understand: their provider took their environment offline, and the data? Is he working on recovering the data from that environment? Mine is with 23media GmbH; is that Telehouse Frankfurt?
That email only gives a solution for, and I quote, the "small amount of LET Specials that had not been migrated from Hetzner yet at this time".
Oh I know I have my data elsewhere. I was just replying to the previous comment. Should have probably quoted!
All my data lost??????!!!!!!!
Yes!!!!!
That's not actually been said...
Your data is now gone; most probably all the Ceph cluster disks have already been reformatted and given to new customers. You will get nothing back.
I worked with 23media before and I know their billing department; they are very kind people and will wait at least two months if you tell them about your payment problems. But if you promise to pay by an exact date and don't follow through, they will shut everything down and cut you off. They are also a company, not a charity.
Sorry to all about that. Once, last year, I heard that zxhost was using Ceph as their storage backend with Proxmox; I was really interested and bought a 1TB plan myself. Nothing important on there for me, only some Japanese action movies which I have already forgotten.
I am really sorry to see a company go down like that because of some "trustworthy" open-source software.
This storage business is very interesting: if you want to be successful in it, you need some other kind of failsafe beyond these somewhat "corporate"-looking systems. Maybe if someone offered plain mountable remote NFS servers, it would be more than enough and much easier to maintain than these complete cluster filesystems etc.
Now it's time for you to go and buy some cold water, drink to your most valuable data, and move on...