Comments
Yes, I agree, as the timeframe clearly fits. I just believed they had only the storage services on that Ceph cluster and nothing else... as said before, my non-storage VMs are running smoothly.
As for the mails, it seems that some people are not getting them, so worst case they might be getting blocked by the mail provider? I guess there has been no time to check the bounces yet. Did you receive the usual notification after opening your tickets before this incident?
TLDR: distributed systems are hard.
It was nice to see the provider being prepared to offer an EOL date and refunds.
And my two servers with you are in the 1.8%...? You cannot access my backup server?
Got those, yes. And I don't have a spam filter at all, so maybe it's something else. Whatever, I really hope it is fixed soon.
I don't know too much about these things, but maybe 1% of your VM's data is in that 1.8%, and that would mean corrupted data.
As he said, there is no way of telling which VMs might (partially) access that 1.8%, and therefore he can't turn on the whole storage cluster at all without risking data loss across all of the VMs.
The other way around: if he started bringing the cluster back online, it would do an fsck and most likely erase that 1.8% of data that is not online yet, which might then effectively destroy more than 1.8% of all VMs.
So it's just unpredictable, as no one can tell which VMs are connected to those missing parts...
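A little arithmetic shows why this unpredictability hits nearly everyone. A Ceph RBD disk image is striped across many small objects (4 MB by default), so even a small fraction of missing cluster data touches almost every image. Here's a rough sketch, assuming uniform object placement and the default object size; both are assumptions, not confirmed details of this particular cluster:

```python
# Rough illustration of why 1.8% of cluster data being offline can
# affect nearly every VM. Assumes RBD images are striped into 4 MB
# objects placed uniformly at random across the cluster (assumption).

MISSING = 0.018   # fraction of cluster data offline
OBJECT_MB = 4     # default RBD object size

def p_image_affected(image_gb):
    """Probability that at least one object of an image is in the missing 1.8%."""
    n_objects = image_gb * 1024 // OBJECT_MB
    return 1 - (1 - MISSING) ** n_objects

for size_gb in (1, 10, 100):
    print(f"{size_gb:>4} GB image: {p_image_affected(size_gb):.6f}")
```

Under these assumptions, even a 1 GB image is affected with probability above 99%, which matches the "no way of telling which VMs are hit, so assume all of them" reasoning above.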
Yes, no mails in the spam box either.
BF is only 1 week away, let's see what happens.
@AshleyUk I just need 6-8 GB in zip files from my backup server. I understand your situation, but it's already been 3 days. Give me a solution.
The two VMs I have with you don't exceed 13 GB. It's not a lot of data I'm asking for.
I've been following this as am also affected.
I realise it's not easy when trying to resolve a software issue that's not your own. I have replied to one of the outage updates stating I am happy to wait. Most of the data is backed up, but there are a couple of things on the file system I could do with, either by waiting for the issue to be resolved or by some other method (I don't know much about Ceph or whether disk images can be shared, etc.).
It's a shame for them to EOL the storage, and it's probably not easy for them, but these things happen. I've not had my VPS with zxhost long, but other than the odd bug when rebooting it (that could have been me being impatient and hitting boot too quickly), it's been fine!
I didn't get the latest email for some reason but have had the rest. There were a few long periods between updates, but I get that they're busy.
No, I got a different email. In the email I got it says "You are so handsome that we cannot handle it and therefore we are going to cut off your services as of Jan 2018"
All other providers, time to think of your top Black Friday storage deals. Looks like there will be lots of us in the market.
The servers got stuck by your handsomeness... make sure you never enter a datacenter.
It is sad to see this great service go. I would like to thank Ashley for taking ownership and for being responsible to his customers. Something we don't see much around here...
What part of what @Falzo said did you not understand?
Why stop early January?
The VPS does not work; you should wait until it is working again before announcing when you are going to stop.
I think you can give 60 days, that's reasonable.
It's almost the end of the year. I have several VPS; it will be difficult to find another provider and to make the transfer so quickly.
Take the time to think
It's nice of them to announce it prior to black friday though.
+1, just the right time to grab something new, cheap, and shiny :)
If it's just storage you need I'm still happy with 1Fichier @10€/year
Technically, if they get it fixed this weekend, you'd have 60 days if they shut down at the end of January :P
Francisco
I hadn't really utilised my storage server yet, but I'm sad/disappointed that it will be cancelled, though I understand why. Hopefully a solution can be found beforehand.
This is sad news and a blow to the use of Ceph in the LE world. I suspected the 96€/3yr deal was too good to be true, but not in this way.
Has anyone used any other large-scale Ceph services? I've only sampled DreamObjects -- they are not low-end price -- anyone used them? (Ceph was largely written by DreamHost).
For those who are looking for ultra-cheap storage, remember there's always the cloud archive options:
haha - no :-P
While I use storage as storage, that's not the kind of storage I'm talking about ;-)
I do a lot of rsync or rdiff-backup and such, and need to be 'root' and able to change owners/permissions accordingly...
And even for backups of backups, I don't like crappy space that needs to copy the whole file internally just for a rename, or that can't handle big archives/disk images and the like.
But I remembered you talking about EUserv and finally signed up for their 1TB free FTP tier, to drop off some old Proxmox archives and free up other space to use for my backups later on. So thanks for pointing to that in the first place!
Haha no worries
Sure thing!
Hey @AshleyUK, thanks for your hard work on this. I wonder what the situation with Ceph really is, since OVH and others are using it with fewer issues. Maybe you could consider an alternative like GlusterFS, or whatever @Francisco is doing for his slabs? Otherwise, what are you going to do with all the HDD hardware?
You don't need to do any data recovery heroics with my plans, if that helps. I have a 1TB and a 2TB plan with you. The 1TB is a backup of data that's already well-replicated elsewhere, so if it gets trashed I'll be ok. I hadn't even gotten around to moving stuff to the 2TB yet. There might be some stuff in the primary (non-Ceph?) partition that I'd like to get back, but if I don't remember what it was, it can't be important.
Regarding C14 "Online C14 (2€/T/mo + 10€/T bw)", it has an annoying "feature" that the 10€/T is not just for i/o but per "operation" including deletion. So if you upload a TB of data to it, never restore from it, and eventually delete it, you still pay 20€. And of course it's another 10€ if you do download. Also they insist on billing your credit card monthly instead of letting you pay in advance. That means if there's a sudden problem with your credit card (that happens a LOT with me: a phone call to the card company usually clears it up, but it's a hassle) you can find yourself dealing with drama. So I'm more likely to use OVH Cloud Archive which doesn't charge for deletion and which lets you pay in advance. I do use C14 intensive for some things and I guess it's ok.
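To make that cost structure concrete, here's a toy cost model based only on the prices quoted in the post above (2€/TB/mo storage, 10€/TB per operation, with upload, download, and deletion each counting as an operation). These are the poster's figures, not official pricing:

```python
# Toy cost model for the C14-style pricing quoted in the post above:
# 2 EUR/TB/month for storage, 10 EUR/TB per "operation", where upload,
# download, and deletion each count as an operation (per the post).

STORAGE_EUR_PER_TB_MONTH = 2
OP_EUR_PER_TB = 10

def c14_cost(tb, months, uploads=1, downloads=0, deletions=1):
    """Total cost in EUR for storing `tb` TB for `months` months."""
    ops = uploads + downloads + deletions
    return tb * (STORAGE_EUR_PER_TB_MONTH * months + OP_EUR_PER_TB * ops)

# The scenario in the post: upload 1 TB, never restore, then delete.
print(c14_cost(1, 0))               # -> 20 (operation fees alone)
print(c14_cost(1, 1))               # -> 22 (plus one month of storage)
print(c14_cost(1, 1, downloads=1))  # -> 32 (add a restore)
```

So even a backup you never read back costs 20€/TB in operation fees alone, which is the gotcha the post is pointing out.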
It would be interesting for LET hosts to offer a C14-like service.
GlusterFS can be fickle too, especially if you want to make it secure. Also, it usually isn't the fastest storage on earth.
that thought crossed my mind too...
As I recall, both partitions were on Ceph (with local SSD cache); the idea was that large I/O operations on the big partition wouldn't block the small partition, but they're both network storage.
I also am OK if my data isn't recovered, but I suspect that recovering Ceph is kind of "all-or-nothing"; the 1.8% of OSDs offline could hold data from any number of our VMs, and bringing the cluster online could corrupt everyone's data.
I also have a non-Ceph 1TB plan; it was migrated (temporarily, I hope) to an LXC container in GRA, and is now offline. So I'm not thrilled about that, but it's probably independent of this Ceph issue.
That's a really good point; thanks for the warning!
I'm specifically not documenting what we're basing our platform on, simply because it's been a huge amount of research, development, and testing on our end, and I'm not in any rush to share that work.
Francisco
Ditto.
The last update I heard about the EOL did kind of hint (IMO) that there may be the possibility of a long-term solution, or an alternative solution, but maybe I'm being optimistic.
If that were the case, though, I'd be happy to continue paying until Jan and then use a newer/updated service if a stable solution was found.
I hope you are documenting it, but not making it public?
Goddammit :P
Francisco