
CrashPlan no longer offers CrashPlan for Home subscriptions

mikho Member, Host Rep

Important Changes to CrashPlan® for Home Service

Effective August 22, 2017, Code42 will no longer offer new – or renew – CrashPlan for Home subscriptions, and we will begin to sunset the product over several months.
CrashPlan for Home will no longer be available for use starting October 23, 2018.

At Code42, protecting your data is important to us. As we shift our business strategy to focus exclusively on enterprise and small business segments, you have two great options to continue getting the best backup solution.

Please read on to determine how this change will impact your CrashPlan for Home subscription and what steps you should take to ensure a smooth transition to a new backup solution.

https://www.crashplan.com/en-us/consumer/nextsteps/

Thanked by 1 raindog308

Comments

  • raindog308 Administrator, Veteran

    I've had the #1 feature request for CrashPlan for 7 years (throttle bandwidth by time of day). In fact, it's been listed as "planned" for 7 years. Hundreds of comments saying "we want this". People tweeting it at CrashPlan saying it's been 7 years...

    Today it's gone.

    I'm starting to wonder if it was ever really planned...

    Thanked by 1 yomero
  • rm_ IPv6 Advocate, Veteran
    edited August 2017

    For me it didn't use any bandwidth to speak of; it was mainly CPU-limited (on an 8-core 4GHz machine), thanks to its terrible, bloated Java client.

    As for migration, they either want you to pay 2x as much (migrating to "Business") or move to a different company and different software, re-uploading all your backups?

    What a load of bullshit.

    Thanked by 1 yomero
  • WSS Member
    edited August 2017

    Looks like their personal plans...

    crashed.

  • So the guy who said this would happen on Reddit in early June was correct. I figured it was just bullshit because he said it would be done by the end of June.

    Thanked by 2 Rhys quicksilver03
  • For home computer backup, we recommend our exclusive partner, Carbonite. Carbonite offers simple, secure cloud backup for computers. Subscriptions include free, award-winning customer support, 7 days a week.

    So it's free when I pay for it? Waow.

    Thanked by 1 WSS
  • WSS Member

    Carbonite went to shit a few years ago. Your best bet is just managing your own offsites.

  • Harambe Member, Host Rep

    @WSS said:
    Carbonite went to shit a few years ago. Your best bet is just managing your own offsites.

    +1. And if you want some great software to encrypt + handle the uploads for you: https://www.arqbackup.com

  • Tom Member

    Someone posted on Reddit this was going to happen a few months back. It did. I am not surprised.

    Using Backblaze myself at the moment, but I'll probably want to switch from that too.

    Thanked by 1 maldovia
  • raindog308 Administrator, Veteran

    Carbonite doesn't support Linux.

    Backblaze doesn't support Linux.

    Thanked by 1 flatland_spider
  • Trav Member

    Finding a good backup client for Linux is now even tougher; CrashPlan wasn't the best, but at least it worked.

  • mikho Member, Host Rep

    If the backup client was coded in Java, would that be an acceptable solution?

    I know it's not the best, but it works.

    The 30-day deletion policy of Backblaze and Carbonite is a deal breaker. I've discovered accidental deletions a year or more later (several times) and been able to restore the files with CrashPlan. This is a major bummer, but at least they gave us time to find another solution.

    Thanked by 1quicksilver03
  • Backblaze B2 is a good option that might even be cheaper than regular Backblaze if you don't have a lot of data. Duplicity and its GUI Deja Dup are good Linux clients, from what I've heard.
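
    For anyone wanting to try that combination, here is a minimal sketch of a Duplicity-to-B2 run. The bucket name, key ID and application key are placeholders, and this assumes a Duplicity build that includes the b2:// backend:

        duplicity /home/user b2://KEY_ID:APP_KEY@my-backup-bucket/laptop

    Restores go the other way round: duplicity restore b2://KEY_ID:APP_KEY@my-backup-bucket/laptop /tmp/restore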

  • William Member
    edited August 2017

    Well well well, what do you guys expect?

    I've said it for years and it never changed: a business with unlimited storage is simply not sustainable. Basic math shows it, even with dropping HDD (and overall HW) prices.

    And if everyone starts to encrypt locally, that prevents server-side deduplication - assuming you could even run it, given the insane memory pool it needs at AWS scale - which kills the service in mere months, even on AWS...

    mikho said: If the backup client was coded in Java, would that be an acceptable solution?

    I know it's not the best, but it works.

    CrashPlan is Java - and absolutely useless because of it.

  • WSS Member

    @William said:
    Well well well, what do you guys expect?
    And if everyone starts to encrypt locally this prevents server side deduplication - if you are able to by insane size of the memory pool as AWS - which kills the service in merely months, even on AWS....

    Because localized incrementals are so fucking hard if you trust time_t?

  • WSS said: Because localized incrementals are so fucking hard if you trust time_t?

    No, but the server side cannot enforce this, so it's not a valid point to use when calculating operating costs.

  • WSS Member

    @William said:

    WSS said: Because localized incrementals are so fucking hard if you trust time_t?

    No, but the server side cannot enforce this, so it's not a valid point to use when calculating operating costs.

    You can, if it's built in?
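
    For what it's worth, the client-side half of this exchange really is small. A minimal Python sketch of mtime-based (time_t) change detection - the marker-file path is hypothetical, and, per William's objection, nothing here is verifiable server-side:

        import os

        STATE_FILE = "/var/backups/last-run"    # hypothetical marker touched after each upload

        def changed_since_last_run(root):
            """Yield files modified since the previous backup finished."""
            try:
                last_run = os.path.getmtime(STATE_FILE)
            except FileNotFoundError:
                last_run = 0                     # no marker yet: treat everything as changed
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    if os.path.getmtime(path) > last_run:
                        yield path

        # After a successful upload, touch the marker so the next run is incremental:
        # open(STATE_FILE, "a").close()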

  • mikho Member, Host Rep

    @William said:

    mikho said: If the backup client was coded in Java, would that be an acceptable solution?

    I know it's not the best, but it works.

    CrashPlan is Java - and absolutely useless because of it.

    There are other alternatives, like ahsay.com.
    They even have a partner list on their site, sorted by country.

    We sold it at my previous job.

  • raindog308 Administrator, Veteran

    William said: I've said it for years and it never changed: a business with unlimited storage is simply not sustainable. Basic math shows it, even with dropping HDD (and overall HW) prices.

    In CP's case, I think it was only semi-unlimited because their client never pushed anything close to pipe speed. I think it topped out at 1 or 2Mbps. So in a sense, it was limited by the fact that CP would only accept so much so fast - i.e., the "limit" was 1Mbps sustained 24x7, which works out to roughly 4TB per year.

    But yeah, overall you're right, especially as CP had much more generous retention policies.

    And if everyone starts to encrypt locally, that prevents server-side deduplication - assuming you could even run it, given the insane memory pool it needs at AWS scale - which kills the service in mere months, even on AWS...

    Weirdly, CP supported encryption from the start, and I think you could specify your own key also. I never understood how that blended with their dedupe strategy.
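
    One textbook way to blend client-side encryption with dedup - no claim that Code42 actually did this - is convergent encryption: derive the key from the content itself, so identical plaintexts encrypt to identical ciphertexts and still dedupe server-side. A toy Python sketch (the "cipher" is a deliberately fake SHA-256 keystream, for illustration only, never for real data):

        import hashlib

        def convergent_key(plaintext: bytes) -> bytes:
            # Key depends only on the content, so two customers with the
            # same block derive the same key.
            return hashlib.sha256(plaintext).digest()

        def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
            # Deterministic toy stream cipher: SHA-256 in counter mode.
            out = bytearray()
            for i in range(0, len(plaintext), 32):
                pad = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
                out += bytes(p ^ k for p, k in zip(plaintext[i:i + 32], pad))
            return bytes(out)

        block = b"same block on two customers' machines"
        c1 = toy_encrypt(convergent_key(block), block)
        c2 = toy_encrypt(convergent_key(block), block)
        assert c1 == c2   # identical ciphertexts -> server can still dedupe

    The known trade-off: anyone who already holds a file can test whether you have it stored (a confirmation-of-file attack).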

  • @William said:
    CrashPlan is Java - and absolutely useless because of it.

    CrashPlan's other software, for Small Businesses, is an Electron app, which is about as bad as it gets today. Those who take them up on their offer will end up missing the Java app (which, as a former Java developer, I found fairly easy to configure and tune).

    Thanked by 1 flatland_spider
  • raindog308 Administrator, Veteran

    My experience with CP was:

    (1) If you have any kind of sizable data, whatever is backing up needs massive RAM. I had a TB or so and had to add 16GB of RAM to my file server, otherwise the client died constantly. For perspective, it was an i5 with 8GB of RAM, so feeding 1Mbps to CP should have been easy, but... it needed a ton of RAM.

    (2) Every so often it would go into some maintenance mode during which no restores could be done. For days.

    (3) Eventually I realized the issue was the number of files. I took some of my archival stuff and tarred it up, and CP was happier...

    (4) ...until it upgraded or patched itself. One thing it always did was overwrite run.conf, which removed my "use 16GB of RAM" parameters, which meant that CP promptly crashed. I eventually had to write a "run.conf has been modified" monitor (a sketch of one follows this list)... such silliness.

    (5) Support was an awful group of script-readers. Even if I loved CP, I sure wouldn't use them for business.
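
    The monitor mentioned in (4) only needs a few lines. A sketch, assuming a known-good copy of run.conf has been saved beside the real one - both paths are guesses at a default Linux install:

        import hashlib, shutil, time

        CONF = "/usr/local/crashplan/bin/run.conf"        # assumed install path
        GOOD = "/usr/local/crashplan/bin/run.conf.good"   # hypothetical known-good copy

        def digest(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        while True:
            if digest(CONF) != digest(GOOD):
                shutil.copy(GOOD, CONF)    # an upgrade clobbered it: restore the JVM flags
                print("run.conf was overwritten; custom settings restored")
            time.sleep(300)                # re-check every five minutes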

  • SpiderOak is offering an unlimited plan (100000000 TB) for $250/year.

  • @mtsbatalha said:
    SpiderOak is offering an unlimited plan (100000000 TB) for $250/year.

    Do you have a link where that's offered? I only see the 5 TB for $279/year or $9/person for unlimited with a 10-person minimum on their site.

  • @jaden said:

    @mtsbatalha said:
    SpiderOak is offering an unlimited plan (100000000 TB) for $250/year.

    Do you have a link where that's offered? I only see the 5 TB for $279/year or $9/person for unlimited with a 10-person minimum on their site.

    PM me....

  • William Member
    edited August 2017

    WSS said: You can, if it's built in?

    You cannot: the user has local root and can manipulate the client into sending a full backup, which the other side - because of the encryption - cannot verify as being incremental data only (for all they know, that much really could have changed).

    Server-side deduplication of files, however, cannot be manipulated or circumvented, unless you encrypt locally.

    Lastly, you forget ONE MAJOR point - server-side deduplication can be done across all customers, so 17 million Windows 10 installations only take up... 1...

    Thanked by 1 WSS
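
    William's cross-customer point is easiest to see in a toy content-addressed store - a sketch of the general technique, not a claim about how Code42 built theirs:

        import hashlib

        class BlockStore:
            """Toy content-addressed store: identical blocks from any
            customer are kept once, keyed by their SHA-256 digest."""
            def __init__(self):
                self.blocks = {}                        # digest -> block bytes

            def put(self, block: bytes) -> str:
                digest = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(digest, block)   # store only if new
                return digest

            def get(self, digest: str) -> bytes:
                return self.blocks[digest]

        # Two customers upload the same Windows system file: one copy stored.
        store = BlockStore()
        a = store.put(b"identical system file contents")
        b = store.put(b"identical system file contents")
        assert a == b and len(store.blocks) == 1
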
  • William Member
    edited August 2017

    raindog308 said: Weirdly, CP supported encryption from the start, and I think you could specify your own key also. I never understood how that blended with their dedupe strategy.

    From what I gathered when I tried it, this is done, sort of, locally - which is partly why the client need(s|ed) that much RAM. Basically what @WSS described, in a far worse implementation, and on top of that based on... Java...

    This can (and, technically, does) work in a correct implementation (though there is no way to make it 100% manipulation-safe client-side with encryption), but it seems very hard to do code-wise - like a year-long project for a good team, and that would not include any useful GUI or mobile apps.

    I do have a theory of how it works, and we just did a flowchart of what would be needed component-wise, but all of it looks very expensive in the hardware needed to keep the dedup tables safe (they also need to be kept local) without using TBs of RAM, while still guaranteeing somewhat reasonable retrieval speed. With a single database backend I'm also limited to 255 billion unique files, but that might be solved with something more specialised. It could be done with some AWS services as well (S3 would avoid the single-backend issue by providing a load-balanced access point), but that's hella expensive.

    Download speed would be fast, but resolving the actual download location where you retrieve your file could take a while if the user base is large (less than Glacier, though - in the minute range at most). That can be circumvented with more clusters, but that in turn drops dedup efficiency by 50%+ unless you write cross-cluster dedup (and manage to sync it live)... yeah, let's not get into that area, it's not feasible at a reasonable price.

    In essence, sure, this can be done properly, but I don't see it happening at the prices these companies offer right now, especially if, unlike Amazon, they have to buy hardware specifically for the project, with no way to re-use it after failure or customer loss (whereas AWS can just redelegate capacity to EC2/S3 etc.).
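
    To put rough numbers on the "TBs of RAM" worry - every figure below is an illustrative assumption, not a measurement:

        # 1 PB of unique data, 4 MB blocks, 48 bytes per index entry
        # (32-byte SHA-256 digest + ~16 bytes of location metadata).
        PB = 10**15
        block_size = 4 * 2**20
        entry_bytes = 32 + 16

        blocks = PB // block_size              # ~238 million unique blocks
        index_bytes = blocks * entry_bytes
        print(f"{blocks:,} blocks -> ~{index_bytes / 2**30:.1f} GiB of index")

        # Shrinking blocks to 64 KB dedupes better but needs 64x the
        # entries (~730 GB of index per PB); at tens of PB the index
        # really does run into terabytes, which is the worry above.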

  • quicksilver03 Member
    edited August 2017

    @raindog308 said:
    My experience with CP was:

    (1) If you have any kind of sizable data, whatever is backing up needs massive RAM. I had a TB or so and had to add 16GB of RAM to my file server, otherwise the client died constantly. For perspective, it was an i5 with 8GB of RAM, so feeding 1Mbps to CP should have been easy, but... it needed a ton of RAM.

    Unfortunately true. One can disable the "Watch file system in real time" setting to make CrashPlan require less RAM, but then an important piece of functionality is lost.

    (4) ...until it upgraded or patched itself. One thing it always did was overwrite run.conf, which removed my "use 16GB of RAM" parameters, which meant that CP promptly crashed. I eventually had to write a "run.conf has been modified" monitor... such silliness.

    The CrashPlan installer is indeed atrocious; I've never let it do anything on my systems. And the worst part is that the run.conf manipulations aren't really needed at all: you can just launch CrashPlan like any Java server-side application and it works perfectly:

    /usr/local/java/bin/java \
        -classpath /usr/local/crashplan/lib/com.backup42.desktop.jar:/usr/local/crashplan/lang/:/usr/local/crashplan/ \
        -Djava.library.path=/usr/local/crashplan -Djava.io.tmpdir=/var/lib/crashplan \
        -Dfile.encoding=UTF-8 -Djava.net.preferIPv4Stack=true \
        -Xms32M -Xmx512M \
        -XX:+PrintGC -XX:+PrintGCApplicationConcurrentTime \
        -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps \
        -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
        -Xloggc:/var/log/crashplan/gc.log -XX:+UseGCLogFileRotation \
        -XX:GCLogFileSize=16M -XX:NumberOfGCLogFiles=2 \
        com.backup42.service.CPService

  • WSS Member

    @William Personally, I trust both Java, and an external provider with Ring 0 access.

  • WSS said: @William Personally, I trust both Java, and an external provider with Ring 0 access.

    I only trust Apple with Time Machine, and even that only with local encryption.

    Anything else that sends backups off-site, running as root or not, will never touch my system. If anything, I'd run such a tool against my external Time Machine disk and back up incrementally from there.

  • WSS Member

    @William said:

    WSS said: @William Personally, I trust both Java, and an external provider with Ring 0 access.

    I only trust Apple with Time Machine
