Comments
I'm using Backblaze B2. It's one of the cheapest options on the market ($0.005 per GB) and has a 99.9% SLA.
Edit: It's also S3 compatible.
2TB Storage - 2GB RAM - 1 Dedicated Core - 4TB Transfer - USA - €70/year
Shh. 😉
I'm still waiting for a client to wake up, get to his PC, and order the darn thing.
@Shazan Max 750GB upload per account per day.
No session limit (that I've found).
I have mine mounted with rclone and write the full XtraDB backup (ibdata etc.) to it, then download (stream) that to generate a .sql backup on a different server. I also periodically download (stream) it so that automated validation can run. Never had issues with limits.
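For anyone wanting to replicate this, a minimal sketch of that kind of pipeline (the remote name, bucket, and file names are hypothetical; adjust the xtrabackup options to your setup):

    # Mount the remote so the backup can be written to it directly
    rclone mount gdrive:db-backups /mnt/backups --daemon

    # On the database server: stream a full physical backup onto the mount
    xtrabackup --backup --stream=xbstream > /mnt/backups/full-$(date +%F).xbstream

    # On the validation server: stream it back down without storing it first
    rclone cat gdrive:db-backups/full-2020-09-01.xbstream | xbstream -x -C /tmp/restore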
Also, there is supposedly a 10TB download limit per day.
Also, AutoRclone can bypass the 750GB/day limit.
Me as well, for home backups.
Only thing I don't like is that with VPS backups, if someone compromises your VPS, it's trivial for them to also nuke your B2 backups. This is a common weakness of all push-based backup schemes.
The trick is to use append-only backups, like with Borg. Plus, of course, snapshots that are immutable.
Not to say you should be lax about security, but there are options that can protect you from yourself, and inane stupidity is far more probable than something sinister.
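For anyone trying the Borg route, a minimal sketch of enforcing append-only on the repository server via an SSH forced command (paths and key are placeholders):

    # ~/.ssh/authorized_keys on the backup server:
    # this client can create new archives but cannot delete or prune old ones
    command="borg serve --append-only --restrict-to-path /srv/borg/vps1",restrict ssh-ed25519 AAAA... root@vps1

Note that append-only has to be enforced server-side like this; a flag passed by the (potentially compromised) client would be pointless.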
Can't you just solve this with file versioning and limited access permissions on the API keys? You can also set up lifecycle rules to trash old versions as necessary, so your bills don't become astronomical. Depending on the backup method you're using, multiple versions might not even be created during normal operation anyway.
I really should have said "potential weakness". Yes, there are ways around this: you can set up backups so you drop off files and then have a system that sweeps them into a place the user can't touch, or give the user write but not delete permission, etc.
That's a good point - you can grant "deleteBuckets, deleteFiles, deleteKeys, listBuckets, listFiles, listKeys, readFiles, shareFiles, writeBuckets, writeFiles, writeKeys", so presumably omitting the delete* capabilities would fix this problem. I think I'll give this a try and get out of having to run my own backup servers to back up my VPSes.
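If you try it, with the b2 CLI that would presumably look something like this (bucket and key names made up):

    # Create an application key that can upload and list but not delete
    b2 create-key --bucket vps-backups backup-writer listBuckets,listFiles,readFiles,writeFiles

One caveat: writeFiles still lets an attacker overwrite a file name, but in B2 that creates a new version rather than destroying the old one, so keep versioning on and don't let lifecycle rules trim old versions too aggressively.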
I'd say poor Google, but let's be honest here... they are making record profits currently.
I read somewhere that any storage > 2TB would require an additional plan for cold storage?
Yes, more than 2TB of cold storage (untouched for 60+ days) costs extra, at +1€/GB/month. There might be an annual discount for that, too.
Scaleway C14 Cold Storage = €2/TB/month ex. traffic
Those here that use more than 2TB probably don't pay the extra 1€/month. You get unlimited "hot storage" (files touched within the last 59 days), so certain LET members have built scripts that download files via the API every so often so everything still counts as hot storage.
This way you only pay the regular premium price.
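Presumably something along these lines, run from cron (the remote name and the 50-day cutoff are guesses, and whether a read actually resets the provider's "touched" clock is their call, not yours):

    # Weekly: re-read anything nearing the 59-day threshold so it stays hot
    0 3 * * 0 rclone cat --min-age 50d archive:cold-files > /dev/null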
There is also a problem using it with rclone: you get a rate-limit error even with the latest beta. I think it's not well suited to lots of smaller files.
I believe you could use the --tpslimit and --tpslimit-burst flags to avoid the rate limit.
Did try that, but it still throws the error.
May I know what your tpslimit value is?
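For reference, the flags go on the transfer itself; a cautious starting point might be something like this (the values are guesses, tune them for your account):

    # Cap API calls at ~4 transactions/second with no extra burst
    rclone copy /data remote:bucket/data --tpslimit 4 --tpslimit-burst 1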
It's certainly not built or priced for what I'd call "comfortable" use. If I didn't have Koofr Lifetime, I'd still have my sub, though.
Is that still the case? Because as far as I can see, they offer 1TB for 8 euro.
Should be fine as long as you're not being a sociopath and uploading more than 10TB.
I mean, it's fantastic for tertiary backups if you DO intend to abuse the non-existent storage limitations on a single account.
I have around 2-3TB of data that will be stored.
Do I need to buy their G Suite plan, the one that costs $10.50?
And pay only $10 instead of $50, correct?
Or do I technically need to create five different accounts?
"Privacy" - Simply do not trust anyone. Encrypt your data locally on the fly.
"Availability" for storage/backup actually has two factors, (a) availability (as in "you can get at your data/reliably upload"), and (b) durability which describes the storage itself (as in "your data will not be corrupted or lost").
For good availability you want at least four nines (99.99%, i.e. no more than about 53 minutes of downtime per year), and for durability you want at least six nines (99.9999%). And yes, that also means that RAIDed drives alone don't cut it; you want some kind of erasure coding too.
If a provider doesn't publish durability figures, you shouldn't consider them a reliable storage provider.
Doesn't AMD Secure Encrypted Virtualization (SEV) fix that issue?
https://developer.amd.com/sev/
P.S. We are waiting for Ubuntu 20.10 with full ZFS.
Not really, see -> https://thehackernews.com/2020/08/foreshadow-processor-vulnerability.html
Nexusbytes.
Heard of, and planning to try, AWS S3 Glacier Deep Archive. $1/TB.
Most people will pay $0.004 per GB, or $4/TB. At that rate, you'd be better off with B2.
Can you please share the bookmark?
Also, I am saving this comment, and next time, when I set up my VPS from scratch (as of now, almost the full 500GB is filled), I will try this approach. Appreciate it!