Comments
Yeah I see.
Might switch over as well.
If you don't have much data, I remember there is an article about using MEGA as a backup media...
EDIT: found it, here: http://www.matteomattei.com/backup-your-server-on-mega-co-nz-using-megatools/
Yeah, I think they had like 50GB free at some point, and it was fast enough.
One Weird Trick To Backup 50 GB - FBI Loves to Hate It!
EDIT2:
How has no one mentioned 1Fichier yet?
Ansible scripts that do backup and restore (stored on AWS S3). In case anything goes down, they auto-restore from backups. When I want to move VMs, they fetch from backups as well.
Backwhat?
That thingy where you upload your stuff directly to the NSA.
Instead of manually specifying the compresscmd, uncompresscmd and compressoptions for each logrotate config file, you can instead compile your own version of logrotate that uses zstd by default, by passing the --with-compress-command, --with-decompress-command and --with-compress-extension arguments to ./configure
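For reference, the per-file approach mentioned above might look like this as a drop-in config. This is just a sketch: the "myapp" name and paths are made-up examples, and note that zstd keeps the source file by default, hence the --rm in compressoptions.

```shell
# /etc/logrotate.d/myapp -- hypothetical example config
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    compresscmd /usr/bin/zstd
    compressoptions -9 --rm
    uncompresscmd /usr/bin/unzstd
    compressext .zst
}
```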
I like to tell myself it's better for the environment to upload them straight to the NSA rather than them having to fetch it, which consumes a lot of energy.
Regardless, for agencies, access to backups is a low priority compared to access to live systems. And since backups are essentially just a copy of a live system, there's no extra benefit in backups for them.
I have Google Drive w/ unlimited storage, shared web hosting, a managed server w/ FTP access, and a few unmanaged servers.
How should I set up my automated backup solution? I'm doing everything manually right now.
I was thinking FTP to a server for backups and then sync it to GDrive.
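One way to script that two-hop idea is with rclone, sketched below. The remote names "ftpbox" and "gdrive" are assumptions; they'd need to be set up first with rclone config.

```shell
# Hop 1: push local backups to the FTP box
rclone sync /var/backups ftpbox:backups
# Hop 2: mirror the FTP box to Google Drive
rclone sync ftpbox:backups gdrive:backups
```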
Since there's been some discussion about compression methods, I'd just add that borg can be instructed to compress only compressible data with your compression method of choice
Supported compression methods are: none, lz4, zstd, zlib, lzma (xz)
I'm usually fine with auto,lzma,5 (the default level for lzma would be 6)
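As a sketch, the auto heuristic can be combined with any of those algorithms; here with zstd (the repo path and archive name pattern are just examples):

```shell
# Store chunks borg's heuristic deems incompressible uncompressed,
# compress everything else with zstd level 10.
borg create --compression auto,zstd,10 /path/to/repo::'{hostname}-{now}' /home /etc
```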
indeed.. though not everyone would have zstd installed by default, so probably better via config file edits
nice that it supports zstd!
if combined with the built-in heuristic it could arguably be used with slightly more aggressive compression levels; zstd was introduced in ver. 1.1.4 (circa December 2017), and as the man page goes
so the user has to decide if compatibility with ancient borg versions is relevant or not. Luckily, it's always possible to
I used rdiff-backup for years, but switched to borg some time ago.
Borg can also incrementally backup block devices and still use deduplication. Very useful.
It's only $20/year for the CLI license.
update: just checked earlier and noticed Dropbox doesn't really support Linux much anymore, so now using MEGA
I used mega a few years ago.
Worked pretty well using the cli-tools.
Just "encrypt" it and it's probably fine.
EDIT:
*megatools.
I also forgot to mention that the btrfs file system supports automatic transparent zstd compression, so another option is to use btrfs, enable compression, and then just have logrotate rotate it with no compression (rely on the file system to do it for you).
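A minimal sketch of that setup, assuming a btrfs volume on /dev/sdb1 mounted at /var/log (the device and paths are examples; the zstd:LEVEL syntax needs kernel 5.1+):

```shell
# Filesystem-wide transparent zstd compression at mount time
mount -o compress=zstd:3 /dev/sdb1 /var/log
# Or opt in per directory on an already-mounted btrfs filesystem
btrfs property set /var/log/myapp compression zstd
```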
I think ZFS is adding zstd support too.
Zstd is the future.
And the future is now.
yup https://btrfs.wiki.kernel.org/index.php/Compression
I also updated my zstd benchmarks. I have since added pigz level 11 for zopfli compression and all 32 levels of zstd compression - the negative levels (-10 to -1), which focus on compression speed, the normal levels 1 to 19, and the 3 ultra levels 20 to 22, which focus on better compression ratios https://community.centminmod.com/threads/custom-tar-archiver-rpm-build-with-facebook-zstd-compression-support.16243/#post-70033
Cron + mysqldump + megatools
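Sketched out, that combo could look like the script below. The database name, MEGA path and file locations are made up, and MEGA credentials are assumed to be in ~/.megarc.

```shell
#!/bin/sh
# /usr/local/bin/db-backup.sh -- run nightly from cron, e.g.:
#   30 3 * * * /usr/local/bin/db-backup.sh
set -eu
STAMP=$(date +%F)
DUMP="/tmp/mydb-$STAMP.sql.zst"
# Dump, compress, upload, clean up
mysqldump --single-transaction mydb | zstd -9 > "$DUMP"
megaput --path /Root/backups "$DUMP"
rm "$DUMP"
```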
by the grace of god and these two fingers
I like zstd but I don't like the hype around it.
The truth is that zstd is not "the best" compression/decompression - the same is true for any of the others.
Compressing (typically already compressed) video is very different from text/ASCII, which is very different from binary data blobs or executables. Thousands and thousands of small files are very different from a couple of large files. One and the same compressor can deliver quite different results depending on the processor. And tuning or pre-training for different file types (if available) can make a big difference too.
What makes zstd the (quite probably) best allround mechanism is that it's quite configurable and that it delivers above average and usually even really good results with pretty much anything you throw at it.
In other words: Using zstd is almost always (for any use case/file type) among the best, which means that it's ideal for "fire and forget", no need to analyze or fine tune; just throw zstd at it and be done.
yeah, just look at the advanced options section for zstd - so many extra parameters to tune for further speed or compression ratio efficiency https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
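For example, a few of those knobs (the file names are placeholders):

```shell
zstd -19 --long=27 -T0 logs.tar   # larger match window, use all cores
zstd --ultra -22 logs.tar         # levels 20-22 require --ultra
# Pre-train a dictionary on many small, similar files, then reuse it
zstd --train samples/*.json -o mydict
zstd -D mydict record.json
```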
Thanks, I must have been on the Vertical Backup page right before I posted.
In my case I have 48TiB of usable storage for 180 euro, which puts the price at 3.75 euro per terabyte. Sure, I could compress stuff and save some space, but on the other hand, because files are stored in raw format, restores are super easy - it's literally just rsyncing stuff back to the source server, nothing else.
I do shared hosting, meaning working with a gazillion files, and even with rsync and compression disabled I never actually saturate the gigabit link in the server - not because of a lack of connectivity, but simply because of the sheer number of files being handled. So the network generally isn't the problem, not even during restores.
zstd would be nice as compression for rsync transfers if it's fast enough, because in cases where the network is the bottleneck you could benefit from it during transport - currently I do rsync without compression, because the compression overhead actually makes the backups and/or restores take longer than without compression :-D
So I might look at it for transport only - for storing on the system, I don't mind it being uncompressed, simply because the actual storage requirements aren't too big (30 daily backups and 6x monthly) currently takes up 22 terabyte of storage, which isn't too much.
You might be better off with AES-NI accelerated ciphers on your SSH connection and compression disabled.
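For instance (the host name is a placeholder, and --compress-choice needs rsync >= 3.2 on both ends):

```shell
# AES-NI accelerated AES-GCM cipher, compression off
rsync -a -e "ssh -c aes128-gcm@openssh.com -o Compression=no" /data/ backup:/data/
# Alternatively, let rsync itself do zstd on the wire only
rsync -a --compress --compress-choice=zstd /data/ backup:/data/
```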
BTW, LZ4 (same author as ZSTD) has the fastest decoder at the cost of compression ratios. https://lz4.github.io/lz4/
Me too! I'm on the north side (Caboolture) where are you? I'm excited to see a fellow Brisbanite here lol
As for backup strategy, mysqldump + duplicity for each service or VPS depending on how it's set up.