How would you automate a 250-300 GB data backup into a storage VPS?
Hello LET,
I'm wondering: what's the best approach to automating the backup of two folders that hold about 200 GB of data? The contents change constantly, so the total will probably grow to 250-300 GB over time, and I want the backup to update automatically as I make changes. It's sensitive and important information, though, so I don't want it on Google Drive, etc.
Comments
rsync
There are tons of other options of course, but rsync over SSH with a passwordless key is the simplest.
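For anyone who hasn't set this up before, a minimal sketch of the rsync-over-SSH approach — the hostname, user, key path, and folder names below are all placeholders you'd swap for your own:

```shell
# Mirror two local folders to a storage VPS over SSH.
# backup-vps, backupuser, and the /data and /backups paths
# are placeholders. --delete makes the remote copy track local
# deletions; drop it if you'd rather keep removed files remotely.
rsync -az --delete \
    -e "ssh -i ~/.ssh/backup_key" \
    /data/folder1 /data/folder2 \
    backupuser@backup-vps:/backups/
```

Generate the key once with `ssh-keygen -t ed25519` and install it with `ssh-copy-id`, then this line can go straight into a cron entry.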
Thank you for your comment. As always, very informative.
Anything that can be installed on a server that has some type of GUI?
Windows?
Dude, just use Resilio. https://www.resilio.com/individuals/
If you're going server to server, you can turn off using the tracker and use predefined IP:port of your servers.
Looks good. I like it. 💪
https://syncthing.net/
rsnapshot
I use WinSCP and its sync option over SFTP; it's open source.
Acronis Cyber Protect.
syncthing, rclone, or borg. I'm using all three, depending on the requirements. I would not store sensitive info on any cloud server without encryption, period. rclone and borg have encryption built in.
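To illustrate the borg route — a sketch only; the repo path, host, and folder names are placeholders, and the retention numbers are just an example policy:

```shell
# One-time setup: create an encrypted repo on the storage VPS.
# repokey-blake2 stores the key in the repo, protected by a passphrase.
borg init --encryption=repokey-blake2 backupuser@backup-vps:/backups/borg

# Each run creates a new deduplicated archive; only changed chunks
# are transferred, so repeated runs on 200-300 GB stay fast.
borg create --stats --compression zstd \
    backupuser@backup-vps:/backups/borg::'{hostname}-{now:%Y-%m-%d}' \
    /data/folder1 /data/folder2

# Example retention: keep 7 daily, 4 weekly, and 6 monthly archives.
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 \
    backupuser@backup-vps:/backups/borg
```

Everything on the VPS side is encrypted client-side, so the provider never sees plaintext.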
BackupPC works fine for me. I use the web UI.
I like that way of thinking. 💪
https://duplicacy.com/home.html
I've had good success with rdiff-backup ages ago.
Today I use rsync, cron jobs, and a btrfs filesystem on the target. Use rsync to copy the files, then use btrfs to manage snapshotting.
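The rsync-plus-btrfs combination above might look something like this — a sketch, assuming /backups is a btrfs subvolume on the VPS; hostnames and paths are placeholders:

```shell
# Nightly job on the storage VPS: sync, then snapshot.
# Run from a root cron entry such as:
#   0 2 * * * /usr/local/bin/nightly-backup.sh

# 1. Pull the data over SSH into the live copy.
rsync -az --delete source-host:/data/ /backups/current/

# 2. Take a read-only snapshot named after today's date.
#    Snapshots share unchanged blocks, so they cost little space.
btrfs subvolume snapshot -r /backups/current \
    "/backups/snapshots/$(date +%F)"
```

The snapshots give you point-in-time history, so an accidental deletion synced by rsync is still recoverable from an earlier date.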
To get a sense for what's where (disk space usage), baobab is OK if your server runs a GUI; 'du -kax / | sort -n' if not.
There's another tool called agedu, and it's pretty neat. It builds an index of all files first, and then you can browse that index on the CLI or through a temporarily started HTTP server you can reach via an SSH port forward. It shows not just how much data is where, but uses color to show its relative age. I haven't used it in a while, but it was solid when I did.
You could also make both the sender and receiver filesystems ZFS or btrfs and use their respective 'send' functionality, though I'd prefer zfs send, since the btrfs equivalent is newer and I'm not sure it's fully mature yet.
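A sketch of the zfs send/receive workflow, in case it's unfamiliar — the pool names (tank, backup), dataset, and host are placeholders:

```shell
# One-time full copy: snapshot locally, stream it to the VPS.
zfs snapshot tank/data@base
zfs send tank/data@base | ssh backup-vps zfs receive backup/data

# Later runs send only the blocks changed since the last snapshot,
# which keeps nightly transfers small even on a 300 GB dataset.
zfs snapshot tank/data@today
zfs send -i tank/data@base tank/data@today | \
    ssh backup-vps zfs receive backup/data
```

Because send/receive works at the block level below the filesystem, it also preserves permissions, holes, and dataset properties exactly.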