New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Low End Backups - How do you backup your small VPS?
Hi,
We all know there are big, capable solutions like r1soft, jetpack, borg and others for critical, large servers.
But what product/strategy do you use for your small VPS?
For example, for my production VPS I just back up via rsync (files and MySQL dumps) to another VPS in another datacenter... but I feel "dirty".
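For what it's worth, that "dirty" approach is already most of a working backup. A minimal sketch of it, with the host and paths as placeholders for your own setup:

```shell
#!/bin/sh
# Minimal sketch of the rsync + mysqldump approach above.
# BACKUP_HOST and the paths are placeholders, not real infrastructure.
BACKUP_HOST="backup.example.com"   # the VPS in the other datacenter

dump_name() {
    # Timestamped dump filename, e.g. all-databases-2024-01-31.sql.gz
    printf 'all-databases-%s.sql.gz' "$1"
}

run_backup() {
    # Dump all databases, compressed, then ship files + dump off-site
    mysqldump --all-databases --single-transaction \
        | gzip > "/var/backups/$(dump_name "$(date +%F)")"
    rsync -az /etc /var/www /var/backups "${BACKUP_HOST}:/srv/backups/"
}

# Run only when asked, e.g. from cron:
# 0 3 * * * RUN_BACKUP=1 /usr/local/bin/vps-backup.sh
if [ "${RUN_BACKUP:-0}" = 1 ]; then run_backup; fi
```

Timestamping the dumps at least gives you more than one restore point, even before adding anything incremental on the receiving end.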
What strategies/solutions do you use?
Comments
Borg is easy enough to set up, so I don't distinguish small VPSes from production systems for backups. I've wrapped Borg in a homebrew script that mails the output, preprovisions the public key on my Borg server, provides built-in mysqldump support, etc.
Then from the Borg server I rsync the backups to my homeserver, TransIP Stack & Backblaze B2 storage.
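The core of a wrapper like that is short. A sketch, assuming a Borg repo reachable over SSH (the host, repo path and retention numbers here are all placeholders):

```shell
#!/bin/sh
# One way to wrap Borg with mysqldump support. Host and paths are
# placeholders; the mailing/key-provisioning parts are left out.
export BORG_REPO="ssh://borg@backup.example.com/./$(hostname -s)"

archive_name() {
    # One archive per host per day, e.g. web1-2024-01-31
    printf '%s-%s' "$1" "$2"
}

run_backup() {
    # Dump the DB first so it's included in the archive
    mysqldump --all-databases --single-transaction > /var/backups/all.sql
    borg create --stats --compression zstd \
        "::$(archive_name "$(hostname -s)" "$(date +%F)")" \
        /etc /var/www /var/backups
    # Keep 7 daily, 4 weekly and 6 monthly archives
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
}

if [ "${RUN_BACKUP:-0}" = 1 ]; then run_backup; fi
```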
I have rsync set up on all my VPS servers and they back up to my storage server in Online's data center in France. Then every night between 7 and 9 my server at home takes an image of the server in France and saves it. I'm going to set it up to run more frequently when I get my new server at home. That's about it, really.
I don't know why you put borg in the "only for big stuff" bucket. It's versatile and can be used and deployed everywhere. I use it everywhere; if for some reason it seems "too much" for you in some cases, there are quite a few wrappers for it.
My bad: when I said "borg" in that line I was referring to the "product" with hosting that a user from this forum sells, not the software itself.
So for Borg (software) users, what kind of server do you have? Dedicated or VPS?
BackupPC
Isn't a daily image... a lot of GB per image?
If it's small enough, SSH in, tarball the whole folder and send it to email.
rsnapshot to my homelab NAS nightly.
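For reference, a nightly rsnapshot setup like that usually amounts to a few lines in rsnapshot.conf plus a cron entry on the NAS (fields must be tab-separated; hostnames and paths here are examples, not a real config):

```
# /etc/rsnapshot.conf fragment (tab-separated fields)
snapshot_root	/mnt/nas/snapshots/
retain	daily	7
retain	weekly	4
backup	root@vps1.example.com:/etc/	vps1/
backup	root@vps1.example.com:/var/www/	vps1/

# crontab on the NAS:
# 0 2 * * * /usr/bin/rsnapshot daily
```

Because rsnapshot hard-links unchanged files between snapshots, seven dailies typically cost little more disk than one full copy.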
It's quite large, but it's just an image of the backup VM. I have the system folders set so they aren't included in the image. The server at home is 8 TB RAID 1, so there's plenty of room. I just wipe the older images if all the files are intact on the newer ones; I do this about every 14 days.
Google Drive unlimited is neat.
With grive (sync)?
With rclone and some cron jobs. Very neat combination for backups, encrypted or not.
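A sketch of that rclone + cron combination, assuming a remote named `gdrive` already exists in your rclone config (point it at a crypt remote instead if you want encryption at rest):

```shell
#!/bin/sh
# rclone + cron backup to Google Drive. The remote name "gdrive" and
# all paths are assumptions; configure your own with `rclone config`.

remote_path() {
    # Per-host folder on the remote, e.g. gdrive:backups/web1
    printf 'gdrive:backups/%s' "$1"
}

run_backup() {
    # Mirror the local backup directory to Drive
    rclone sync /var/backups "$(remote_path "$(hostname -s)")"
}

if [ "${RUN_BACKUP:-0}" = 1 ]; then run_backup; fi

# Example crontab line (02:30 nightly):
# 30 2 * * * RUN_BACKUP=1 /usr/local/bin/drive-backup.sh
```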
cronjob upload daily backup.
What client do you use? rclone, like sgheghele?
gdrive, on GitHub. I think it was prasmussen's.
rclone is love. rclone is life. rclone is a means to an end.
No, really. I just create compressed tars (of files, db dumps, ...) that I either `rclone move` or move to an rclone mount (some are encrypted). Nothing more than that. You can timestamp them and you're good to go. With an unlimited GDrive account, I'm not wasting much time optimizing this.
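The timestamped-tar variant is about this much script. A sketch, where the remote name `gdrive` and the paths are assumptions (`rclone move` deletes the local copy after a successful upload):

```shell
#!/bin/sh
# Timestamped tars pushed off with `rclone move`. Remote name and
# directories are placeholders.

tar_name() {
    # e.g. web1-files-2024-01-31.tar.gz
    printf '%s-files-%s.tar.gz' "$1" "$2"
}

run_backup() {
    host=$(hostname -s); stamp=$(date +%F)
    mysqldump --all-databases > "/tmp/${host}-db-${stamp}.sql"
    tar czf "/tmp/$(tar_name "$host" "$stamp")" /etc /var/www
    # move = upload + delete local copy on success
    rclone move /tmp/ "gdrive:archives/${host}/" --include "${host}-*"
}

if [ "${RUN_BACKUP:-0}" = 1 ]; then run_backup; fi
```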
I have other backup systems in place for specific apps. For example, my WordPress instances perform their backups with BackupBuddy.
I used to use duplicity; I'm not sure when I stopped. Borg looks cool, but I never bothered to inspect it.
My current servers are LXC containers so very easy to snapshot & backup.
LXD provides commands for that.
I'm not.
I have four servers in total running with the same data mirrored (MySQL replication, rsync and unison file synchronization, and some scripts to adjust variables).
All servers monitor each other, and if one becomes unreachable, the DNS records used by clients are automatically switched to another server.
This way I have enough replicas and automated failover, and afterwards I have plenty of time to (manually) restore the failed server.
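The monitoring half of a setup like that can be as small as a health check plus an API call. A rough sketch where the health URL, the DNS API endpoint and the failover IP are all hypothetical (real providers each have their own API):

```shell
#!/bin/sh
# Mutual monitoring + DNS failover sketch. Every URL and IP below is
# a placeholder; substitute your peer and your DNS provider's API.

peer_alive() {
    # Returns 0 if the peer answers its health check within 5s
    curl -fsS --max-time 5 "$1" >/dev/null 2>&1
}

failover() {
    # Hypothetical DNS provider API call: repoint the A record
    curl -fsS -X PUT "https://dns.example.com/api/records/www" \
        -d '{"type":"A","content":"203.0.113.10"}'
}

if [ "${RUN_CHECK:-0}" = 1 ] && ! peer_alive "https://peer.example.com/health"; then
    failover
fi
```

In practice you'd also want a few consecutive failures before flipping DNS, and a low TTL on the records so clients follow the change quickly.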
Cron job that runs Duplicity -> Amazon S3 and then emails me the output.
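That pipeline fits in a few lines. A sketch, with the bucket name and mail recipient as placeholders (duplicity's S3 URL scheme varies between versions; `boto3+s3://` is an assumption for recent releases):

```shell
#!/bin/sh
# duplicity -> S3 with the report mailed back. Bucket and recipient
# are placeholders.

subject_for() {
    # Mail subject per host, e.g. "duplicity report: web1"
    printf 'duplicity report: %s' "$1"
}

run_backup() {
    # Full backup monthly, incrementals in between; mail the output
    duplicity --full-if-older-than 1M /var/www \
        "boto3+s3://my-backup-bucket/$(hostname -s)" 2>&1 \
        | mail -s "$(subject_for "$(hostname -s)")" admin@example.com
}

if [ "${RUN_BACKUP:-0}" = 1 ]; then run_backup; fi
```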
None. You don't need backups.
Why? It's a good start. I do the same. Then, on the server holding the backups, run rdiff-backup against the backed-up data to create incremental restore points: daily, weekly, whatever works for you.
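On the backup server that amounts to something like the following (paths are illustrative; this uses the classic rdiff-backup CLI syntax, which newer versions still accept alongside the subcommand style):

```shell
#!/bin/sh
# On the server holding the backups: keep incremental restore points
# of the rsync'd copy with rdiff-backup. Paths are illustrative.

history_dir() {
    # Where the rdiff-backup increments for a host live
    printf '/srv/backups/%s/history' "$1"
}

run_snapshot() {
    host=$1
    # Snapshot the current rsync'd tree into the increment store
    rdiff-backup "/srv/backups/${host}/current" "$(history_dir "$host")"
    # Expire increments older than 8 weeks
    rdiff-backup --remove-older-than 8W --force "$(history_dir "$host")"
}

if [ "${RUN_SNAPSHOT:-0}" = 1 ]; then run_snapshot web1; fi
```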
cron, rsync.
EDIT2:
Every 30 mins if you're paranoid like me.
rsync is like RAID: tar your latest sync and store it as a tarball.
Otherwise it may raid you, amen.
Rsync'd to another DC and then borg'd to rsync.net. If it's not automated, it doesn't happen.
I have one dedicated server running borg and another dedicated running restic (which I prefer over borg). All servers are backed up to borg each morning and to restic each evening. I also replicate the restic repositories to a server I have at home.
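The restic half of that schedule is compact. A sketch over SFTP, where the host, repo path and password file are placeholders:

```shell
#!/bin/sh
# Restic to a dedicated server over SFTP. Host, repo path and the
# password file are placeholders for whatever you actually use.

repo_for() {
    # sftp repository URL for a given host's backups
    printf 'sftp:backup@dedi.example.com:/srv/restic/%s' "$1"
}

run_backup() {
    export RESTIC_REPOSITORY="$(repo_for "$(hostname -s)")"
    export RESTIC_PASSWORD_FILE=/root/.restic-pass
    restic backup /etc /var/www
    # Thin out old snapshots and reclaim space in one pass
    restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
}

if [ "${RUN_BACKUP:-0}" = 1 ]; then run_backup; fi
```

Replicating the repository afterwards is just a matter of rsyncing the repo directory, since restic repositories are plain files.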
I have a storage VPS from BudgetNode ($24 per year for 250 GB space: https://www.lowendtalk.com/discussion/136417/512mb-kvm-18-year-2gb-kvm-3-99-month-paid-yearly-500gb-kvm-storage-3-99-month-paid-yearly) that I use to store backups of all my other VPSes. I use Backupninja and Borgbackup for the Linux VPSes, and Duplicati for the one Windows VPS I've got. All backups run once per day and include files, MySQL database dumps (from both my self-hosted MySQL servers and my DBs on BuyVM's offloaded SQL servers), and DNS zone dumps (exported from ClouDNS using https://github.com/tokiwinter/cloudns-api/pull/2).
Oh, I see.
Generally, and depending on how much I care about the data, I incrementally back up to a VPS hosted by a different provider (and possibly in a different country), or to two VPSes. If there's quite a lot of data, a dedi is usually more convenient.
Before a critical upgrade I may also perform a backup to a local disk and/or vet and test the recovery procedures.
Those should be pretty similar; any reason to use two different tools? If a disaster happens, I'd prefer to follow a recovery path that's as straightforward and unequivocal as possible, and having to deal with two different tools seems counterintuitive. Maybe you fear a bug in one of them? Also, why do you prefer restic in this scenario (your own dedi with SSH access)?
Dropbox and simple shell script
rclone with some scripts I wrote for Backblaze B2 buckets per server. Works great and costs me <$1/month for the data I have.
I create tarballs and push them to unlimited Google Drive using rclone, with cron jobs to automate the process.