
restic vs BorgBackup vs Kopia for Linux server backups?

Hello folks,

For my Linux server backups, I really love BorgBackup, but I'm now looking for something that natively supports S3. I've heard a few good things about Kopia and feel like that may be the way I go. I would prefer something free, so Duplicacy, which I use and like a lot at home for personal use, is not an option. I also want encryption, compression, and dedup.

What do you use for your backups that supports S3 as a backend?

On the side, I've read a few discussions about encryption methods. It looks like some tools use good encryption; others use double encryption (and OpenSSL), which sounds like a bad idea according to some people with an interest in security; and some roll homemade encryption even though good, proven algorithms already exist. I'm not that experienced with advanced security, but like most of us I try to make the best choices for long-term use.

In the past I've used BackupPC a lot. It has a useful frontend when your team is not that advanced with SSH, you can keep compression local to the BackupPC server even when pulling files from remote hosts, etc.

What's your input?

Thanks a lot for all your advice and app suggestions!

Backup tool for Linux servers
  1. Which backup tool for Linux servers? (55 votes)
    1. BorgBackup
      43.64%
    2. restic
      29.09%
    3. attic
      0.00%
    4. Kopia
      7.27%
    5. Duplicacy
      5.45%
    6. Duplicati
      5.45%
    7. BackupPC
      9.09%

Comments

  • BackupPC is easy to use, reliable, easy on resources, and able to recover a single file to the destination machine or just download it. It is perfect.

  • So I guess nobody is backing up their server here xD

  • I have simple scripts and cronjobs that tar my stuff onto MinIO.
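
    For anyone curious, a minimal sketch of that kind of cronjob (not their actual script), assuming the MinIO client mc is installed and an alias named "minio" was configured once with mc alias set; the bucket and paths are placeholders:

      #!/bin/sh
      # Nightly tarball of /srv pushed to a MinIO bucket via the mc client.
      set -eu
      STAMP=$(date +%F)
      tar -czf "/tmp/srv-$STAMP.tar.gz" /srv
      mc cp "/tmp/srv-$STAMP.tar.gz" "minio/backups/srv-$STAMP.tar.gz"
      rm -f "/tmp/srv-$STAMP.tar.gz"

    Scheduled with a crontab entry such as: 0 3 * * * /usr/local/bin/backup-to-minio.sh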

  • restic works nicely for me. The only missing feature is compression.

  • rcy026 Member
    edited September 2021

    When I need something free, I use restic. It just works and does exactly what it is supposed to do, so I really cannot complain about anything. As with all good software, you almost forget you're using it.

    But I have heard a lot of good stuff about Kopia, so it is on my to-do list to try it out.
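
    For the S3 part of the OP's question: restic talks to S3 natively. A minimal sketch, with bucket name and credentials as placeholders:

      # Credentials go in the usual AWS environment variables (placeholders here)
      export AWS_ACCESS_KEY_ID=myaccesskey
      export AWS_SECRET_ACCESS_KEY=mysecretkey

      restic -r s3:s3.amazonaws.com/my-bucket init         # create the encrypted repo
      restic -r s3:s3.amazonaws.com/my-bucket backup /etc /home
      restic -r s3:s3.amazonaws.com/my-bucket snapshots    # list what's stored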

  • With Kopia, when the machine doing the backups is terminated/offline/FUBAR, will I be able to restore its data from another server? I.e., is all the relevant info/metadata about the repo saved in the cloud storage with the snapshots? Sorry if that sounds stupid; it's the first time I'm reading about Kopia, but it looks neat with GCS and S3/BB/MinIO support and out-of-the-box incremental backups and dedup.
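
    If it does work that way, the restore-from-another-machine flow should be just a repository connect using the storage credentials plus the repository password. A sketch, with bucket and keys as placeholders:

      # On a fresh machine: connect to the existing repo (prompts for its password)
      kopia repository connect s3 \
        --bucket=my-bucket \
        --access-key=myaccesskey \
        --secret-access-key=mysecretkey

      kopia snapshot list                           # see what's in the repo
      kopia restore <snapshot-id> /restore/target   # pull a snapshot back out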

  • I’d say BackupPC, as it is very much underrated and does an excellent job even for backups of larger datasets.
    It requires no local software other than rsync and SSH on the clients you wish to back up, and supports things like dedupe and compression.

    "This status was generated at 2021-08-12 12:18.
    Pool file system was recently at 25% (2021-08-12 12:16), today's max is 25% (2021-08-12 01:00) and yesterday's max was 26%.
    Hosts with good Backups:
    There are 14 hosts that have been backed up, for a total of:
    36 full backups of total size 8503.32GB (prior to pooling and compression),
    398 incr backups of total size 5333.13GB (prior to pooling and compression)."

  • No matter what you do, stay away from Duplicati. It works great for backing up files, but when it's time to restore any file, good luck. I lost a 500GB+ repo because of this POS software. It works for small repos, but the more files you have, the slower it gets... to the point where it simply stops working altogether.

  • Cloudflare blocks me from posting the guide here, so I wrote a restic guide here: https://telegra.ph/Restic-backup-guide-09-10

    I hope it will be useful for newbies who want to try the utility but ran into barriers when trying to get it to work.

  • @Kassem said: With Kopia, when the machine doing the backups is terminated/offline/FUBAR, will I be able to restore its data from another server? I.e., is all the relevant info/metadata about the repo saved in the cloud storage with the snapshots?

    Interesting question, I guess I'll have to check; it's important. I was assuming it's like BorgBackup and others, meaning you just need your encryption password to get your files back, and then you can put them wherever you want, even on other servers.

  • @Prime404 said: I’d say BackupPC, as it is very much underrated and does an excellent job even for backups of larger datasets.

    It requires no local software other than rsync and SSH on the clients you wish to back up, and supports things like dedupe and compression.

    BackupPC is the best backup tool I've ever used; it's easy to collaborate with a team not used to IT tools, for example. It does the job well, and it's stable. Totally agree ^^. It helped a lot to avoid losing performance on prod game servers, since I was doing compression on the BackupPC node and not on the remote clients.

  • @agonyzt said:
    … but when it's time to restore any file, good luck. I lost a 500GB+ repo …

    Which of course highlights the vital follow-on question: how are you all testing your backups, and how often?!

  • @o_be_one said:

    @Prime404 said: I’d say BackupPC, as it is very much underrated and does an excellent job even for backups of larger datasets.

    It requires no local software other than rsync and SSH on the clients you wish to back up, and supports things like dedupe and compression.

    BackupPC is the best backup tool I've ever used; it's easy to collaborate with a team not used to IT tools, for example. It does the job well, and it's stable. Totally agree ^^. It helped a lot to avoid losing performance on prod game servers, since I was doing compression on the BackupPC node and not on the remote clients.

    Yes, it's nice for what it is. I was initially put off by the old, outdated frontend interface, but it's one of the better solutions that is not a pain to set up (unlike, e.g., Bacula).

    The only thing to be wary of is selecting a file system that can handle many inodes, as the dedupe is all based on hardlinks, which add up very quickly and can become problematic.

  • darkimmortal Member
    edited September 2021

    I really love BorgBackup, but I'm now looking for something that natively supports S3.

    You could use borg with rclone mount (with VFS cache) to add S3 support.
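
    A rough sketch of that setup, assuming an rclone remote named "s3remote" already exists (set up via rclone config); bucket and paths are placeholders:

      mkdir -p /mnt/borg-s3
      rclone mount s3remote:my-bucket /mnt/borg-s3 --vfs-cache-mode writes --daemon

      borg init --encryption=repokey /mnt/borg-s3/repo    # first run only
      borg create --compression zstd \
        /mnt/borg-s3/repo::'{hostname}-{now}' /etc /home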

  • skorupion Member, Host Rep

    Just live on the edge like me and don't back up

  • Kopia is nice: you have a GUI, you can easily restore files even on other computers (by attaching the virtual drive), and you can run a main server so you don't need to configure your backup destination every time (like the connection to OneDrive).
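
    The virtual-drive restore mentioned above maps to the CLI roughly like this (the mount point is a placeholder):

      mkdir -p /mnt/kopia
      kopia mount all /mnt/kopia &    # exposes snapshots as a browsable filesystem
      ls /mnt/kopia                   # copy files out with normal tools, then:
      umount /mnt/kopia               # or fusermount -u /mnt/kopia on Linux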

  • Proxmox Backup Server for VMs hosted on Proxmox (and a file-level backup of a couple of Debian instances).

    Restic for most other stuff

    Will have a look at Kopia.

  • Long live tar, ftp, and cron :)

    (This said, I probably should look into one of those fancy solutions :) )

  • Just curious: for those of you who use BackupPC, do you use v3 or v4?

    Apparently, v4 is quite different from v3, and for a long time, Debian didn't package v4, and it's only now with Debian 11 that v4 is available in Debian stable.

  • @angstrom said:
    Just curious: for those of you who use BackupPC, do you use v3 or v4?

    Apparently, v4 is quite different from v3, and for a long time, Debian didn't package v4, and it's only now with Debian 11 that v4 is available in Debian stable.

    On most nodes I still run V3, as I've not gotten around to upgrading yet since they're backing up critical systems.
    However, I've done tests using V4 and it performs a lot better than V3, and there's a script that can easily deploy V4.

  • @MeAtExampleDotCom said: Which of course highlights the vital follow-on question: how are you all testing your backups, and how often?!

    Important question here! Most of the time I just need one file from some point in time (more like versioning). If you know any tool for testing, I'd be interested; I think a cronjob that just restores something occasionally is not the best. I've also done some integrity tests.

    @angstrom said: Just curious: for those of you who use BackupPC, do you use v3 or v4?

    I started on V3 maybe 7 years ago and was on V4 until last year, when I moved to BorgBackup.

  • @o_be_one said: Most of the time I just need one file from some point in time (more like versioning). If you know any tool for testing, I'd be interested; I think a cronjob that just restores something occasionally is not the best. I've also done some integrity tests.

    restic has this feature: you can mount a restic repository and then access/browse files by snapshot time/date, tag, or host. restic also has a check feature that verifies all the snapshots are OK.

    Kopia seems similar to restic.
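
    For reference, with a placeholder repo URL those two features look like:

      # Browse backups through a FUSE mount (blocks; run it in a spare terminal)
      mkdir -p /mnt/restic
      restic -r s3:s3.amazonaws.com/my-bucket mount /mnt/restic
      # then e.g.: ls /mnt/restic/snapshots/latest/

      # Verify repo structure; optionally re-read a share of the actual data
      restic -r s3:s3.amazonaws.com/my-bucket check
      restic -r s3:s3.amazonaws.com/my-bucket check --read-data-subset=1/10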

  • @MeAtExampleDotCom said:

    @agonyzt said:
    … but when it's time to restore any file, good luck. I lost a 500GB+ repo …

    Which of course highlights the vital follow-on question: how are you all testing your backups, and how often?!

    A scheduled script picks a file randomly from the data that is backed up, restores that file from the latest backup, and compares it with the original. It's a five-minute hack and could really use some more work, but it gets the job done for now.
    Sometimes it picks a file that has changed since the latest backup, which triggers the alarm and sends me an email, but that just lets me know the check works, so I haven't bothered to fix it.
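
    A minimal sketch of that idea (not their actual script), here using restic; the data path, repo URL, and mail address are placeholders:

      #!/bin/sh
      # Pick one random live file, restore it from the latest snapshot, compare.
      set -eu
      DATA=/srv/data
      REPO=s3:s3.amazonaws.com/my-bucket
      FILE=$(find "$DATA" -type f | shuf -n 1)
      TMP=$(mktemp -d)
      restic -r "$REPO" restore latest --target "$TMP" --include "$FILE"
      cmp -s "$FILE" "$TMP$FILE" || \
        echo "verify failed: $FILE" | mail -s "backup check" admin@example.com
      rm -rf "$TMP"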

  • edited September 2021

    @o_be_one said:

    @MeAtExampleDotCom said: Which of course highlights the vital follow-on question: how are you all testing your backups, and how often?!

    Important question here! Most of the time I just need one file from some point in time (more like versioning). If you know any tool for testing, I'd be interested

    My backups and tests are strung together manually using rsync+cron and similar, nothing in the way of specific tools.

    For automated testing of my mail and web servers I have small VMs running the same OS that restore from the latest backup daily - I check them occasionally to make sure they have the most recent things they should have (I could automate this bit but have never got around to it). I have simple port monitoring on them so if either is down for any length of time (for the mail server this usually means the restore failed and Zimbra didn't start properly because of that) I get a text.

    For current files I have a script that scans for all files not modified in the last day (so they might not be on the latest daily backup image), sorts that list, and runs sha256 over it. I run that on the live data and the backups via cron once a week; if there is a difference between them I get a message (and I check manually occasionally for paranoia's sake). If your backup is large and on a storage VPS, your provider may not like you keeping a CPU core busy for the length of time needed to checksum everything; use cpulimit to throttle down as needed (which indirectly limits IO throughput over large files too, useful if that is your pain point, though not so much IOPS when scanning many small files).
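
    A sketch of that kind of check (not their actual script), with placeholder paths; the backup side runs the same find rooted at its own copy:

      #!/bin/sh
      # Hash every file not modified in the last day; relative paths keep the
      # live and backup lists directly comparable.
      set -eu
      cd /srv/data    # placeholder; the backup run cd's into the backup copy
      find . -type f -mtime +1 -print0 | sort -z | xargs -0 sha256sum \
        > /var/tmp/live.sha256
      # After producing backup.sha256 the same way on the backup copy:
      diff -u /var/tmp/live.sha256 /var/tmp/backup.sha256 && echo OK
      # Throttle if needed, e.g.: cpulimit -l 25 -- sh backup-hash.sh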

    For older snapshots, a similar checksum process occasionally runs and compares the results to the previous one rather than comparing to current data. It picks a random snapshot rather than scanning absolutely everything for all time.

    All a bit Heath Robinson (with some manual checks & interventions that could be better automated) but it works - it once caught corruption due to bad sectors on one of the copies, which SMART didn't warn about until the same time (the drive didn't notice until the dying part was read from, so there was nothing for SMART to report until then).
