
    UrBackup review, or other self hosted solutions

    Has anyone used UrBackup and can provide some feedback (specifically, in a Linux/Unix environment)?

    Any other recommendations for GUI-based, self-hosted backup solutions?

    Comments

    • I've been using it successfully for the last couple of years to back up a few Linux, Windows, and Mac systems. It's certainly not perfect, but it checked most of the boxes I wanted in a backup solution.

      The biggest advantage in my opinion is its deduplication, which helps reduce the total size of backups if you have duplicated content on different machines.

      UrBackup uses Linux hard links to handle the versioning, which is a bit of a mixed bag. On the one hand, it's nice to be able to access the files directly through the filesystem. On the other hand, this makes backups rather slow and vastly expands the number of files used on your system.

      The system requirements are rather high, I would say, especially if you have a lot of files. The UrBackup database needs to be stored on quick flash storage, or else routine operations can take forever. The database also rapidly expands in size based on the number of files and versions you're storing. Mine is sitting at around 50GB at the moment.

      I've got my UrBackup server running on a Hetzner VPS instance, which points to a dedicated server as the backup repository. This separation of application and storage seems to be a good way to go in my experience.

      If you have any more specific questions, just let me know.

    • @aj_potc said:
      I've been using it successfully for the last couple of years to back up a few Linux, Windows, and Mac systems. It's certainly not perfect, but it checked most of the boxes I wanted in a backup solution. [...]

      It seems to hit all of the marks for us too and it's open source, so that's always a huge plus.

      We'll give it a shot, much appreciated!!

    • rcxb Member
      edited January 6

      @PainlessHosting said:
      Has anyone used UrBackup and can provide some feedback (specifically, in a Linux/Unix environment)?

      I've been watching UrBackup for its centrally-managed Windows backups, but Veeam Endpoint or Macrium Reflect are far superior free options if you don't have enough nodes to need centralized management (or don't require open source). I don't see why you'd want to use UrBackup for Unix/Linux. There you should just use Borg, and if you need a GUI, there are several front-ends available:
      https://www.reddit.com/r/linux/comments/ahpdzk/borgbackup_frontends/

      UrBackup doesn't do any compression or deduplication on the client side, so you're sending a lot more data over the wire. On a LAN that's not a big deal, but across the internet it's a real problem. In fact, UrBackup doesn't do compression or deduplication itself at all; it depends on the filesystem for that, forcing you to use btrfs (or ZFS, where dedupe performance is terrible):
      https://www.urbackup.org/system_support.html
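
A typical Borg workflow looks something like this; the repository path, passphrase, and retention policy below are placeholders, not recommendations:

```shell
# Initialize an encrypted repository, take a compressed backup, and
# apply a retention policy. All values here are examples only.
export BORG_PASSPHRASE='example-only-passphrase'

borg init --encryption=repokey /backups/borg-repo
borg create --stats --compression zstd \
    /backups/borg-repo::{hostname}-{now} /etc /home
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /backups/borg-repo
```

Borg does client-side chunking, deduplication, compression, and encryption, which is exactly the feature set being discussed in this thread.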

    • @rcxb said:

      UrBackup doesn't do any compression or deduplication on the client side, so you're sending a lot more data over the wire. On a LAN that's not a big deal, but across the internet it's a real problem. In fact, UrBackup doesn't do compression or deduplication itself at all; it depends on the filesystem for that, forcing you to use btrfs (or ZFS, where dedupe performance is terrible):
      https://www.urbackup.org/system_support.html

      This hasn't been my experience at all. I have a lot of duplicate data and have not observed my clients sending this data to the server if it already exists there. That's the reason why UrBackup makes such heavy use of its database: it stores the file hashes there, so it knows whether it's seen a certain file before. This causes a lot of IO on the server, and some load on the client while the hashes are calculated there, but it saves bandwidth because the clients don't have to send the files.

      I also can't confirm what you've written about deduplication requiring a special filesystem. I've used both ext4 and xfs with UrBackup, and neither of these filesystems has built-in deduplication features. Of course, if you use a deduplicating filesystem, then UrBackup can take advantage of this. But if you don't, then UrBackup falls back to its system of hard links to manage the versioning and deduplication.
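
The hash-index approach described above can be illustrated in a few lines of shell. The hash-named store and the /tmp paths are illustrative only; UrBackup keeps its index in a database rather than in filenames:

```shell
#!/bin/sh
# File-level deduplication via a content-hash index: identical files are
# detected by hash and stored only once, so duplicates cost no space
# (and, when the client hashes, no bandwidth).
SRC=/tmp/dedup-demo/src
STORE=/tmp/dedup-demo/store
mkdir -p "$SRC" "$STORE"
echo "same data" > "$SRC/a.txt"
echo "same data" > "$SRC/b.txt"     # duplicate content, different name

for f in "$SRC"/*; do
  sum=$(sha256sum "$f" | cut -d' ' -f1)
  if [ -e "$STORE/$sum" ]; then
    echo "duplicate, skipping: $f"   # already stored; nothing to transfer
  else
    cp "$f" "$STORE/$sum"
  fi
done

echo "stored copies: $(ls "$STORE" | wc -l)"
```

Note this works on any filesystem; no btrfs or ZFS features are involved.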

    • I'm not too concerned about client-side deduplication for bandwidth savings, just about saving storage capacity on the server side, which UrBackup supports. However, upon further reading, it doesn't look like UrBackup supports full image backups for Linux hosts, which may be a deal breaker.

      Borg looks interesting, but it also looks like it'll be a project of its own just to get everything set up with the plugins and GUI, which has me concerned about stability and maintenance/downtime in a production environment.

      Still looking and open to feedback. Willing to pay for a third party hosted solution, but only if it's cost effective (which most are not).

      Jungle Disk with Amazon or Google storage on the back end looks like a good, cheap solution. Has anyone used them who can provide feedback?

      Thanks guys!!

    • It's true that UrBackup is focused mainly on file-level backups. It's not the choice for Linux "bare metal recovery" needs. For redundancy, I like to use both a file-level and block-level backup strategy at the same time. I also recommend using more than one backup solution, preferably with totally different software and independent storage destinations. This helps reduce the possibility that a failure in one backup solution or component will take out your other backups.

      R1Soft is one solution that can do full image backups. I use this in production, along with replication. I pay for the licenses and host the backups on my own infrastructure (dedicated server and storage VPS). I find this cost-effective, whereas hosted solutions for my amount of data would cost way too much.

      Veeam, which was already mentioned, can also do this, and I've used their free Linux agent successfully to do a test backup and bare metal recovery. But please be aware that this isn't trivial; depending on your partition/disk layout, RAID, and whether you're using LVM, you may find bare metal recovery to involve quite a lot of steps. Definitely try it and document the steps before you rely on it!

    • edited January 7

      @aj_potc said:
      For redundancy, I like to use both a file-level and block-level backup strategy at the same time. I also recommend using more than one backup solution, preferably with totally different software and independent storage destinations. This helps reduce the possibility that a failure in one backup solution or component will take out your other backups.

      That's the plan! Our goal is local file-level backups per site plus centralized full-system backups.

      R1Soft is one solution that can do full image backups. I use this in production, along with replication. I pay for the licenses and host the backups on my own infrastructure (dedicated server and storage VPS). I find this cost-effective, whereas hosted solutions for my amount of data would cost way too much.

      I've looked at R1, but their prices don't scale well, and it looks like things can get fairly expensive fairly quickly. Maybe I overestimated my expected usage or misunderstood their pricing; I'll look again.

      Veeam, which was already mentioned, can also do this, and I've used their free Linux agent successfully to do a test backup and bare metal recovery. But please be aware that this isn't trivial; depending on your partition/disk layout, RAID, and whether you're using LVM, you may find bare metal recovery to involve quite a lot of steps. Definitely try it and document the steps before you rely on it!

      I'll look into this, thanks!

      I've also been looking at CloudBerry. They seem to have everything I need, and the one-time fee isn't too bad. Edit: Alas, CloudBerry doesn't support full image backups on Linux hosts (yet; I just signed up for the beta program, though). I may have to come up with a custom solution or use multiple products.

      There seems to be a huge gap in the market for a good open source backup/restore solution. Apache or similar need to get on this...

    • rcxb Member
      edited January 7

      @aj_potc said:
      This hasn't been my experience at all.

      I linked my sources. If I'm wrong, take it up with the UrBackup developers who said so.

      But if you don't, then UrBackup falls back to its system of hard links to manage the versioning and deduplication.

      I wouldn't call hard links "deduplication". It won't work well at all on big databases, VM disk images, etc., and even on a normal system you're wasting space. Most backup software got client-side block-level deduplication 10 years ago.

      @PainlessHosting said:
      There seems to be a huge gap in the market for a good open source backup/restore solution. Apache or similar need to get on this...

      There's no shortage of open source backup software: Amanda, Bacula, BackupPC, Clonezilla, UrBackup, borg, bup, restic, duplicity, DAR, rsync, and plenty more.
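
As a rough illustration of the client-side block-level deduplication mentioned above, here's a fixed-size chunking sketch. Real tools like Borg and restic use content-defined chunking with a rolling hash, which handles insertions far better; paths and sizes here are examples:

```shell
#!/bin/sh
# Fixed-size block deduplication sketch: split a file into 4 KiB chunks,
# hash each chunk, and store only the unique ones. This is why repetitive
# data (VM images, databases with padding) dedupes so well at block level.
WORK=/tmp/block-dedup-demo
mkdir -p "$WORK/split" "$WORK/chunks"

# A file with lots of internal repetition (stand-in for a VM image):
yes "repeated block content" | head -c 1048576 > "$WORK/disk.img"

split -b 4096 "$WORK/disk.img" "$WORK/split/chunk-"
for c in "$WORK/split"/chunk-*; do
  sum=$(sha256sum "$c" | cut -d' ' -f1)
  [ -e "$WORK/chunks/$sum" ] || cp "$c" "$WORK/chunks/$sum"
done

echo "input blocks:  $(ls "$WORK/split" | wc -l)"
echo "unique blocks: $(ls "$WORK/chunks" | wc -l)"
```

Unlike the file-level hard-link scheme, this deduplicates *within* large files, not just across identical ones.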

    • @rcxb said:
      There's no shortage of open source backup software: Amanda, Bacula, BackupPC, Clonezilla, UrBackup, borg, bup, restic, duplicity, DAR, rsync, and plenty more.

      There doesn't seem to be a single automated open source solution that provides both file and image backups for bare-metal recovery and also supports *nix-based systems. Each of those has its own specific use case, and each has limitations, whether in automation, backup methods, or supported systems. Maybe I'm just expecting too much from a single solution.

    • @rcxb said:

      @aj_potc said:
      This hasn't been my experience at all.

      I linked my sources. If I'm wrong, take it up with the UrBackup developers who said so.

      Yes, you're wrong, but only because you're misreading the documentation. I've used UrBackup in production for two years, so I speak with some confidence.

      It says at the very top of the page you've linked: "There are some considerations however if you want advanced features like compression or block level deduplication." The remaining parts of the document address those advanced features.

      The system supports file-based deduplication out of the box, and it doesn't depend on any specific filesystem to do this.

      I wouldn't call hard links "deduplication". It won't work well at all on big databases, VM disk images, etc., and even on a normal system you're wasting space. Most backup software got client-side block-level deduplication 10 years ago.

      Call it whatever you like, but file-based deduplication is saving terabytes of space on my backup repository. In my book, that's already a big win. Of course you're correct that this doesn't help with databases or log files that change over time; but it's a huge help when you're doing Web hosting and have static file content that's duplicated in several locations.

      UrBackup has some big holes in its feature set. Block based deduplication would be useful, compression would be nice, and currently there's no way to encrypt your backups. It's not a perfect solution, but it's one of the better ones I've found that has a nice interface and is multiplatform. I don't know if a perfect open source solution exists.

    • aj_potc Member
      edited January 7

      Since backups are a topic near and dear to me, I've thought a bit more about what I would consider a "perfect" backup solution.

      Here are the features or properties it should have, in no particular order:

      • Versioning. This is a basic requirement for any backup software. Anything else is just a mirror.
      • Compression. Basic compression helps to reduce the size of the backup.
      • Internal verification. Backups and any databases required by the backup software should be scanned periodically to look for signs of corruption. Bonus points if the software can repair and recover by itself.
      • Notifications and logging. Admins must be notified if backup jobs fail, and logs should be extensive for troubleshooting.
      • Graphical UI. Most configuration -- and all monitoring -- should be possible with a nice UI.
      • Deduplication. File and/or block-based deduplication is extremely helpful in reducing storage needs, especially if you're backing up many endpoints.
      • Support for replication. It should be possible to set up automatic replication so that you don't have to rely on a single backup destination. (R1Soft can do this; UrBackup can't.)
      • Good balance between speed and IO requirements. Backups and restorations need to be fast, but the backup agents have to be careful not to cause too much stress on each endpoint.
      • Multiplatform endpoint support. I want to be able to back up Linux, Windows, and Mac systems.
      • Encryption. The backup destination should have an option to enable encryption at rest. We can't always trust our offsite destinations.
      • Support for cloud storage. Allow backups to be stored on cloud providers such as AWS, Azure, Google Drive, Backblaze B2, etc., perhaps as a replication target.
      • Bare metal recovery. I want to be able to boot an ISO and, as easily as possible, perform a bare metal restore of a working system.
      • Open source. I'd prefer the solution to be an active open source project, but I'm not completely opposed to commercial solutions.

      Is there anything I've forgotten?

      And perhaps more importantly, which backup solutions come closest to fulfilling most of these wishes?
