
Low End Backups - How do you backup your small VPS?


Comments

  • eol Member

    Yeah I see.
    Might switch over as well.

  • JerryHou Member
    edited January 2019

    If you don't have much data, I remember there is an article about using MEGA as a backup media...

    EDIT: found it, here: http://www.matteomattei.com/backup-your-server-on-mega-co-nz-using-megatools/
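
    For reference, a minimal sketch of that approach with the megatools CLI; it assumes megatools is installed, credentials live in ~/.megarc, and the paths are just illustrative:

    # create a dated archive and push it to MEGA (credentials read from ~/.megarc)
    tar -czf "/tmp/backup-$(date +%F).tar.gz" /etc /var/www
    megamkdir /Root/backups 2>/dev/null || true    # ignore "already exists"
    megaput --path /Root/backups "/tmp/backup-$(date +%F).tar.gz"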

  • eoleol Member

    Yeah, I think they had like 50GB free at some point, and it was fast enough.

    Thanked by 1JerryHou
  • uptime Member
    edited January 2019

    using MEGA as a backup media...

    One Weird Trick To Backup 50 GB - FBI Loves to Hate It!

    EDIT2:

    How has no one mentioned 1Fichier yet?

    Thanked by 1mfs
  • arhue Member
    edited January 2019

    Ansible scripts which handle backup and restore (saved to AWS S3). In case anything goes down, it will auto-restore from the backups. When I want to move VMs, it fetches from the backups as well.
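
    The backup half of such a playbook usually wraps something like the following shell steps; the bucket name and paths here are hypothetical, not arhue's actual setup:

    # push a dated archive to S3
    tar -czf "/tmp/app-$(date +%F).tar.gz" /srv/app
    aws s3 cp "/tmp/app-$(date +%F).tar.gz" s3://example-backup-bucket/app/
    # restore side: fetch a chosen archive and unpack it
    aws s3 cp s3://example-backup-bucket/app/app-2019-01-15.tar.gz /tmp/
    tar -xzf /tmp/app-2019-01-15.tar.gz -C /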

  • Backwhat?

    Thanked by 2eol mfs
  • eoleol Member

    @Wolf said:
    Backwhat?

    That thingy where you upload your stuff directly to the NSA.

    Thanked by 1Wolf
  • Daniel15 Veteran
    edited January 2019

    eva2000 said: guide on using zstd for logrotate compression for nginx and php-fpm logs

    Instead of manually specifying the compresscmd, uncompresscmd and compressoptions for each logrotate config file, you can instead compile your own version of logrotate that uses zstd by default, by using the --with-compress-command, --with-decompress-command and --with-compress-extension arguments to ./configure :)
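
    A rough sketch of that build, using the flags Daniel15 mentions (the zstd/unzstd paths may differ on your distro):

    # build logrotate with zstd as its default compressor
    ./configure \
      --with-compress-command=/usr/bin/zstd \
      --with-decompress-command=/usr/bin/unzstd \
      --with-compress-extension=.zst
    make && sudo make install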

    Thanked by 2eol eva2000
  • eol said: That thingy where you upload your stuff directly to the NSA.

    I like to tell myself it's better for the environment to upload them straight to the NSA rather than them having to fetch it, which consumes a lot of energy.

    Regardless, access to backups is a low priority for agencies compared to access to live systems. And since backups are essentially just a copy of a live system, there's no extra benefit in the backups for them.

    Thanked by 1eol
  • aldothetroll Member
    edited January 2019

    I have Google Drive w/ unlimited storage, shared web hosting, a managed server w/ FTP access, and a few unmanaged servers.

    How should I set up an automated backup solution? Right now I'm doing everything manually.

    I was thinking FTP to a server for backups and then syncing that to GDrive.

  • mfs Banned, Member

    Since there's been some discussion about compression methods, I'd just add that borg can be instructed to compress only compressible data with your compression method of choice

    auto,C[,L]
      Use a built-in heuristic to decide per chunk whether to compress or not.
      The heuristic tries with lz4 whether the data is compressible.
      For incompressible data, it will not use compression (uses "none").
      For compressible data, it uses the given C[,L] compression - with C[,L]
      being any valid compression specifier
    

    Supported compression methods are: none, lz4, zstd, zlib, lzma (xz)
    I'm usually fine with auto,lzma,5 (default level for lzma would be "6")
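
    A minimal sketch of what that looks like on the command line (repo path and archive name are placeholders):

    # compress only chunks the lz4 heuristic deems compressible, using lzma level 5
    borg create --compression auto,lzma,5 --stats /path/to/repo::'{hostname}-{now}' /etc /srv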

    Thanked by 2eol eva2000
  • Daniel15 said: Instead of manually specifying the compresscmd, uncompresscmd and compressoptions for each logrotate config file, you can instead compile your own version of logrotate that uses zstd by default, by using the --with-compress-command, --with-decompress-command and --with-compress-extension arguments to ./configure

    indeed.. though not everyone would have zstd installed by default, so probably better via config file edits

    mfs said: Supported compression methods are: none, lz4,zstd, zlib, lzma(xz)

    nice that it supports zstd !

  • mfs Banned, Member

    eva2000 said:

    nice that it supports zstd !

    if combined with the built-in heuristic, it can arguably be used with somewhat more "aggressive" compression levels; zstd support was introduced in borg 1.1.4 (circa December 2017), and as the man page goes

    Archives compressed with zstd are not compatible with borg < 1.1.4
    

    so the user has to decide if compatibility with ancient borg versions is relevant or not. Luckily, it's always possible to

    --recompress allows to change the compression of existing data in archives.  Due to how Borg stores
    compressed size information  this might display incorrect information for archives that were not recreated
    at the same time.  There is no risk of data loss by this.
    
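    In practice that's a borg recreate run; a sketch with a placeholder repo path (the exact flag syntax varies a bit between borg versions):

    # rewrite existing archives with zstd level 10
    # (newer borg versions take a mode argument, e.g. --recompress=if-different)
    borg recreate --recompress -C zstd,10 /path/to/repo
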
    Thanked by 2eva2000 uptime
  • I used rdiff-backup for years, but switched to borg some time ago.

    Borg can also incrementally backup block devices and still use deduplication. Very useful.
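
    For the block device case, a sketch of the kind of invocation borg documents for this (device path and repo are illustrative):

    # --read-special makes borg read the device contents instead of storing the device node itself
    borg create --read-special --compression auto,zstd,3 /path/to/repo::'vm-disk-{now}' /dev/vg0/vm-disk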

  • @TimboJones said:

    @sdfantini said:
    My goto backup strategy is:

    https://github.com/gilbertchen/duplicacy

    duplicacy every 12 hours. This tool can deduplicate, encrypt and compress.

    duplicacy also runs a pre-backup job to dump MySQL.

    duplicacy also supports backups to almost any cloud/remote storage out of the box (no need to use rclone).

    So I run 2 daily jobs that also back up MySQL (I can run them any minute since it's deduplicated) and in real time I upload the backups to DigitalOcean Spaces.

    Man, that was looking pretty awesome until I saw they charge $99/year for commercial use.

    It's only $20/year for the CLI license.
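
    For anyone curious, the CLI workflow is roughly as below; the snapshot ID and DigitalOcean Spaces URL are illustrative, not sdfantini's actual config:

    # initialise the repository against an S3-compatible storage, then run a backup
    cd /srv/app
    duplicacy init app-backups s3://nyc3@nyc3.digitaloceanspaces.com/example-space/app
    duplicacy backup -stats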

  • @hostnoob said:
    Dropbox and simple shell script

    update: just checked and noticed Dropbox doesn't really support Linux much any more, so now using MEGA

  • eol Member
    edited January 2019

    I used mega a few years ago.
    Worked pretty well using the cli-tools.
    Just "encrypt" it and it's probably fine.

    EDIT:
    *megatools.

  • @eva2000 said:

    Daniel15 said: Instead of manually specifying the compresscmd, uncompresscmd and compressoptions for each logrotate config file, you can instead compile your own version of logrotate that uses zstd by default, by using the --with-compress-command, --with-decompress-command and --with-compress-extension arguments to ./configure

    indeed.. though not everyone would have zstd installed by default, so probably better via config file edits

    I also forgot to mention that the btrfs file system supports automatic transparent zstd compression, so another option is to use btrfs, enable compression, and then just have logrotate rotate it with no compression (rely on the file system to do it for you).

    I think ZFS is adding zstd support too.
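
    A sketch of that setup, assuming a dedicated btrfs volume for logs (device and mount point are placeholders); logrotate itself then just rotates with nocompress:

    # mount with transparent zstd compression, or flip it on for an existing directory
    mount -o compress=zstd /dev/sdb1 /var/log
    btrfs property set /var/log compression zstd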

    Thanked by 1eva2000
  • eol Member

    Zstd is the future.
    And the future is now.

  • Daniel15 said: I also forgot to mention that the btrfs file system supports automatic transparent zstd compression, so another option is to use btrfs, enable compression, and then just have logrotate rotate it with no compression (rely on the file system to do it for you).

    yup https://btrfs.wiki.kernel.org/index.php/Compression

    I also updated my zstd benchmarks. I have since added pigz level 11 for zopfli compression, plus all 32 zstd compression levels: the negative levels (-10 to -1), which focus on compression speed; the normal levels 1 to 19; and the 3 ultra levels 20 to 22, which focus on better compression ratios: https://community.centminmod.com/threads/custom-tar-archiver-rpm-build-with-facebook-zstd-compression-support.16243/#post-70033
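
    For a quick feel of those ranges, a sketch of the corresponding zstd invocations (file names are placeholders):

    zstd --fast=5 -T0 big.tar -o big.fast.tar.zst             # negative/fast levels: speed over ratio
    zstd -19 -T0 big.tar -o big.19.tar.zst                    # top of the normal range
    zstd --ultra -22 --long=27 -T0 big.tar -o big.22.tar.zst  # ultra levels: best ratio, more RAM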

    Thanked by 1Daniel15
  • MrH Member

    Cron + mysqldump + megatools
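
    A minimal sketch of that combo as a cron entry; paths are illustrative, and the % is escaped because cron treats it specially:

    # /etc/cron.d/db-backup: nightly dump, gzip, push to MEGA via megatools
    0 3 * * * root mysqldump --all-databases --single-transaction | gzip > /var/backups/db-$(date +\%F).sql.gz && megaput --path /Root/backups /var/backups/db-$(date +\%F).sql.gz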

    Thanked by 1eol
  • by the grace of god and these two fingers

    Thanked by 1eol
  • jsg Member, Resident Benchmarker

    I like zstd but I don't like the hype around it.

    The truth is that zstd is not "the best" compression/decompression - the same is true for any of the others.

    Compressing (typically already compressed) videos is very different from compressing text/ASCII, which is again very different from binary data blobs or executables. And thousands and thousands of small files is very different from a couple of large files. The same compressor can also deliver quite different results depending on the processor. And tuning or pre-training for different file types (if available) can make a big difference too.

    What makes zstd the (quite probably) best allround mechanism is that it's quite configurable and that it delivers above average and usually even really good results with pretty much anything you throw at it.

    In other words: Using zstd is almost always (for any use case/file type) among the best, which means that it's ideal for "fire and forget", no need to analyze or fine tune; just throw zstd at it and be done.

    Thanked by 1eol
  • jsg said: And tuning or pre-training for different file types (if available) can make a big difference too.

    What makes zstd the (quite probably) best allround mechanism is that it's quite configurable and that it delivers above average and usually even really good results with pretty much anything you throw at it.

    In other words: Using zstd is almost always (for any use case/file type) among the best, which means that it's ideal for "fire and forget", no need to analyze or fine tune; just throw zstd at it and be done.

    yeah, just look at the advanced options section for zstd; there are so many extra parameters to tune for further speed or compression ratio gains https://github.com/facebook/zstd/blob/dev/programs/zstd.1.md
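
    One of the more useful extras is dictionary training for lots of small, similar files; a sketch (sample paths are placeholders):

    # train a shared dictionary, then reuse it for compression and decompression
    zstd --train samples/*.json -o small-files.dict
    zstd -D small-files.dict -9 data.json -o data.json.zst
    zstd -D small-files.dict -d data.json.zst -o data.json.out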

    Thanked by 1eol
  • @jaden said:

    @TimboJones said:

    @sdfantini said:
    My goto backup strategy is:

    https://github.com/gilbertchen/duplicacy

    duplicacy every 12 hours. This tool can deduplicate, encrypt and compress.

    duplicacy also runs a pre-backup job to dump MySQL.

    duplicacy also supports backups to almost any cloud/remote storage out of the box (no need to use rclone).

    So I run 2 daily jobs that also back up MySQL (I can run them any minute since it's deduplicated) and in real time I upload the backups to DigitalOcean Spaces.

    Man, that was looking pretty awesome until I saw they charge $99/year for commercial use.

    It's only $20/year for the CLI license.

    Thanks, I must have been on the Vertical Backup page right before I posted.

  • @eva2000 said:
    Maybe cheap but not unlimited

    zstd at a low compression level is still fast: you can use level -1 up to the default 3 and get speeds close to disk speed while still getting a decent compression ratio, which helps when moving backup files over a network where speed is limited, i.e. those 100-250Mbps capped servers. So you're sending 90MB compressed instead of 1GB uncompressed over a 100Mbps link.

    In my case I have 48TiB of usable storage for 180 euro, which puts the price at 3.75 euro per terabyte. Sure, I could compress stuff and save some space, but on the other hand, because files are stored in raw format, restores are super easy: it's literally just rsyncing stuff back to the source server, nothing more.

    I do shared hosting, which means working with a gazillion files, and even using rsync with compression disabled I never actually saturate the server's gigabit link; not because of a lack of connectivity, but simply because of the number of files being handled. So the network generally isn't the problem, not even during restores.

    zstd would be nice as a compression method for rsync transfers if the compression is fast enough, because in cases where the network is the bottleneck you could benefit from it during transport. Currently I run rsync without compression, because the compression overhead actually makes the backups and/or restores take longer than without it :-D

    So I might look at it for transport only. For storage on the system I don't mind it being uncompressed, simply because the actual storage requirement isn't too big: 30 daily backups plus 6 monthly ones currently take up 22 terabytes, which isn't too much.
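
    Recent rsync can do exactly that, compressing on the wire only; a sketch assuming rsync >= 3.2 on both ends (host and paths are placeholders):

    # zstd is used for transport only; files land uncompressed at the destination
    rsync -aHAX --compress --compress-choice=zstd --compress-level=3 /home/ backup@storage.example.com:/backups/home/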

  • Zerpy said: even with rsync and compression disabled, I never actually utilize the gigabit link in the server

    You might be better off with AES-NI accelerated ciphers on your SSH connection, and disabled compression

    rsync -aHAXv -e "ssh -T -o Compression=no -c aes128-ctr"
    

    BTW, LZ4 (same author as ZSTD) has the fastest decoder at the cost of compression ratios. https://lz4.github.io/lz4/

    Thanked by 1eol
  • @eva2000 said:

    Daniel15 said: By the way, I didn't know you're Aussie. I'm from Melbourne but am living in the USA now

    yup Brisbanite :)

    Me too! I'm on the north side (Caboolture). Where are you? I'm excited to see a fellow Brisbanite here lol

    As for backup strategy, mysqldump + duplicity for each service or VPS depending on how it's set up.
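
    A minimal sketch of that kind of job (target URL and paths are illustrative):

    # dump the databases, then let duplicity do an encrypted (GPG) incremental upload over SFTP
    mysqldump --all-databases --single-transaction | gzip > /var/backups/all.sql.gz
    duplicity --full-if-older-than 1M /var/backups sftp://backup@storage.example.com//backups/$(hostname)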
