Best way to backup files and MySQL DB?

hostnoob Member
edited April 2014 in Help

I want to automatically tar /var/www/ and back up the DB (I can write a script to do both), but then where's the best place to store them? Dropbox/Box/Google Drive, or a cheap VPS? Or something like OVH/AWS backup? I think I also saw a provider on here offering backup services for around $3/yr, but I forget who it was.

The tar.gz file will be about 50 MB or less, so 1 GB should be enough for one backup a day (each replaced after two weeks).
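
A minimal sketch of such a script (the paths, the database name mydb, and the credentials setup are assumptions; mysqldump is assumed to read login details from ~/.my.cnf):

    #!/bin/bash
    # Nightly backup sketch: tar the web root and dump the DB, keeping a
    # rolling two-week window by naming files after the day of year modulo 14.
    SLOT=$(( 10#$(date +%j) % 14 ))   # 0-13; each slot is overwritten two weeks later
    DEST=/home/username/backups

    tar -czf "$DEST/www-$SLOT.tar.gz" /var/www/
    mysqldump --single-transaction mydb | gzip > "$DEST/mydb-$SLOT.sql.gz"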

Comments

  • This is what I'm using on several CentOS boxes. It takes about 43 seconds to back up a ~2 GB (and growing) database. :)

    mysqlhotcopy command (one line below):

    [/home/username/]# /usr/bin/mysqlhotcopy -u root -p secret db_name /home/username/backups --allowold
    

    Rsync to remote server (one line below):

    # rsync to myfavoritebackupserver.com
    [/home/username/]# /usr/bin/rsync -avzHx --delete --stats --progress --exclude-from '/home/username/rsync-exclude.txt' -e "ssh -2 -p 22" /home/username/ [email protected]:/home/remote-user/backup
    
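    To run both commands unattended, a crontab entry along these lines would do (the wrapper script name backup.sh is just an assumption):

    # run the backup wrapper nightly at 03:30 and log its output
    30 3 * * * /home/username/backup.sh >> /home/username/backup.log 2>&1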
    Thanked by hostnoob, GM2015
  • @hdpixel said:
    This is what I'm using on several CentOS boxes. It takes about 43 seconds to back up a ~2 GB (and growing) database. :)

    mysqlhotcopy command (one line below):

    > [/home/username/]# /usr/bin/mysqlhotcopy -u root -p secret db_name /home/username/backups --allowold
    > 

    Rsync to remote server (one line below):

    > # rsync to myfavoritebackupserver.com
    > [/home/username/]# /usr/bin/rsync -avzHx --delete --stats --progress --exclude-from '/home/username/rsync-exclude.txt' -e "ssh -2 -p 22" /home/username/ [email protected]:/home/remote-user/backup
    > 

    Thanks. Do you just back up to another VPS?

  • @hostnoob Yes. It can be any other VPS.

    The next command backs up to Amazon S3. It uses this tool: http://s3tools.org/s3cmd

    Back up to Amazon S3 (one line below):

    [/home/username/]# s3cmd sync  --delete-removed --exclude-from /home/username/excludes.files /home/username/public_html/ s3://my-s3-bucket/backup/website1/public_html/
    
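    Restoring would presumably be the same sync in reverse (same bucket and paths as above):

    [/home/username/]# s3cmd sync s3://my-s3-bucket/backup/website1/public_html/ /home/username/public_html/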
    Thanked by hostnoob
  • This is how we back up a forum with hundreds of thousands of members, nearly a million posts, and 2 million attachment files plus local avatars.

    We host attachment and avatar files on the Gluster distributed file system, which comes with native support for replication and geo-replication, so if the cluster breaks we can switch to the backup copy immediately.

    The rest of the forum is mostly log files and source code. It can easily be snapshotted with an rsync command to a temporary directory, then archived with either a full tar or an incremental tar (tutorials for that can be found elsewhere).

    The database runs in a master-slave replication setup with an offshore VPS. Whenever I want to dump the database, I issue a STOP SLAVE command and then do a mysqldump and tar.
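
    A sketch of that dump step (the database name forumdb and the backup path are hypothetical; credentials assumed to be in ~/.my.cnf):

    mysql -e 'STOP SLAVE;'                    # pause replication on the slave
    mysqldump forumdb > /backups/forumdb.sql  # consistent dump while the slave is paused
    mysql -e 'START SLAVE;'                   # resume replication
    tar -czf /backups/forumdb.tar.gz -C /backups forumdb.sql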

    We host our backup on Backupsy; it's $5/month, which is perfect for a backup at this scale. If you are looking for something much smaller, you might consider splitting it up and emailing it to yourself. (Gmail offers a lot of space.)

    Thanked by JahAGR
  • @msg7086 said:
    This is how we back up a forum with hundreds of thousands of members, nearly a million posts, and 2 million attachment files plus local avatars.

    We host attachment and avatar files on the Gluster distributed file system, which comes with native support for replication and geo-replication, so if the cluster breaks we can switch to the backup copy immediately.

    Is this commercially run forum software or custom-made? I'm curious about how your application handles distributed data for users, if you don't mind sharing.

  • @daxterfellowes said:
    Is this commercially run forum software or custom-made? I'm curious about how your application handles distributed data for users, if you don't mind sharing.

    Check out GlusterFS. Basically you can mount the file system as a directory and this won't break anything. Just make sure you don't do a lot of lookup operations on the mount point, such as executing PHP scripts with a lot of includes.
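
    For example, mounting a volume looks like this (the volume name gv0 and the hostname are illustrative):

    # mount a gluster volume like any other filesystem
    mount -t glusterfs gluster1.example.com:/gv0 /var/www/attachments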

    Thanked by daxterfellowes
  • @msg7086 said:
    Check out GlusterFS. Basically you can mount the file system as a directory and this won't break anything. Just make sure you don't do a lot of lookup operations on the mount point, such as executing PHP scripts with a lot of includes.

    As in scandir-type handles that read the drive? I assume so, since it's really just a masked remote drive.

    Thanks for the information. I'd like to incorporate this into mine, no matter how low scale it is. I'd probably only be serving user pictures and documents from here.

  • @daxterfellowes said:
    Thanks for the information. I'd like to incorporate this into mine, no matter how low scale it is. I'd probably only be serving user pictures and documents from here.

    It may take some time to understand its concepts before you can make good use of it. Make sure you're comfortable with it before using it in a production service.

    Hope this helps.

  • Falzo Member
    edited April 2014

    I recommend automysqlbackup and rdiff-backup...
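
    A typical rdiff-backup run pushes incrementals over SSH, e.g. (hostname and paths are illustrative):

    # incremental, reverse-diff backup; older versions stay recoverable on the remote
    rdiff-backup /var/www [email protected]::/backups/www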

    Also very much worth a look: Attic backup: https://attic-backup.org/
    It uses deduplication, so even if you back up multiple servers they all benefit from it, and the daily increments stay very small...

                     Original size   Compressed size   Deduplicated size
    This archive:    9.47 GB         6.15 GB           143.02 MB
    All archives:    421.26 GB       295.21 GB         22.29 GB
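
    Typical usage is roughly as follows (the repository path and archive naming are just examples):

    attic init /backups/attic.repo                               # one-time repository setup
    attic create /backups/attic.repo::www-$(date +%F) /var/www   # deduplicated daily archive
    attic prune /backups/attic.repo --keep-daily=7 --keep-weekly=4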

    Thanked by Spencer
  • Oh BTW you can also use Git/SVN for incremental backup.
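
    A rough sketch of the Git variant (paths and the remote name are assumptions):

    cd /var/www && git init .                          # one-time setup
    git add -A && git commit -m "backup $(date +%F)"   # run nightly; each commit is an increment
    git push backup master                             # assumes a remote named "backup" was added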

  • tar does incremental backups with the -g option.
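
    For example, with GNU tar (the snapshot file path is an assumption):

    tar -czg /backups/www.snar -f /backups/www-full.tar.gz /var/www               # first run: full backup
    tar -czg /backups/www.snar -f /backups/www-incr-$(date +%F).tar.gz /var/www   # later runs: changes only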
