What does your backup setup look like?

ulayer Hosting Provider OG
edited December 2019 in Technical

Curious to see how everyone does their backups; this is how we do ours :smile:

Currently we automate all of our backups using an Ansible role that manages our borgbackup server along with all of the borg clients. It adds/modifies scripts and crons on the clients (hypervisors) based on the variables we set, and makes sure the clients can SSH in as an unprivileged user and only access a specific directory, as restricted in .ssh/authorized_keys, so they can push all of their data into a borg repo. Before the borg portion of the script runs, though, another tool on Proxmox (vzdump) backs up all of the VMs daily to the local disk and compresses them with pigz (multi-threaded gzip). Borg then sends all of the specified directories & files to our remote borg server for safekeeping.
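
For anyone curious, the authorized_keys restriction plus the client-side schedule looks roughly like the sketch below (host names, paths and times are made up for illustration, not our actual config):

    # ~backupuser/.ssh/authorized_keys on the borg server: pin this client's key
    # to borg serve and a single repo path
    command="borg serve --restrict-to-path /srv/borg/hv01",restrict ssh-ed25519 AAAA...example... root@hv01

    # /etc/cron.d on the hypervisor: vzdump first (pigz enabled via "pigz: N" in
    # /etc/vzdump.conf), then push the dumps into the borg repo; in practice the
    # second job is a wrapper script that exports BORG_PASSPHRASE before calling borg
    30 1 * * * root vzdump --all 1 --compress gzip --dumpdir /var/lib/vz/dump
    0 4 * * * root borg create ssh://backupuser@borg.example.net/srv/borg/hv01::{hostname}-{now} /var/lib/vz/dump /etc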

I just added support for rclone, so after all of the clients finish their backups for the day, the borg server does a weekly rclone sync to our bucket on Wasabi object storage. I picked Wasabi because they don't charge for egress (outgoing bandwidth), so in the event of a major disaster where we've lost our borg server, we could still retrieve our borg repos from Wasabi.
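
The rclone part is just a weekly job on the borg server, something along these lines (remote name, bucket and schedule are placeholders):

    # /etc/cron.d on the borg server: mirror the repo directory to Wasabi weekly
    0 5 * * 0 root rclone sync /srv/borg wasabi:example-borg-bucket --transfers 8

One thing to keep in mind is that rclone sync mirrors deletions too, so anything pruned from a repo disappears from the bucket on the next weekly run.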

Universal Layer LLC, a privacy conscious hosting provider
Check us out @ ulayer.net / twitter.com/ulayer_net

Comments

  • Well, you let the backup machine pull the backups, so they can't be sniped, just in case.
    Whatever it is, say a cronjob making database dumps, the backup server comes a few minutes later, pulling out that dump and archiving it.

    Can be archived with rsync and cron, easy.
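
    Something like this on the backup box, just as an example (host, user and paths are made up):

        # /etc/cron.d: the dump runs on the server at 02:00, the backup box pulls a few minutes later
        10 2 * * * backup rsync -a web1.example.net:/var/backups/dumps/ /srv/backups/web1/$(date +\%F)/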

    Thanked by (1) ulayer
  • ulayer Hosting Provider OG
    edited December 2019

    @Neoon said:
    Well, you let the backup machine pull the backups, so they can't be sniped, just in case.
    Whatever it is, say a cronjob making database dumps, the backup server comes a few minutes later, pulling out that dump and archiving it.

    Can be archived with rsync and cron, easy.

    Good point, although immutability can be complex in our situation. If we switch to having the backup server pull the backups, we have to trust the backup server fully and it has to store all of the borg passphrases. With the setup we have now, each client only has to trust itself. A client could, in theory, run a borg prune and wipe out its own borg repo (but it can't wipe out any others due to path restrictions). We might need to make a script to detect drastic decreases in the size of a borg repo and notify us :smile:.
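
    If we do, it would probably just be a small cron job on the borg server comparing on-disk repo sizes between runs, since no passphrases are needed for that. A rough sketch (paths and the 20% threshold are placeholders, not a finished script):

        #!/usr/bin/env python3
        # Compare each repo's on-disk size against the size recorded on the
        # previous run and warn about big drops. Illustrative only.
        import json
        import sys
        from pathlib import Path

        REPO_ROOT = Path("/srv/borg")             # one subdirectory per client repo
        STATE_FILE = Path("/var/lib/borg-sizes.json")
        DROP_THRESHOLD = 0.20                     # warn if a repo shrinks by more than 20%

        def dir_size(path: Path) -> int:
            """Total size in bytes of all files under path (roughly du -sb)."""
            return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

        previous = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
        current = {repo.name: dir_size(repo) for repo in REPO_ROOT.iterdir() if repo.is_dir()}

        for name, size in current.items():
            old = previous.get(name)
            if old and size < old * (1 - DROP_THRESHOLD):
                # hook this up to mail/alerting; stderr keeps the sketch self-contained
                print(f"WARNING: repo {name} shrank from {old} to {size} bytes", file=sys.stderr)

        STATE_FILE.write_text(json.dumps(current))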

    Universal Layer LLC, a privacy conscious hosting provider
    Check us out @ ulayer.net / twitter.com/ulayer_net

  • @ulayer said:

    @Neoon said:
    Well, you let the backup machine pull the backups, so they can't be sniped, just in case.
    Whatever it is, say a cronjob making database dumps, the backup server comes a few minutes later, pulling out that dump and archiving it.

    Can be archived with rsync and cron, easy.

    Good point, although immutability can be complex in our situation. If we switch to having the backup server pull the backups, we have to trust the backup server fully and it has to store all of the borg passphrases. With the setup we have now, each client only has to trust itself. A client could, in theory, run a borg prune and wipe out its own borg repo (but it can't wipe out any others due to path restrictions). We might need to make a script to detect drastic decreases in the size of a borg repo and notify us :smile:.

    Of course, you give the backup server full root access.
    No you don't; it gets handed over in an under-privileged trade zone.

    No issues with that.
    I never thought about a client; it's all about servers I do have access to.

  • ulayer Hosting Provider OG
    edited December 2019

    @Neoon said:

    @ulayer said:

    @Neoon said:
    Well, you let the backup machine pull the backups, so they can't be sniped, just in case.
    Whatever it is, say a cronjob making database dumps, the backup server comes a few minutes later, pulling out that dump and archiving it.

    Can be archived with rsync and cron, easy.

    Good point, although immutability can be complex in our situation. If we switch to having the backup server pull the backups, we have to trust the backup server fully and it has to store all of the borg passphrases. With the setup we have now, each client only has to trust itself. A client could, in theory, run a borg prune and wipe out its own borg repo (but it can't wipe out any others due to path restrictions). We might need to make a script to detect drastic decreases in the size of a borg repo and notify us :smile:.

    Of course, you give the backup server full root access.
    No you don't; it gets handed over in an under-privileged trade zone.

    No issues with that.
    I never thought about a client; it's all about servers I do have access to.

    With borg you don't have to trust the borg server fully, due to encryption. The clients don't have root access to the backup server either, nor do the backup servers have root (or any) access to the clients. All data is encrypted client-side by whatever runs the borg create command (our hypervisors). That's essentially why we don't have to trust Wasabi either; they don't have the keys to our backups :smile:.
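
    The repos are created from the client side with encryption turned on, roughly like this (repo path and encryption mode shown as an example, not necessarily what we run):

        # run once per client, on the client (hypervisor); the passphrase never leaves it
        borg init --encryption=repokey-blake2 ssh://backupuser@borg.example.net/srv/borg/hv01

    With repokey the key sits inside the repo but is protected by the passphrase, so the borg server (and Wasabi) only ever hold ciphertext; keyfile mode keeps even the key file on the client.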

    Universal Layer LLC, a privacy conscious hosting provider
    Check us out @ ulayer.net / twitter.com/ulayer_net

  • Every server is running Borg and is configured to use a single endpoint (storage box @ HostHatch). This storage box pushes the backups to my homeserver one hour after creation.

    Next, I have a slice + slab in Las Vegas that I use to manually pull the backup every once in a while. The reason for the manual pull is that if my data gets compromised or altered, it's not automatically synchronized to all other copies, i.e. I only sync manually if I know my latest backup is a healthy one.
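
    The manual pull itself is nothing fancy, basically a one-liner along these lines (host and paths made up):

        # run by hand, only after checking that the latest backups look healthy
        rsync -a backup@storagebox.example.com:/home/borg-repos/ /srv/borg-mirror/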

  • Usually everything is done using borg with a few pre-scripts. Backup targets are BorgBase and/or a storage KVM from InceptionHosting.
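
    A pre-script is usually just the dump step that runs right before borg create, something like this (repo URL and paths are placeholders):

        # pre-script: dump databases, then hand the directory to borg
        mysqldump --all-databases | gzip > /var/backups/mysql-all.sql.gz
        borg create ssh://user@repo.example.com/./backups::{hostname}-{now} /etc /var/backups /var/www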
