Host backups
On erratic, we have a 2 x 4TB RAID1 array on which we have created a ZFS filesystem. This filesystem is mounted on backup.vm.nurd.space, and a set of scripts manages the backups. The architecture is file-based and pull-based, using rsync. The backup server holds a config file listing all hosts that are to be backed up. Each day, cron triggers a script that reads this config file and makes a full backup of every configured host. Cron also triggers a script that takes a daily snapshot of the ZFS filesystem. Daily snapshots are kept for a month; of the snapshots older than a month, all but the one taken on the first of the month are pruned.
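To see which daily snapshots are currently available, you can list the snapshots on the backup server. A minimal sketch, assuming the pool is named zbackup as in the scripts further down this page:

# List snapshots of the backup pool with creation time and space used.
zfs list -t snapshot -o name,creation,used | grep zbackup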
Adding/removing hosts from backup
To add or remove a host from backup, modify the file /etc/backup.hosts on the backup server. Each line of this file describes one host to back up as comma-separated values: the first column is the FQDN of the host, the second is the distro it runs. Currently, debian and openbsd are the supported platforms; any other value falls through to a generic set of excludes in run_backups.sh. Note that the host to be backed up needs the backup server's public key installed in its root account. This is included if you deploy the host as a ManagedVM.
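For illustration, the file could look like this, one host per line as <fqdn>,<platform> (the platform values shown here are assumptions; only the hostnames appear elsewhere on this page):

egg.vm.nurd.space,debian
nurdservices.lan.nurd.space,debian

If a host was not deployed as a ManagedVM, the public half of the backup key referenced in run_backups.sh can be installed by hand; the .pub suffix and the target hostname below are placeholders:

ssh-copy-id -i /root/.ssh/id_ed25519_backup.pub root@newhost.vm.nurd.space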
Schedules
- /usr/local/sbin/run_backups.sh runs every day at 0500
- /usr/local/sbin/manage_snapshots.sh runs every day at 2355
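In crontab syntax these schedules would look roughly as follows. This is an illustrative sketch, not the actual crontab on the backup server; the output redirection is an assumption that matches the single-run logfiles described under Logging:

0  5  * * * root /usr/local/sbin/run_backups.sh > /var/log/run_backups.log 2>&1
55 23 * * * root /usr/local/sbin/manage_snapshots.sh > /var/log/manage_snapshots.log 2>&1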
Restores
If you need to restore a file, log in to the backup server and navigate to the /backup folder. In here, you will find a subdirectory for each host that is in the backup. Navigate to the subdirectory of the host in question and find the file(s) you are looking for. If you need to restore a file from a snapshot, navigate to the /backup/.zfs/snapshot directory, select the snapshot you want, and navigate to the host directory underneath this snapshot dir.
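For example, to push a single file from a snapshot back to a host, something along these lines should work; the snapshot name, host, and file path are placeholders:

# On the backup server: snapshots are read-only, so copy the file back over ssh.
cd /backup/.zfs/snapshot/01-06-2025/egg.vm.nurd.space
scp -i /root/.ssh/id_ed25519_backup ./etc/fstab root@egg.vm.nurd.space:/etc/fstab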
Logging
The run_backups.sh and manage_snapshots.sh scripts below write their output to a logfile. These logfiles are stored under /var/log/run_backups.log and /var/log/manage_snapshots.log respectively. These files only contain the output of the *last* run.
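To check whether last night's run covered all hosts, grep the logfile for the per-host line that run_backups.sh emits:

grep 'running backup for' /var/log/run_backups.log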
Statistics
To get some idea of the consumption of the backup server, a statistics-gathering script is deployed. On a daily basis, this will report the space in use and available on the pool, the top 10 consumers of /backup, and the top 10 consumers of the shared /home. The resulting report is stored under /backup/backup-stats.txt.
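The report is plain text and can simply be viewed on the backup server:

less /backup/backup-stats.txt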
Scripts
run_backups.sh
#!/bin/bash

PATH='/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin'
export PATH

BACKUP_HOSTS="/etc/backup.hosts"
KNOWN_HOSTS="/root/.ssh/known_hosts"
BACKUP_SSH_KEY="/root/.ssh/id_ed25519_backup"

function log {
    logger -t $(basename ${0}) ${@}
    echo "[$(date)] ${@}"
}

function do_backup {
    HOST="${1}"
    PLATFORM="${2}"

    grep -q "${HOST}" ${KNOWN_HOSTS}
    if [[ ${?} -eq 1 ]]; then
        log "fetching ssh host keys for ${HOST}"
        ssh-keyscan ${HOST} 2>/dev/null >> ${KNOWN_HOSTS}
    fi

    log "running backup for ${HOST}"

    # We have a couple of machines that have /home mounted from harmony
    # While it is okay to have one of these in the backup, we dont need
    # multiple copies, so create an exclude for all but the first user
    # of this share (egg.vm.nurd.space)
    EXCLUDE_HOME=""
    if [[ ${HOST} == "nurdservices.lan.nurd.space" ]]; then
        EXCLUDE_HOME="--exclude=/home"
    fi

    case "${PLATFORM}" in
        "debian")
            rsync -avpl \
                -e "ssh -i ${BACKUP_SSH_KEY}" \
                --exclude='/backup' \
                --exclude='/music' \
                --exclude='/mnt/mp3' \
                --exclude='/mnt/pve' \
                --exclude='/dev' \
                --exclude='/proc' \
                --exclude='/run' \
                --exclude='/sys' \
                --exclude='/tmp' \
                --exclude='/lost+found' \
                --exclude='/var/cache' \
                --exclude='/var/crash' \
                --exclude='/var/lock' \
                --exclude='/var/run' \
                --exclude='/var/spool' \
                --exclude='/var/tmp' \
                --exclude='/var/lib/containers' \
                --exclude='/var/lib/podman' \
                --exclude='/var/lib/libvirt/images' \
                --exclude='/var/lib/kubelet/pods' \
                --exclude='/var/lib/docker/containers' \
                --exclude='/var/lib/docker' \
                --exclude='/var/lib/rancher/k3s/agent/containerd' \
                ${EXCLUDE_HOME} \
                --delete-after \
                ${HOST}:/ /backup/${HOST}/
            ;;
        "openbsd")
            rsync -avpl \
                -e "ssh -i ${BACKUP_SSH_KEY}" \
                --exclude='/dev' \
                --exclude='/tmp' \
                --exclude='/var/cache' \
                --exclude='/var/run' \
                --exclude='/var/spool' \
                --exclude='/var/sysmerge' \
                --exclude='/var/syspatch' \
                --exclude='/var/tmp' \
                --delete-after \
                ${HOST}:/ /backup/${HOST}/
            ;;
        *)
            rsync -avpl \
                -e "ssh -i ${BACKUP_SSH_KEY}" \
                --exclude='/proc' \
                --exclude='/sys' \
                --exclude='/dev' \
                --exclude='/run' \
                --exclude='/var/run' \
                --exclude='/tmp' \
                --exclude='/var/tmp' \
                --delete-after \
                ${HOST}:/ /backup/${HOST}/
            ;;
    esac
}

# Create backup of hosts on this site
cat ${BACKUP_HOSTS} | while read DATA; do
    HOST="$(echo ${DATA} | cut -d, -f1)"
    PLATFORM="$(echo ${DATA} | cut -d, -f2)"
    do_backup "${HOST}" "${PLATFORM}"
done
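If a backup needs to be re-run outside the normal schedule, the script can be started by hand on the backup server. Redirecting to the logfile as below is an assumption that mirrors the single-run logging described above:

/usr/local/sbin/run_backups.sh > /var/log/run_backups.log 2>&1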
manage_snapshots.sh
#!/bin/bash

PATH='/sbin:/usr/sbin:/usr/local/sbin:/bin:/usr/bin:/usr/local/bin'
export PATH

TODAY="$(date +%d-%m-%Y)"
VOLUMES="$(zfs list | awk '/zbackup\//{print $1}')"

function log {
    logger -t $(basename ${0}) ${@}
    echo "[$(date)] ${@}"
}

# Create snapshot for today
for VOLUME in ${VOLUMES}; do
    SNAPSHOT_NAME="${VOLUME}@${TODAY}"
    zfs list -t snapshot 2>/dev/null | grep -q "${SNAPSHOT_NAME}"
    if [[ ${?} -eq 1 ]]; then
        log "creating snapshot ${SNAPSHOT_NAME}"
        zfs snapshot "${SNAPSHOT_NAME}"
    fi
done

# Cleanup last months snapshots
DAYNUM=$(date +%d)
if [[ ${DAYNUM} -eq 1 ]]; then
    MONTHNUM=$(date +%m)
    YEAR=$(date +%Y)
    LASTMONTH=$(expr ${MONTHNUM} - 1)
    if [[ ${MONTHNUM} -eq 1 ]]; then
        # In January, "last month" is December of the previous year
        LASTMONTH=12
        YEAR=$(expr ${YEAR} - 1)
    fi
    FILTER="$(printf "%02d-${YEAR}" ${LASTMONTH})"
    SNAPSHOTS=$(zfs list -t snapshot | awk "/${FILTER}/{print \$1}" | grep -v "01-${FILTER}")
    for SNAPSHOT_NAME in ${SNAPSHOTS}; do
        log "removing snapshot ${SNAPSHOT_NAME}"
        zfs destroy "${SNAPSHOT_NAME}"
    done
fi
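After a run you can verify that today's snapshots exist; the dd-mm-YYYY name matches the TODAY variable in the script:

zfs list -t snapshot | grep "$(date +%d-%m-%Y)"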
backup_stats.sh
#!/bin/bash

BACKUP_POOL_SIZE_STATS="$(zpool list | grep zbackup)"
IN_USE="$(echo "${BACKUP_POOL_SIZE_STATS}" | awk '{ print $3 }')"
AVAIL="$(echo "${BACKUP_POOL_SIZE_STATS}" | awk '{ print $4 }')"

echo -e "Backup statistics as of $(date)\n"
echo "Total size: ${IN_USE}"
echo "Available: ${AVAIL}"

echo -e "\nTop 10 consumers of backup (size in MB)"
cd /backup
du -sm * | sort -rn | head -10

echo -e "\nTop 10 consumers of shared /home (size in MB)"
cd /backup/egg.vm.nurd.space/home
du -sm * | sort -rn | head -10
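The script writes its report to stdout, so it is presumably run with its output redirected to the path mentioned under Statistics. The same redirection can be used to regenerate the report by hand; the script location is an assumption, matching the other scripts:

/usr/local/sbin/backup_stats.sh > /backup/backup-stats.txt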