Daily backups to S3 with Restic and systemd timers
Restic is a modern backup program that can archive your files to many different cloud and network storage locations, and restore them in case of disaster. This article will show you how to back up one or more user directories to S3 cloud storage, using restic and systemd.
Choose an S3 vendor and create a bucket
Restic supports many different storage mechanisms, but this article and its associated script will focus only on S3 storage (AWS, or any S3-API-compatible endpoint). You can choose from many different storage vendors: AWS, DigitalOcean, Backblaze, Wasabi, Minio, etc.
You’ll need to gather the following information from your S3 provider:
- S3_BUCKET - the name of the S3 bucket
- S3_ENDPOINT - the domain name of the S3 server
- S3_ACCESS_KEY_ID - the S3 access key ID
- S3_SECRET_ACCESS_KEY - the S3 secret key
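For example, the gathered values might look like this (these are hypothetical placeholders, not real credentials; note that the endpoint is a bare domain name, without https://):
## Example values only -- substitute your own provider's details:
S3_BUCKET=my-backups
S3_ENDPOINT=s3.us-west-1.wasabisys.com
S3_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
S3_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx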
Example with Minio
Minio is an open-source, self-hosted S3 server. You can easily install Minio on your Docker server. See the instructions for d.rymcg.tech and then install minio.
Follow the instructions for creating a bucket, policy, and credentials.
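If you prefer the command line over the Minio web console, the bucket and credentials can also be created with the mc client. This is only a sketch; the alias, bucket, user, and key names below are made-up examples, and the policy subcommands differ between mc releases:
## Hypothetical example using the Minio client (mc); all names are placeholders:
mc alias set mybackupserver https://minio.example.com ROOT_ACCESS_KEY ROOT_SECRET_KEY
mc mb mybackupserver/my-backups
mc admin user add mybackupserver my-backups-user MY_SECRET_KEY
## Then attach a policy granting the new user access to the bucket
## ('mc admin policy attach' on newer mc releases, 'mc admin policy set' on older ones).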
Example with Wasabi
Wasabi is an inexpensive cloud storage vendor with an S3-compatible API, and a pricing and usage model perfect for backups.
- Create a Wasabi account and log in to the console.
- Click on Buckets in the menu, then click Create Bucket. Choose a unique name for the bucket. Select the region, then click Create Bucket.
- Click on Policies in the menu, then click Create Policy. Enter any name for the policy, but it's easiest to name it the same thing as the bucket. Copy and paste the full policy document below into the policy form, replacing BUCKET_NAME with your chosen bucket name (there are two instances to replace in the body).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::BUCKET_NAME"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
        }
    ]
}
- Once the policy document is edited, click Create Policy.
- Click on Users in the menu, then click Create User.
- Enter any username you like, but it's easiest to name the user the same as the bucket.
- Check the type of access as Programmatic.
- Click Next.
- Skip the Groups screen.
- On the Policies page, click the dropdown called Attach Policy To User and find the name of the policy you created above.
- Click Next.
- Review and click Create User.
- View the Access and Secret keys. Click Copy Keys To Clipboard.
- Paste the keys into a temporary buffer in your editor to save them; you will need to copy them into the script that you download in the next section.
- You will need to know the S3 endpoint URL for Wasabi later, which depends on the Region you chose for the bucket (eg. s3.us-west-1.wasabisys.com).
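Before wiring the new keys into the backup script, you can optionally sanity-check them with the AWS CLI, if you have it installed. The endpoint below assumes the us-west-1 Wasabi region; substitute your own bucket and region:
## Optional: verify that the new keys can list the bucket:
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
aws s3 ls s3://BUCKET_NAME --endpoint-url https://s3.us-west-1.wasabisys.com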
Download and edit the backup script
Install restic with your package manager.
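On most distributions the package is simply named restic, for example:
## Install restic with your distribution's package manager:
sudo pacman -S restic       ## Arch Linux
sudo apt install restic     ## Debian / Ubuntu
sudo dnf install restic     ## Fedora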
Here is an all-in-one script that can set up and run your restic backups automatically on a daily basis, all from your user account (no root needed).
- Download the script from this direct link.
- You can save it wherever you want, named whatever you want, but one suggestion is to put it in ${HOME}/.config/restic_backup/ and rename it to my_backup.sh, or another recognizable name for you.
- Alternatively, you may copy and paste the entire script into a new file, as follows:
#!/bin/bash
### restic_backup.sh
### See the blog post: https://blog.rymcg.tech/blog/linux/restic_backup/
## Restic Backup Script for S3 cloud storage (and compatible APIs).
## Install the `restic` package with your package manager.
## Copy this script to any directory, and change the permissions:
## chmod 0700 restic_backup.sh
## Put all your configuration directly in this script.
## Consider creating an alias in your ~/.bashrc: alias backup=<path-to-this-script>
## Edit the variables below (especially the ones like change-me-change-me-change-me):
## WARNING: This will include plain text passwords for restic and S3
## SAVE A COPY of this configured script to a safe place in the case of disaster.
## Which local directories do you want to backup?
## Specify one or more directories inside this bash array (paths separated by space):
## Directories that don't exist will be skipped:
RESTIC_BACKUP_PATHS=(${HOME}/Documents ${HOME}/Music ${HOME}/Photos ${HOME}/Sync)
## Create a secure encryption passphrase for your restic data:
## WRITE THIS PASSWORD DOWN IN A SAFE PLACE:
RESTIC_PASSWORD=change-me-change-me-change-me
## Enter the bucket name, endpoint, and credentials:
S3_BUCKET=change-me-change-me-change-me
S3_ENDPOINT=s3.us-west-1.wasabisys.com
S3_ACCESS_KEY_ID=change-me-change-me-change-me
S3_SECRET_ACCESS_KEY=change-me-change-me-change-me
### How often do you want to backup? Use systemd timer OnCalendar= notation:
### https://man.archlinux.org/man/systemd.time.7#CALENDAR_EVENTS
### (Backups may occur at a later time if the computer is turned off)
## Hourly on the hour:
# BACKUP_FREQUENCY='*-*-* *:00:00'
## Daily at 3:00 AM:
# BACKUP_FREQUENCY='*-*-* 03:00:00'
## Every 10 minutes:
# BACKUP_FREQUENCY='*-*-* *:0/10:00'
## Systemd also knows aliases like 'hourly', 'daily', 'weekly', 'monthly':
BACKUP_FREQUENCY=daily
## Restic data retention (prune) policy:
# https://restic.readthedocs.io/en/stable/060_forget.html#removing-snapshots-according-to-a-policy
RETENTION_DAYS=7
RETENTION_WEEKS=4
RETENTION_MONTHS=6
RETENTION_YEARS=3
### How often to prune the backups?
## Use systemd timer OnCalendar= notation
### https://man.archlinux.org/man/systemd.time.7#CALENDAR_EVENTS
PRUNE_FREQUENCY=monthly
## The tag to apply to all snapshots made by this script:
## (Default is to use the full command path name)
BACKUP_TAG=${BASH_SOURCE}
## These are the names and paths for the systemd services, you can leave these as-is probably:
BACKUP_NAME=restic_backup.${S3_ENDPOINT}-${S3_BUCKET}
BACKUP_SERVICE=${HOME}/.config/systemd/user/${BACKUP_NAME}.service
BACKUP_TIMER=${HOME}/.config/systemd/user/${BACKUP_NAME}.timer
PRUNE_NAME=restic_backup.prune.${S3_ENDPOINT}-${S3_BUCKET}
PRUNE_SERVICE=${HOME}/.config/systemd/user/${PRUNE_NAME}.service
PRUNE_TIMER=${HOME}/.config/systemd/user/${PRUNE_NAME}.timer
commands=(init now trigger forget prune enable disable status logs prune_logs snapshots restore help)
run_restic() {
export RESTIC_PASSWORD
export AWS_ACCESS_KEY_ID=${S3_ACCESS_KEY_ID}
export AWS_SECRET_ACCESS_KEY=${S3_SECRET_ACCESS_KEY}
(set -x; restic -v -r s3:https://${S3_ENDPOINT}/${S3_BUCKET} $@)
}
init() { # : Initialize restic repository
run_restic init
}
now() { # : Run backup now
## Test if running in a terminal and have enabled the backup service:
if [[ -t 0 ]] && [[ -f ${BACKUP_SERVICE} ]]; then
## Run by triggering the systemd unit, so everything gets logged:
trigger
## Not running interactive, or haven't run 'enable' yet, so run directly:
elif run_restic backup --tag ${BACKUP_TAG} ${RESTIC_BACKUP_PATHS[@]}; then
echo "Restic backup finished successfully."
else
echo "Restic backup failed!"
exit 1
fi
}
trigger() { # : Run backup now, by triggering the systemd service
(set -x; systemctl --user start ${BACKUP_NAME}.service)
echo "systemd is now running the backup job in the background. Check 'status' later."
}
prune() { # : Remove old snapshots from repository
run_restic prune
}
forget() { # : Apply the configured data retention policy to the backend
run_restic forget --tag ${BACKUP_TAG} --group-by "paths,tags" \
--keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS \
--keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
}
snapshots() { # : List all snapshots
run_restic snapshots
}
restore() { # [SNAPSHOT] [ROOT_PATH] : Restore data from snapshot (default 'latest')
SNAPSHOT=${1:-latest}; ROOT_PATH=${2:-/};
if test -d ${ROOT_PATH} && [[ ${ROOT_PATH} != "/" ]]; then
echo "ERROR: Non-root restore path already exists: ${ROOT_PATH}"
echo "Choose a non-existing directory name and try again. Exiting."
exit 1
fi
read -p "Are you sure you want to restore all data from snapshot '${SNAPSHOT}' (y/N)? " yes_no
if [[ ${yes_no,,} == "y" ]] || [[ ${yes_no,,} == "yes" ]]; then
run_restic restore -t ${ROOT_PATH} ${SNAPSHOT}
else
echo "Exiting." && exit 1
fi
}
enable() { # : Schedule backups by installing systemd timers
if loginctl show-user ${USER} | grep "Linger=no"; then
echo "User account does not allow systemd Linger."
echo "To enable lingering, run as root: loginctl enable-linger $USER"
echo "Then try running this command again."
exit 1
fi
mkdir -p $(dirname $BACKUP_SERVICE)
cat <<EOF > ${BACKUP_SERVICE}
[Unit]
Description=restic_backup $(realpath ${BASH_SOURCE})
After=network.target
Wants=network.target
[Service]
Type=oneshot
ExecStart=$(realpath ${BASH_SOURCE}) now
ExecStartPost=$(realpath ${BASH_SOURCE}) forget
EOF
cat <<EOF > ${BACKUP_TIMER}
[Unit]
Description=restic_backup $(realpath ${BASH_SOURCE}) daily backups
[Timer]
OnCalendar=${BACKUP_FREQUENCY}
Persistent=true
[Install]
WantedBy=timers.target
EOF
cat <<EOF > ${PRUNE_SERVICE}
[Unit]
Description=restic_backup prune $(realpath ${BASH_SOURCE})
After=network.target
Wants=network.target
[Service]
Type=oneshot
ExecStart=$(realpath ${BASH_SOURCE}) prune
EOF
cat <<EOF > ${PRUNE_TIMER}
[Unit]
Description=restic_backup $(realpath ${BASH_SOURCE}) monthly pruning
[Timer]
OnCalendar=${PRUNE_FREQUENCY}
Persistent=true
[Install]
WantedBy=timers.target
EOF
systemctl --user daemon-reload
systemctl --user enable --now ${BACKUP_NAME}.timer
systemctl --user enable --now ${PRUNE_NAME}.timer
systemctl --user status ${BACKUP_NAME} --no-pager
systemctl --user status ${PRUNE_NAME} --no-pager
echo "You can watch the logs with this command:"
echo " journalctl --user --unit ${BACKUP_NAME}"
}
disable() { # : Disable scheduled backups and remove systemd timers
systemctl --user disable --now ${BACKUP_NAME}.timer
systemctl --user disable --now ${PRUNE_NAME}.timer
rm -f ${BACKUP_SERVICE} ${BACKUP_TIMER} ${PRUNE_SERVICE} ${PRUNE_TIMER}
systemctl --user daemon-reload
}
status() { # : Show the last and next backup/prune times
BACKUP_NAME=restic_backup.${S3_ENDPOINT}-${S3_BUCKET}
PRUNE_NAME=restic_backup.prune.${S3_ENDPOINT}-${S3_BUCKET}
echo "Restic backup paths: (${RESTIC_BACKUP_PATHS[@]})"
echo "Restic S3 endpoint/bucket: ${S3_ENDPOINT}/${S3_BUCKET}"
journalctl --user --unit ${BACKUP_NAME} --since yesterday | \
grep -E "(Restic backup finished successfully|Restic backup failed)" | \
sort | awk '{ gsub("Restic backup finished successfully", "\033[1;33m&\033[0m");
gsub("Restic backup failed", "\033[1;31m&\033[0m"); print }'
echo "Run the 'logs' subcommand for more information."
(set -x; systemctl --user list-timers ${BACKUP_NAME} ${PRUNE_NAME} --no-pager)
run_restic stats
}
logs() { # : Show recent service logs
set -x
journalctl --user --unit ${BACKUP_NAME} --since yesterday
}
prune_logs() { # : Show prune logs
set -x
journalctl --user --unit ${PRUNE_NAME}
}
help() { # : Show this help
echo "## restic_backup.sh Help:"
echo -e "# subcommand [ARG1] [ARG2]\t# Help Description" | expand -t35
for cmd in "${commands[@]}"; do
annotation=$(grep -E "^${cmd}\(\) { # " ${BASH_SOURCE} | sed "s/^${cmd}() { # \(.*\)/\1/")
args=$(echo ${annotation} | cut -d ":" -f1)
description=$(echo ${annotation} | cut -d ":" -f2)
echo -e "${cmd} ${args}\t# ${description} " | expand -t35
done
}
main() {
if [[ $(stat -c "%a" ${BASH_SOURCE}) != "700" ]]; then
echo "Incorrect permissions on script. Run: "
echo " chmod 0700 $(realpath ${BASH_SOURCE})"
exit 1
fi
if ! which restic >/dev/null; then
echo "You need to install restic." && exit 1
fi
if test $# = 0; then
help
else
CMD=$1; shift;
if [[ " ${commands[*]} " =~ " ${CMD} " ]]; then
${CMD} $@
else
echo "Unknown command: ${CMD}" && exit 1
fi
fi
}
main $@
- Review and edit all of the variables at the top of the file, and save the file.
- Change the permissions on the file to be executable and private:
chmod 0700 ${HOME}/.config/restic_backup/my_backup.sh
- Consider saving a copy of the final script in your password manager; you will need it to recover your files in the event of a disaster.
Usage
- To make using the script easier, create this BASH alias in your ~/.bashrc:
## 'backup' is an alias to the full path of my personal backup script:
alias backup=${HOME}/.config/restic_backup/my_backup.sh
- Restart the shell or close/reopen your terminal.
- Run the script alias, to see the help screen:
backup
## restic_backup.sh Help:
# subcommand [ARG1] [ARG2]         # Help Description
init                               # Initialize restic repository
now                                # Run backup now
forget                             # Apply the configured data retention policy to the backend
prune                              # Remove old snapshots from repository
enable                             # Schedule backups by installing systemd timers
disable                            # Disable scheduled backups and remove systemd timers
status                             # Show the last and next backup/prune times
logs                               # Show recent service logs
snapshots                          # List all snapshots
restore [SNAPSHOT] [ROOT_PATH]     # Restore data from snapshot (default 'latest')
help                               # Show this help
Initialize the restic repository
backup init
Run the first backup manually
backup now
Install the systemd service
This will schedule the backup to automatically run daily:
backup enable
Check the status
This will show you the last time and the next time that the timers will run the backup job:
backup status
List snapshots
backup snapshots
Restore from the latest snapshot
## WARNING: this will reset your files to the backed up versions!
backup restore
Restore from a different snapshot
This will restore the snapshot (xxxxxx) to an alternative directory (~/copy):
backup restore xxxxxx ~/copy
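The script's restore subcommand restores every path in the snapshot. If you only need a few files back, you can also call restic directly, using the same repository URL and environment variables that the script exports. This is only a sketch; the bucket, endpoint, and include path are example values:
## Hypothetical example: restore a single directory from the latest snapshot:
export RESTIC_PASSWORD=your-restic-passphrase
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
restic -r s3:https://s3.us-west-1.wasabisys.com/my-backups \
    restore latest --target ~/partial-restore --include ${HOME}/Documents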
Prune the repository
This will clean up storage space, deleting old snapshots that fall outside your data retention policy. (This is scheduled to run automatically once a month.)
backup prune
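Note that in restic, cleaning up is really a two-step process: forget removes old snapshot records according to the retention policy, and prune deletes the data that is no longer referenced by any snapshot. The script runs forget automatically after every backup, and prune on the monthly timer, but you can also run both by hand:
backup forget   ## apply the retention policy (removes old snapshot records)
backup prune    ## delete the data no longer referenced by any snapshot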
Security considerations
Be sure not to share your edited script with anyone else, because it now contains your Restic password and S3 credentials!
The script has permissions of 0700 (-rwx------), so only your user account (and root) can read the configuration. However, this also means that any other program your user runs can potentially read this file.
To limit the possibility of leaking the passwords, you may consider running this script in a new user account dedicated to backups. You will also need to take care that this second user has the correct permissions to read all of the files that are to be backed up.
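One possible sketch of that setup (the user name and paths below are placeholders): create the dedicated account, allow its systemd user instance to linger so the timers run without a login session, and grant it read-only access to the data, for example with POSIX ACLs:
## Hypothetical setup for a dedicated backup account (run as root):
useradd --create-home backupuser
loginctl enable-linger backupuser
## Grant read-only access to the directories to back up (paths are examples):
setfacl -R -m u:backupuser:rX /home/alice/Documents
setfacl -R -d -m u:backupuser:rX /home/alice/Documents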
Systemd timers are way better than cron
The backup timers are set to run OnCalendar=daily, which means they run every single day at midnight. But what if you're running backups on a laptop, and your laptop wasn't turned on at midnight? Well, that's what Persistent=true is for. Persistent timers remember when they last ran, and if your laptop turns on and finds that it is past due for running one of the timers, it will run it immediately. So you'll never miss a scheduled backup just because you were offline.
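If you are unsure what a particular OnCalendar= expression means, systemd can normalize it and show the next time it would elapse:
## Show how systemd interprets a calendar expression and when it next elapses:
systemd-analyze calendar daily
systemd-analyze calendar '*-*-* 03:00:00'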
Frequently s/asked/expected/ questions
How do I know it's working?
I hope this script will be reliable for you, but I make no guarantees. You
should check backup status
and backup logs
regularly to make sure it’s still
working and stable for you in the long term. It might be nice if this script
would email you if there were an error, but this has not been implemented yet.
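If you want some kind of notification in the meantime, one possible approach (not part of this script) is a systemd OnFailure= hook: a small user unit that fires whenever the backup service fails. The unit below is only an example; notify-send requires a graphical session, and because backup enable rewrites the generated service file, the OnFailure= line would need to be re-added after each enable:
## Hypothetical ~/.config/systemd/user/backup-failed-notify.service:
[Unit]
Description=Notify on restic backup failure
[Service]
Type=oneshot
ExecStart=/usr/bin/notify-send "Restic backup failed!"
With that unit in place, adding OnFailure=backup-failed-notify.service to the [Unit] section of the generated backup service would trigger the notification whenever a backup run fails.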
You should play a mock-disaster scenario: use a second computer and test that your backup copy of your backup script works (You did save a backup of your script in your password manager, right??):
## After copying the script onto a second computer ....
## Test restoring and copying all backed up files into a new directory:
chmod 0700 ./my_backup.sh
./my_backup.sh restore latest ~/restored-files
Now you should see all your backed-up files in ~/restored-files. If you do, you have evidence that the backup and restore procedures are working.
I lost my whole computer, how do I get my files back?
Copy your backup script (the one you saved in your password manager) to any
computer, and run the restore
command:
## After installing your backup script and BASH alias onto a new computer ....
## Restore all files to the same directories they were in before:
backup restore
Can I move or rename the script?
Yes, you can name it whatever you like, and save it in any directory. But there are some things you need to know about moving it later:
- The full path of the script is used as the restic backup tag (shown via backup snapshots).
- This tag is an identifier, so that you can differentiate between backups made by this script vs. backups made by running the restic command manually.
- If you change the path of the script, you will change the backup tag going forward.
- Make sure you update your BASH alias to the new path.
- The full path of the script is written to the systemd service file, so if you change the name or the path, you need to re-enable the service:
## Reinstall the systemd services after changing the script path:
backup enable
Can I move my backups to a new bucket name or endpoint?
Yes, after copying your bucket data to the new endpoint/name, you will also need to disable and then re-enable the systemd timers:
- The name of the systemd service and timer is based upon the bucket name and the S3 endpoint, via the BACKUP_NAME variable, which is set to restic_backup.${S3_ENDPOINT}-${S3_BUCKET} by default. Changing either of these variables therefore necessitates changing the name of the systemd service and timers.
- Note: if you only need to change the S3 access or secret keys, but the bucket and endpoint stay the same, there's no need to do anything besides editing the script.
Before making the change, disable the existing timers:
backup disable
Now edit your script to account for the updated bucket name and/or endpoint.
After making the change, re-enable the timers:
backup enable
Check the status:
backup status
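The copying of the repository data itself is outside the scope of the script. One way to do it is with rclone, assuming you have configured a remote for both the old and new providers (the remote and bucket names below are placeholders):
## Hypothetical example: copy all objects from the old bucket to the new one:
rclone sync old-remote:old-bucket new-remote:new-bucket --progress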
Why suggest the path ~/.config/restic_backup/my_backup.sh?
- ~/.config is the default XDG Base Directory, which is defined as "Where user-specific configurations should be written (analogous to /etc)."
- Normally, scripts wouldn't go into ~/.config (nor /etc), but this script is a hybrid config file and program script, so it counts as a config file.
- Each project makes its own subdirectory in ~/.config, using the project name, eg. restic_backup. By creating a sub-directory, this allows you to save (and use) more than one backup script. (Note: to do so, you would need to create an additional BASH alias with a different name.)
- my_backup.sh implies that the script contains personal information and should not be shared. Both of which are true!
- If you share your ~/.config publicly (some people I've seen share this entire directory on GitHub), you should choose a different path for your script!
- The name and path of the script do not functionally matter.