Arch Linux based File Server, Borg Backup

The Arch Wiki lists several options for synchronization and backup packages. After reviewing them, I settled on Borg Backup, which is in the official Arch Community repository as the borg package. It is written in Python and C (via Cython), and has a sizable developer community (the "Borg Collective"), so I'm reasonably confident it will be supported for years to come. It uses SSH as its encrypted network transport, so I could leverage my knowledge of the SSH protocol to create secure channels for backing up my various devices.

Borg comes with many nice features, including incremental backups (storing only what has changed in the Borg repository on the server filesystem), encryption of data in transit and at rest, compression using several standard, performant algorithms, and deduplication. For instance, the Borg repositories for my laptop (named ferrum, Latin for iron) hold over 5TiB of archived /home data (from a machine with only a 512GiB NVMe SSD), yet thanks to compression and deduplication they occupy roughly 200GiB on disk.
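If you're curious what these numbers look like for your own setup, borg info reports the original, compressed, and deduplicated sizes for a repository or a single archive (the repository URL below matches my setup; the archive name is just an example of my naming scheme):

```shell
# Size and deduplication statistics for the whole repository
borg info backup@tennessine:repo

# Statistics for a single archive
borg info backup@tennessine:repo::2022-09-01_020000
```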

Borg uses a client/server model, and can optionally encrypt data at rest. I chose a "push" model, where the clients initiate a connection to the server over SSH. After reviewing the borg-init manual (well after I had established the client repositories), I realized that --encryption=authenticated does not encrypt the data: it only authenticates it with a MAC, which makes tampering evident but provides no protection from attackers reading it. Only use this option with servers that you trust completely (which is the case for tennessine, but not for my Backblaze B2 storage). Only clients with the passphrase can mount and restore these data. Unfortunately, Borg does not yet have the capability to change the encryption mode after a repository is established.

I created a new, encrypted repository for my ferrum laptop, which I intend to replicate for each client, using a passphrase and a key file. I did try initializing the repositories with encryption as a non-root user, but I ran into issues with that user reading sensitive files: too many "permission denied" errors in the log, even after trying to grant it read access through POSIX ACLs. OpenSSH also refuses to load private keys (both host and client keys) if they are readable by anyone other than the owner. So I started over, initializing the repository as root with key file encryption, and using a password manager to store the passphrase and key.

Initialization

Server setup

First, I created the Btrfs subvolume mounted on /data/backup/borg/encrypted (see the Btrfs article). Then, I installed the borg package. Next, on tennessine I created the system user backup, and changed ownership of /data/backup/borg/encrypted to this system user:

useradd --groups disk --system --create-home backup
chown --recursive backup: /data/backup/borg

I created the home directory because I will need the /home/backup/.ssh/authorized_keys file (more on this below).

Next, for my remote systems, I needed to create a reverse SSH tunnel from tennessine, so the remote systems don't need to be allowed through my home firewall in order to initialize and create their Borg repositories and archives. Remember, this is for remote hosts, not on my home network. If you do not have any remote hosts to back up, you can skip this SSH setup.

This will allow the clients to connect to the loopback address (127.0.0.1 or localhost), which will then be tunneled over SSH to tennessine. I created two scripts for this, one that resides on tennessine, and one that lives on the remote client system. On tennessine, I stored the script in /usr/local/sbin/<hostname>.sh:

#!/usr/bin/env bash

hostname=$(basename "${0}" .sh)
if ! ssh "${hostname}" '/usr/local/sbin/tennessine.sh'; then
    # the connection is not up, start it
    # (no exec here: exec would replace the shell and make the status check below unreachable)
    ssh -NR 1492:localhost:22 "${hostname}"
    _ret=$?
    if [[ ${_ret} -ne 0 ]]; then
        echo "Tunnel setup to ${hostname} failed!"
        exit ${_ret}
    fi
fi

The local port (in this case 1492) is arbitrary; you'll want to pick a port that no other service on the client uses (otherwise the reverse tunnel setup will fail). If the hostname is not a fully qualified domain name (FQDN), you will need to create an entry in ~/.ssh/config for the user that runs borg init and borg create (I run them as root in a systemd service, so this is in /root/.ssh/config):

Host <hostname>
    Hostname <fqdn or IP address>
    Port <port if not SSH (TCP/22)>

The other contents of ~/.ssh/config should remain intact; simply add this stanza. The script above also calls /usr/local/sbin/tennessine.sh on the remote host; more on this in the client setup below.
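If you script your setup, the stanza can be appended idempotently so repeated runs don't duplicate it. A minimal sketch, where the host name, FQDN, and SSH_CONFIG path are all placeholders for your own values:

```shell
#!/usr/bin/env bash
# Idempotently add a Host stanza to an SSH config file.
# SSH_CONFIG, the host name, and the FQDN below are placeholders.
config="${SSH_CONFIG:-$(mktemp -d)/config}"
mkdir -p "$(dirname "${config}")"
touch "${config}"

# Only append the stanza if it is not already present
if ! grep -q '^Host ferrum$' "${config}"; then
    cat >> "${config}" <<'EOF'

Host ferrum
    Hostname ferrum.example.net
    Port 22
EOF
fi
```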

That's about it for the setup on tennessine; most of the Borg work is initiated and processed from the clients (at least in the way I'm using it).

Client setup

SSH key setup

On each client, I created an ed25519 public and private key pair for the root user:

ssh-keygen -t ed25519

I did this for the root user since I planned to back up /etc, /root, and /home on each client. Next, I copied the id_ed25519.pub public key to /home/backup/.ssh/authorized_keys on tennessine. In order to limit what the clients can do through this connection, I set up each entry as follows:

command="cd /data/backup/borg/encrypted/ferrum.ceti; borg serve --restrict-to-path /data/backup/borg/encrypted/ferrum.ceti",restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKAHyIa5lLuVH6pmpVfdUAJ3XRzNBu5Fowglruf9QWwf root@ferrum

Replace ferrum.ceti and /data/backup/borg/encrypted/ with your own values, along with the SSH key type and public key for your user (in my case, root@ferrum). See the sshd manual page and the Borg documentation for details on the syntax used here. Note that the backup user runs the borg serve command remotely; no service or daemon is constantly listening for connections, which makes it simpler to secure. The clients will call borg init and borg create across the SSH connection to store data to disk on tennessine.
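To keep these entries consistent across several clients, the line can be assembled with a small sketch like this (the client name and public key below are illustrative placeholders, not my real key):

```shell
#!/usr/bin/env bash
# Assemble a restricted authorized_keys entry for one Borg client.
# The client name and public key are placeholders for illustration.
repo_base="/data/backup/borg/encrypted"
client="ferrum.ceti"
pubkey="ssh-ed25519 AAAAexamplekey root@ferrum"

# Force borg serve into the client's repository path and restrict the key
entry="command=\"cd ${repo_base}/${client}; borg serve --restrict-to-path ${repo_base}/${client}\",restrict ${pubkey}"
echo "${entry}"
```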

The final pieces on my remote systems involve the reverse SSH tunnel setup (see the information in the server setup above). There are two files I need to create or edit. The first is /usr/local/sbin/tennessine.sh:

#!/usr/bin/env bash

# Exit 0 if the reverse tunnel is listening on 127.0.0.1:1492, nonzero otherwise
ss -plnt | grep -q '127.0.0.1:1492'

All this script does is test whether the tunnel is up (change port 1492 to whichever port you chose in the server setup above), and set the exit status accordingly (nonzero exit status indicates failure, in the UNIX tradition). This is actually run from tennessine to test whether the server needs to reestablish the tunnel.

The next file that needs to be created/edited is ~/.ssh/config (for the user running borg init and borg create) where we define that the hostname tennessine should refer to localhost (the tunnel):

Host tennessine
    Hostname 127.0.0.1
    Port 1492

As always, there may be other contents of ~/.ssh/config. Those should be left intact, merely add the above stanza. Alternatively, you could set an entry in /etc/hosts, but then in all the Borg repository base URLs you'd need to include the port (1492).
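For reference, if you do go the /etc/hosts route, Borg's ssh:// URL form carries the explicit port. The two commands below would be equivalent under my tunnel setup:

```shell
# With the ~/.ssh/config stanza above (the port comes from the stanza):
borg list backup@tennessine:repo

# Without the stanza, using Borg's ssh:// URL form with an explicit port
# (the /./ makes the path relative to the backup user's home directory):
borg list ssh://backup@tennessine:1492/./repo
```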

Borg init

Before I could create my first archive, I needed to initialize the Borg repositories, as root. I did so on each client, using the following command:

export BORG_PASSPHRASE='RandomlyGeneratedPassword123#'
borg init --encryption=keyfile-blake2 backup@tennessine:repo
unset BORG_PASSPHRASE

You have the option of either the Blake2 or SHA256 algorithm for the hash/MAC. Since all of my Intel CPU client systems are a bit older and lack the sha_ni CPU flag (SHA hardware acceleration), I chose Blake2. This creates the key file in ~/.config/borg/keys, which I need to back up separately (to Git and my password manager). If you don't want a key file, you can use the repokey options, which store the key inside the repository, protected by the passphrase. See the borg-init manual for specific details.

If you follow this example with a key file (regardless of the hash/digest/MAC algorithm), you'll want to store your key in a safe place. You can export it right after initialization with the following commands:

borg key export --qr-html backup@tennessine:repo \
    > root@ferrum_keyfile_qr.html
borg key export --paper backup@tennessine:repo \
    > root@ferrum_keyfile_paper.txt

You only need to do one or the other, but I did both. The HTML version presents a QR code you can scan to download the key to your mobile phone. The paper key is plain text, which you will likely need to type or paste in to recover the key. I stored both as separate secure notes in my cloud-based password manager, as well as in a private Git repository.
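Recovering the key later is the reverse operation. A sketch, assuming you also keep a plain export (borg key import reads that directly, while the paper version is typed back in interactively with --paper):

```shell
# A plain export is the simplest form to re-import later
borg key export backup@tennessine:repo root@ferrum_keyfile

# On a rebuilt client, restore it into ~/.config/borg/keys
borg key import backup@tennessine:repo root@ferrum_keyfile

# Or type the paper key back in interactively
borg key import --paper backup@tennessine:repo
```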

A bit about my password manager. I had used LastPass for several years, but decided to abandon it when its CLI tool, lpass, had not been updated in three years. Also, in August 2022, one of their developer accounts was breached. I am familiar with 1Password, but since my employer uses it I wanted to switch to something that wouldn't conflict. I settled on Bitwarden, which appears to have a fully featured CLI tool (bw). I want a cloud password manager so that if I lose this laptop, my phone, and everything else in my life, I can still bootstrap and retrieve my backed-up files (from Git and Backblaze B2) with only a few hours' effort or less.

In order to integrate with the Bitwarden CLI, I followed the Systemd CREDENTIALS setup, which is available in the latest version of systemd (Arch is on systemd 251.4 as of this writing). Unfortunately bw unlock does not currently accept the API key (though bw login does), so I use systemd-creds to encrypt my Bitwarden master password. It is strongly recommended to have at least /var/lib/systemd on an encrypted block device (e.g., LUKS), and even better to have a FIDO2 or TPM device. First, I temporarily saved my master password to a ramdisk/tmpfs filesystem (/run or /tmp are good candidates). In the following sections I will show how I integrated Borg with systemd and Bitwarden.

Archive Creation

Once both the server and clients were set up, I was ready to call borg create from the clients to create my first archive. Since most of my clients also run Btrfs, I included logic to ensure the latest Btrfs snapshots are mounted; my /usr/local/sbin/borg.sh script checks this, then creates the archive in the initialized repositories on tennessine. The snapshot mounting itself lives in its own script (also run as root), hooked in via an override file for the snapper-timeline.service systemd unit; more on this below. The root user then merely calls borg create after ensuring the snapshots are mounted.

Here is the main borg.sh, which uses the bw Bitwarden CLI tool to retrieve the repository passphrase:

#!/usr/bin/env zsh

#set -x
_res=0
_err=()

home_dir="/mnt/snapshots/home"
root_dir="/mnt/snapshots/root"
BORG_REPO="backup@tennessine:repo"
export BW_ITEM='01234567-89ab-cdef-0123-456789abcdef'
export HOME=/root

if BW_SESSION=$(bw unlock --passwordfile "$BWPATH" --raw); then
    if BORG_PASSPHRASE="$(bw --session ${BW_SESSION} get password ${BW_ITEM})"; then
        export BORG_PASSPHRASE # so borg can see it

        # Test that latest snapshots are mounted
        if ! grep -q ${home_dir} /proc/mounts; then
            (( _res += 1 ))  # $? is 0 after a successful negation, so count explicitly
            _err+="/home snapshot not mounted!"
        fi

        if ! grep -q ${root_dir} /proc/mounts; then
            (( _res += 1 ))
            _err+="/ snapshot not mounted!"
        fi
        
        if [[ ${_res} -gt 0 ]]; then  # one of the above commands failed
            echo "Not all snapshots mounted!  Return code = ${_res}; Error array:  ${_err[@]}"
            exit ${_res}
        else
            # backup everything
            if ! borg create --verbose --stats ${BORG_REPO}::{now:%Y-%m-%d_%H%M%S} /mnt/snapshots/{root/{etc,root},home}; then
                (( _res += 1 ))  # $? is 0 after a successful negation
                _err+="backup /etc, /root, and /home snapshots failed"
            else
                # Check the archive just created
                ## Get the last archive in the repo
                last_archive=$(borg list ${BORG_REPO} | tail -n 1 | awk '{print $1}')
                ## Check the archive
                if ! borg check --verbose ${BORG_REPO}::${last_archive}; then
                    (( _res += 1 ))
                    _err+="check of last encrypted archive failed"
                fi
            fi
        fi
    else
        _res=$(( ${_res} + ${?} ))
        _err+="Could not retrieve Borg passphrase from Bitwarden!  Exiting..."
    fi
else
    _res=$(( ${_res} + ${?} ))
    _err+="Could not unlock Bitwarden!  Exiting..."
fi

if bw --quiet unlock --check; then
    if ! /usr/bin/bw --quiet lock; then
        (( _res += 1 ))
        _err+="Could not lock Bitwarden!"
    fi
fi

unset BORG_PASSPHRASE
unset BW_SESSION

if [[ ${_res} -gt 0 ]]; then  # one of the above commands failed
    echo "Borg Backup failed!  Return code = ${_res}; Error array: "
    for err in ${_err[@]}; do
        echo ${err}
    done
    exit ${_res}
fi
#set +x

This script requires special setup in the borg.service systemd unit file. The borg package in Arch Linux does not provide a service unit file, so I had to create one from scratch:

# /etc/systemd/system/borg.service
[Unit]
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
SetCredentialEncrypted=bwp: \
        k6iUCUh0RJCQyvL8k8q1UyAAAAABAAAADAAAABAAAADdjdKwl13Dx6ZQpc4AAAAAgAAAA \
        AAAAAALACMA0AAAACAAAAAAfgAgQAX4TESIFNnpBrX8dokmypaE97P45bnqpVu2TtFrbN \
        sAEHAFkdzz0LjTlbQnmZDl7VJtrTgPoj0caf617TNILFJd6Vcuv9vrsLUIoCCO6RFa9sm \
        cuE8j+w3DvbclrN/shw/4+2HqR+53ICfxV8t3WcljgyCC/m5Zd615IABOAAgACwAAABIA \
        ID8H3RbsT7rIBH02CIgm/Gv1ukSXO3DMHmVQkDG0wEciABAAIK/tMGsd28TZZYzhQLpMn \
        wfyk9l6kROmPYI2s46im71aPwfdFuxPusgEfTYIiCb8a/W6RJc7cMweZVCQMbTARyIAAA \
        AAsLX5FIl2Q09hFFQ/PlgkPSRLS99eizTqg05nlblvYDkaLKhaXvZwll1417fdT0KaxVF \
        duc8QG07HwlPTtmdNVd0beWAwnFzB
Environment=BWPATH=%d/bwp
ExecCondition=/usr/bin/bash -xc 'systemctl is-active borg-verify.service && exit 1 || exit 0'
ExecStart=/usr/local/sbin/borg.sh

The SetCredentialEncrypted= contents were created by putting my Bitwarden master password into a plaintext file (in retrospect, I should have put this on a ramdisk filesystem like /tmp). Then I ran the following command as root:

# systemd-creds encrypt --pretty --name=bwp /tmp/bwmaster.pass -

I then copied the output of this command verbatim into the systemd service unit file, borg.service. Once that was safely stored in the unit file, I securely deleted the file from disk:

# shred -uz /tmp/bwmaster.pass

See the systemd.exec manual page, as well as the Systemd CREDENTIALS documentation, for further details on how I set this up.
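To make the hand-off concrete: at service start, systemd decrypts SetCredentialEncrypted= into a file under $CREDENTIALS_DIRECTORY (which is what %d expands to), so BWPATH simply names a plain file the script can read. A minimal simulation of that flow, with a fake directory and an obviously fake password:

```shell
#!/usr/bin/env bash
# Simulate systemd's credential hand-off. In the real unit, systemd
# decrypts SetCredentialEncrypted=bwp:... into $CREDENTIALS_DIRECTORY/bwp;
# here we fake that directory with mktemp for illustration.
CREDENTIALS_DIRECTORY="$(mktemp -d)"
printf 'correct horse battery staple' > "${CREDENTIALS_DIRECTORY}/bwp"

# Mirrors Environment=BWPATH=%d/bwp from the unit file
BWPATH="${CREDENTIALS_DIRECTORY}/bwp"

# borg.sh then runs: bw unlock --passwordfile "$BWPATH" --raw
master_password="$(cat "${BWPATH}")"
echo "${master_password}"
```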

As for mounting the latest snapshots, I created mount-latest-snapshot.sh, which runs as ExecStartPost=/usr/local/sbin/mount-latest-snapshot.sh in an override for the package-provided snapper-timeline.service (sudo systemctl edit snapper-timeline.service). Here are the contents of that script:

#!/usr/bin/env bash

#set -x
# Set Btrfs subvolume IDs
root_snapshots_subvol_id="361"
home_snapshots_subvol_id="362"
latest_root_snapshot_id=$(sudo btrfs subvolume list -p / | grep "parent ${root_snapshots_subvol_id}" | tail -n 1 | grep -Po '^ID \d+' | tr -d '[ID ]')
latest_home_snapshot_id=$(sudo btrfs subvolume list -p / | grep "parent ${home_snapshots_subvol_id}" | tail -n 1 | grep -Po '^ID \d+' | tr -d '[ID ]')

# Set up variables
btrfs_disk="/dev/mapper/luks"
home_dir="/mnt/snapshots/home"
root_dir="/mnt/snapshots/root"
last_root_snapshot=$(ls -tr /.snapshots | tail -n 1)
last_home_snapshot=$(ls -tr /home/.snapshots | tail -n 1)

_ret=0
_err=()


# unmount previous root snapshot
if grep -q "${root_dir}" /proc/mounts; then
    if ! umount "${root_dir}"; then
        (( _ret += 1 ))  # $? is 0 after a successful negation, so count explicitly
        _err+=("Could not unmount source directory: ${root_dir}!")
    fi
fi

# mount latest root snapshot
if ! mount -o ro,compress=zstd,subvolid=${latest_root_snapshot_id} "${btrfs_disk}" "${root_dir}"; then
    (( _ret += 1 ))
    _err+=("Could not mount latest snapshot to ${root_dir}!")
    exit ${_ret}
fi


# unmount last home snapshot
if grep -q "${home_dir}" /proc/mounts; then
    if ! umount "${home_dir}"; then
        (( _ret += 1 ))
        _err+=("Could not unmount source directory: ${home_dir}!")
    fi
fi

# mount latest home snapshot
if ! mount -o ro,compress=zstd,subvolid=${latest_home_snapshot_id} "${btrfs_disk}" "${home_dir}"; then
    (( _ret += 1 ))
    _err+=("Could not mount latest snapshot to ${home_dir}!")
fi

if [[ ${_ret} -ne 0 ]]; then
    # something above failed, exit with nonzero status
    echo "Mounting latest snapshots failed!  Return code:  ${_ret}, Error array:  ${_err[@]}" >&2
    exit ${_ret}
fi

# If we're here, latest root and home snapshots have been mounted
#set +x

My intention is to replicate this setup on all of my hosts, not just ferrum. Now that I have all files from ferrum writing into the same repository, I can reproduce this on the other clients. Some modification on other clients is definitely necessary, as not all of them have /home or the root partition / on Btrfs subvolumes. I also set up tennessine itself as a client, writing to the backup@tennessine:repo repository, so I can recover /etc, /root, and /home from cloud backup should tennessine completely fail.

Restoring Files

Direct From Borg

This process should be followed to restore files if the Borg repository (in my case, on tennessine) is intact: that is, the administrator has not restored the repositories on the file server disk from cloud backup or from any location other than where the repositories were initialized and archives created. It also means the information in ~/.config/borg is up to date for the user that created the repositories and archives, and no manipulation of this configuration directory is necessary. Note that all of these options assume the BORG_PASSPHRASE environment variable contains the repository passphrase (possibly retrieved from the appropriate password manager), and that any encryption keys have been imported into the local Borg configuration.

There are basically two options: extract or mount. The former uses the following syntax, and restores the specified archive into the current directory. Be sure to change directory (cd) to the desired location before running borg extract, as borg currently doesn't allow specifying an alternate destination directory:

borg extract backup@tennessine:repo::<ARCHIVE> [PATH [PATH [...]]]

You can use borg list backup@tennessine:repo to list the available archives, and optionally provide paths for specific files/directories to restore. Alternatively, you can mount the repository as a FUSE filesystem (this requires the python-llfuse package) to explore it interactively (which can be slow, as the client usually needs to cache the directory structure locally, though it does so transparently):

borg mount backup@tennessine:repo /path/to/mount/point

From Cloud Backup

This should only be necessary if something catastrophic happens, resulting in complete loss or failure of the file server (in my case, tennessine). This could include complete failure of all hard drives in the system, or some catastrophic event (e.g., flood or fire, or even petty theft) destroying the file server. You may also want to execute this as a disaster recovery exercise (DRE), which is exactly what I did to make sure my cloud backup solution was viable. The rest of this section will assume we're performing a DRE.

First, follow the Backblaze article for backing up and restoring files from Backblaze B2 cloud storage. We will restore the files to a different server, in this case my old QNAP NAS sodium. You could also use rsync to copy the Borg data files to a different location on your file server disk, or to another host entirely (assuming you have the free space).

The Borg FAQ has an entry describing this very situation, but with a few warnings. Borg doesn't currently integrate with any cloud services; the only remote network transport is SSH, which Backblaze B2 doesn't support as far as I can tell, and Borg doesn't have a mechanism to clone a Borg server's repositories on disk.

The first warning, "Make sure you do the copy/sync while no backup is running, see borg with-lock about how to do that," is addressed by having the Backblaze B2 tool back up the latest Btrfs/snapper snapshot of the Borg data subvolume. The snapshot is mounted read-only, and all Borg clients write to the main Btrfs subvolume (not the snapshot).

The next warning is "you must not run borg against multiple instances of the same repo (like repo and copy-of-repo) as that would create severe issues." In the DRE we're only testing that the restored Borg repositories are readable and intact; the restored repositories will not be written to. (In my case, on sodium, I don't really trust the disks anyway; they're twelve years old by now.) In an actual disaster or catastrophe, the original source Borg repositories no longer exist or are unrecoverable, so the dire warnings wouldn't directly apply: if we are updating the restored repositories, the originals are gone. Keeping only one live copy avoids having the same local cache for both repositories, and sidesteps the possibility of AES counter reuse entirely.

To begin the DRE, I stopped the backblaze.timer on tennessine and the borg.timer units on my client system (ferrum):

sudo systemctl disable --now backblaze.timer # on tennessine
sudo systemctl disable --now borg.timer # on ferrum, my test client

This was to ensure the Backblaze B2 bucket was not actively being written to during the download, and that no Borg process was updating caches or AES counters on the client. In an actual disaster scenario, disabling backblaze.timer is not necessary, as the server likely no longer exists (or is unavailable). If the client is lost as well, hopefully you have a backup of ~/.config/borg that isn't in B2, or at least is recoverable somewhere else (a different B2 bucket, Git, a safe deposit box, a cloud password manager, etc.). The next step is to transfer the entire Backblaze bucket to sodium, a process that took twelve hours (608GiB at ~25MB/s). If you're just copying locally on the same machine, or to another machine on your local network, you can use rsync, cp, rclone, or a similar tool.
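If you take the local rsync route, a plain copy of the repository tree while no backup is running is sufficient (the destination path below is hypothetical):

```shell
# Copy the Borg repository tree to another disk or host; run this only
# while no borg create/prune is writing to the repository
rsync --archive --progress \
    /data/backup/borg/encrypted/ /mnt/restore/borg/encrypted/
```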

The next step is very important: back up the ~/.config/borg directory on the client before proceeding! This directory contains the current cache, security information (including keys), and other important metadata. The step after this clears some of that information, so without a backup you won't be able to return to business as usual (BAU) with your original repositories once the DRE is over.

WARNING: THIS STEP DELETES DATA, SO MAKE SURE YOU HAVE A BACKUP BEFORE EXECUTING! The bottom of the Borg FAQ entry covers the next steps. Note that the borg config repo id command only works with local repositories, not remote ones. If your original repository is remote, look at the ~/.config/borg/security/<repo_id>/location file for the repository name to find the matching repo_id, then use that repo_id in the following commands:

rm ~/.config/borg/security/<repo_id>/manifest-timestamp
borg delete --cache-only backup@tennessine:repo

Note that the repository name here is the original repository (in this example, backup@tennessine:repo); change it to the full name of your original repository. If you have multiple repositories from the same client, repeat this step for each repository you wish to restore.

Now we can tell borg on the client to mount the restored Borg repository (in my case, backup@sodium:repo):

borg mount backup@sodium:repo /mnt/borg/repo
Warning: Attempting to access a previously unknown encrypted repository!
Do you want to continue? [yN] y

At this point everything was looking good; I was able to mount the restored Borg repository. Navigating was a little slow, as the client needs to scan each archive and load it into the cache before you can actually browse it. This is more the nature of FUSE filesystems over network connections than it is Borg's fault.

In a real DRE you would likely want to test whether the restored repository is intact, with borg check --verify-data on the client. Here's a script I've set up to run once a month that does just that:

#!/usr/bin/env zsh

BORG_REPO='backup@tennessine:repo'
export BW_ITEM='01234567-89ab-cdef-0123-456789abcdef'

_res=0
_err=()

# Necessary for bw not to throw an exception (otherwise it can't read
# ~/.config/Bitwarden\ CLI/data.json)
export HOME=/root

if BW_SESSION=$(bw unlock --passwordfile "${BWPATH}" --raw); then
    if BORG_PASSPHRASE=$(bw --session ${BW_SESSION} get password "${BW_ITEM}"); then
        export BORG_PASSPHRASE
        if ! borg check --verbose --verify-data ${BORG_REPO}; then
            (( _res += 1 ))  # $? is 0 after a successful negation
            _err+="repo verification failed!"
        fi
    else
        _res=$(( ${_res} + ${?} ))
        _err+="Could not retrieve Borg passphrase from Bitwarden!"
    fi
else
    _res=$(( ${_res} + ${?} ))
    _err+="Could not unlock Bitwarden!"
fi

if bw --quiet unlock --check ; then
    if ! bw --quiet lock; then
        (( _res += 1 ))
        _err+="Could not lock Bitwarden!"
    fi
fi

unset BORG_PASSPHRASE
unset BW_SESSION

if [[ ${_res} -gt 0 ]]; then
    echo "Borg verification failed!  Result code: ${_res}, Error array:  "
    for err in ${_err[@]}; do
        echo ${err}
    done
    exit ${_res}
fi


For the DRE, you'll want to run the borg check --verify-data command manually against the restored repository. Again, this uses the systemd credentials created for the borg.service systemd unit file. The borg-verify.service file looks like this:

# /etc/systemd/system/borg-verify.service
[Unit]
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
SetCredentialEncrypted=bwp: \
        k6iUCUh0RJCQyvL8k8q1UyAAAAABAAAADAAAABAAAADdjdKwl13Dx6ZQpc4AAAAAgAAAA \
        AAAAAALACMA0AAAACAAAAAAfgAgQAX4TESIFNnpBrX8dokmypaE97P45bnqpVu2TtFrbN \
        sAEHAFkdzz0LjTlbQnmZDl7VJtrTgPoj0caf617TNILFJd6Vcuv9vrsLUIoCCO6RFa9sm \
        cuE8j+w3DvbclrN/shw/4+2HqR+53ICfxV8t3WcljgyCC/m5Zd615IABOAAgACwAAABIA \
        ID8H3RbsT7rIBH02CIgm/Gv1ukSXO3DMHmVQkDG0wEciABAAIK/tMGsd28TZZYzhQLpMn \
        wfyk9l6kROmPYI2s46im71aPwfdFuxPusgEfTYIiCb8a/W6RJc7cMweZVCQMbTARyIAAA \
        AAsLX5FIl2Q09hFFQ/PlgkPSRLS99eizTqg05nlblvYDkaLKhaXvZwll1417fdT0KaxVF \
        duc8QG07HwlPTtmdNVd0beWAwnFzB
Environment=BWPATH=%d/bwp
ExecStart=/usr/local/sbin/borg-verify-data.sh

And that's it for my Borg setup! I've explained how to set up a server for Borg, how to initialize Borg repositories and create archives, how to restore files from Borg, and how to restore the Borg server files after a catastrophic disaster! The next article will be about setting up Backblaze B2 cloud storage; I'm going to repeat that setup myself, since I just recently set up encrypted Borg repositories on ferrum.

NEXT STEPS

This series continues:

Please leave feedback below if you have any comments or questions!