Since we can't look into the variable `pythoncheck.rc` in check mode (the registered command never runs there, so the attribute doesn't exist), we skip the `when` check when running in check mode; the installation cannot run in that mode under any circumstance anyway.
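A minimal sketch of the resulting pattern (task names and the install module are illustrative, not necessarily the repo's actual ones):

```yaml
- name: Check for an existing python installation
  ansible.builtin.command: python3 --version
  register: pythoncheck
  changed_when: false
  failed_when: false

- name: Install python
  ansible.builtin.package:
    name: python3
    state: present
  when:
    # pythoncheck.rc does not exist in check mode, so short-circuit first.
    - not ansible_check_mode
    - pythoncheck.rc != 0
```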
Automatically set up the btrfs root and data filesystems, as well as the external HDD.
By default this automation assumes a layout exactly as on the current bob host; it can be adapted to any btrfs layout with the `btrfs_mounts` configuration option, however.
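One possible shape of the option (the key names and devices here are assumptions; the role's actual schema may differ):

```yaml
btrfs_mounts:
  - device: /dev/mapper/cryptroot
    mountpoint: /
    subvolume: "@"
  - device: /dev/mapper/cryptroot
    mountpoint: /data
    subvolume: "@data"
  - device: /dev/sda1          # external HDD
    mountpoint: /mnt/external
    subvolume: "@backup"
```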
Instead of keeping the file statically (and in plain text) in the repo itself, we simply query `pass` for it.
The syntax is slightly cumbersome since ansible (afaik) does not allow a similarly easy variable-enabled lookup as it does for become passwords, so we also wrapped it in a justfile to avoid typing it each time.
The command line uses `cat` to receive the password as a 'file' on stdin.
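A sketch of such a justfile recipe (the `pass` entry name and the exact ansible flag are assumptions; `<(cat)` is what turns the piped password into a 'file' path, and it requires bash):

```just
set shell := ["bash", "-cu"]

# Deploy, feeding the secret from pass to ansible via stdin.
deploy:
    pass show ansible/vault | ansible-playbook site.yml \
        --vault-password-file <(cat)
```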
This is still a HACK: the command should not be hard-coded but perhaps installed as a runnable script on localhost by the role (e.g. `scan-paperless`), which would detect its scanner more dynamically.
Change the inclusion of backup containers so they actually work. They check that restic is enabled globally and that restic is enabled for the individual stack they belong to. If either condition is not met, they do not deploy.
This way we can simply enable restic globally with `restic_enable`, and by default all stacks will be backed up. But if we want to exclude specific stacks from backups, we can do so with the individual `<role>_restic_enable = False` variable.
Finally found a good way of doing this with the help of the following Medium article: https://medium.com/opsops/is-defined-in-ansible-d490945611ae which basically makes use of `default()` fallbacks instead of `is defined` checks.
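The condition then looks roughly like this (shown for a hypothetical `paperless` stack; the task file name is an assumption, the variable names follow the scheme above):

```yaml
- name: Include the restic backup container for this stack
  ansible.builtin.include_tasks: backup.yml
  when:
    - restic_enable | default(true) | bool
    - paperless_restic_enable | default(true) | bool
```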
Each role with outward-facing ingress needs depends on caddy, since they all rely on the availability of the 'caddy' network, which is set up in that role.
Caddy in turn depends on docker.
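Expressed as role dependencies, the chain could look like this (role names besides caddy are illustrative):

```yaml
# roles/somestack/meta/main.yml
dependencies:
  - role: caddy          # provides the shared 'caddy' docker network

# roles/caddy/meta/main.yml
dependencies:
  - role: geerlingguy.docker
```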
If we only tag the geerlingguy docker role as `docker`, we do not always install the necessary Python dependencies for later working with the Ansible docker compose and network roles.
By applying the `docker` tag to them as well, we can target `--tags=docker` on a playbook run and be sure that all later roles will have the correct dependencies.
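Sketch of the playbook wiring (the pip role and the exact package names are assumptions about how the Python dependencies get installed):

```yaml
roles:
  - role: geerlingguy.docker
    tags: [docker]
  - role: geerlingguy.pip
    vars:
      pip_install_packages:
        - docker
        - docker-compose
    tags: [docker]   # ensures --tags=docker also pulls in the python deps
```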
If our chosen backup repo is a local one, each restic container needs to mount the local path as a volume; otherwise the data is stuck inside the container itself.
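In a compose template this could look like the following (the image, service name, and repository variable name are assumptions):

```yaml
services:
  backup:
    image: mazzolino/restic
    volumes:
      - app_data:/data:ro
      # Only needed for local repos: expose the repo path on the host,
      # otherwise the snapshots live and die with the container.
      - "{{ restic_repository }}:{{ restic_repository }}"
```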
Pass the hostname through to any snapshots that are set up.
The hostname is _not_ derived from the random docker container string but instead takes the name of the _host_ on which docker is running (from ansible facts).
The hostname in combination with the tag should point to the correct host -> stack that is being backed up.
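One way to achieve this is to override the container's hostname in the compose template, since restic attributes snapshots to the hostname it sees inside the container (sketch; the service name is assumed):

```yaml
services:
  backup:
    # Replace docker's random container hostname with the real host's
    # name so snapshots are attributed to the correct machine.
    hostname: "{{ ansible_facts['hostname'] }}"
```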
It currently notifies twice for each prune/check run, which may need to be fixed. Also, custom notification contents cannot currently be passed.
Lastly, we should put identifying information into the notification body (such as the hostname/container name for which the notification is relevant).