Fixed the loop for authorized keys. Although I previously read that the
Ansible module can take keys in array format:
```yaml
key:
- key1
- key2
- ...
```
this does not seem to be the case.
Instead, we now do a 'sub-loop' through all the existing authorized_keys
entries in the data structure, running the task once for each key.
This also means we can simplify the 'when' condition to only check that
the data structure itself exists, not the individual key, since we only
loop once per existing key anyway.
More in-depth explanation on the subelements filter here:
https://docs.ansible.com/projects/ansible/latest/playbook_guide/playbooks_filters.html#combining-objects-and-subelements
Concise explanation of use here:
https://overflow.ducks.party/questions/56086290/how-can-i-traverse-nested-lists-in-ansible
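A minimal sketch of what the sub-loop looks like, assuming a `users` list
whose entries each carry an `authorized_keys` sub-list (the variable names
are assumptions about the data structure described above):
```yaml
# One task run per (user, key) pair via the subelements filter.
- name: Install authorized keys
  ansible.posix.authorized_key:
    user: "{{ item.0.name }}"
    key: "{{ item.1 }}"
  loop: "{{ users | subelements('authorized_keys', skip_missing=True) }}"
  when: users is defined
```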
One drawback:
we can now _not_ set the key handling in the module to be exclusive
(`exclusive: true` on the `authorized_key` module). As described in the
documentation, if a user has more than one key, each subsequent key would
overwrite the one before it.
I currently do not know how to fix this, but we are not supplying
exclusive keys, so it is fine for the moment.
Can be changed with `nfs_v4_only=false`; the variable defaults to `true`.
Information taken from: https://wiki.debian.org/NFSServerSetup
and applied directly through Ansible.
Currently _irreversible_: once we set the server to v4 only there is
NO ansible-supported playbook to reset it to serving all NFSv2/3/4
versions.
This has to be done manually, or could be included as a manually-run playbook.
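A rough sketch of what the Ansible side of the Debian wiki instructions
could look like; the options come from the linked wiki page, while file
paths and task names here are assumptions rather than the role's actual
tasks:
```yaml
# Disable NFSv2/3 so the kernel server only answers NFSv4 requests.
- name: Serve NFSv4 only
  when: nfs_v4_only | default(true)
  block:
    - name: Remove NFSv2/3 support from the kernel server options
      ansible.builtin.lineinfile:
        path: /etc/default/nfs-kernel-server
        regexp: '^RPCNFSDOPTS='
        line: 'RPCNFSDOPTS="-N 2 -N 3"'
      # nfs-server needs a restart afterwards (handler not shown)

    - name: Mask rpcbind, which is only needed for NFSv2/3
      ansible.builtin.systemd:
        name: "{{ item }}"
        masked: true
      loop:
        - rpcbind.service
        - rpcbind.socket
```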
Moved the jellyfin installation to 10.11.x, so we should now pin it to at
least that version. Since the 'latest' tag of the linuxserver container
image still points to 10.10.7, we cannot simply use that, so we pin the
exact version for now.
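For illustration, the pin could look roughly like this; the variable name
and the exact tag are assumptions (any 10.11.x tag of the linuxserver
image would do):
```yaml
# Hypothetical: pin an exact image tag instead of 'latest'
jellyfin_image: "lscr.io/linuxserver/jellyfin:10.11.0"
```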
The terraform module does not expect its file contents (`project_path`) in
the 'files/' folder like the core roles do; instead it looks for them
relative to the _invocation_ pwd.
So, for now it just resides in the root level of the repository and may
be moved from there to wherever it is more pertinent.
Additionally, we check for the existence of the OpenTofu binary (tofu),
and prefer that if it exists. Otherwise we fall back to the Terraform
binary.
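A sketch of the binary selection, assuming the `community.general.terraform`
module is being used (the project path shown is a placeholder):
```yaml
# Prefer an OpenTofu binary if one is on the PATH, otherwise let the
# module fall back to the terraform binary by omitting binary_path.
- name: Look for the OpenTofu binary
  ansible.builtin.command: which tofu
  register: tofu_check
  changed_when: false
  failed_when: false

- name: Apply the project, preferring tofu over terraform
  community.general.terraform:
    project_path: "{{ playbook_dir }}/terraform"   # placeholder path
    binary_path: "{{ (tofu_check.stdout | trim) if tofu_check.rc == 0 else omit }}"
    state: present
```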
Instead of installing authorized keys globally (same for everybody), we
pass in the authorized_keys variable per user, and thus the installation
also takes place per user.
This makes much more sense and works with minimal refactoring.
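One possible shape of the per-user variable, matching the loop sketched
further up (names and keys are placeholders):
```yaml
users:
  - name: alice                  # placeholder user
    authorized_keys:
      - "ssh-ed25519 AAAA... alice@laptop"
  - name: bob
    authorized_keys:
      - "ssh-ed25519 AAAA... bob@desktop"
      - "ssh-rsa AAAA... bob@backup-machine"
```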
Automatically set up the btrfs root and data filesystems, as well as the
external HDD.
By default this automation assumes a layout exactly like the one currently
on bob; it can, however, be adapted to any btrfs layout with the
`btrfs_mounts` configuration option.
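Purely as an illustration of the idea, an override might look something
like this; the devices, subvolume names and mountpoints are made up, and
the real key names of `btrfs_mounts` may differ:
```yaml
btrfs_mounts:
  - device: /dev/nvme0n1p2       # root disk (made-up device)
    subvolume: "@"
    mountpoint: /
  - device: /dev/nvme0n1p2
    subvolume: "@data"
    mountpoint: /data
  - device: /dev/sda1            # external HDD
    subvolume: "@backup"
    mountpoint: /mnt/backup
```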
Still a HACK: this should not be hard-coded, but perhaps installed as a
runnable script on localhost for the role (e.g. `scan-paperless`)
which discovers its scanner more dynamically.
Change the inclusion of backup containers so they actually work. They
check that restic is enabled globally, and that restic is enabled for
the individual stack they belong to. If either of the conditions is not
met they do not deploy.
This way we can simply enable restic globally with `restic_enable` and
by default all stacks will be backed up. But if we want to exclude
specific stacks from backups we can do so with the individual
`<role>_restic_enable: false` variable.
Finally found a good way of doing this with the help of the following
medium article: https://medium.com/opsops/is-defined-in-ansible-d490945611ae
which basically relies on default fallbacks instead.
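The resulting condition boils down to default fallbacks rather than
`is defined` checks, roughly like this (the `jellyfin` prefix and the task
file name are just examples):
```yaml
# Deploy the backup container only if backups are enabled globally and
# not explicitly disabled for this particular stack.
- name: Include restic backup container for this stack
  ansible.builtin.include_tasks: backup.yml
  when:
    - restic_enable | default(false)
    - jellyfin_restic_enable | default(true)
```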
Each role with outward-facing ingress needs depends on caddy, since it
requires the availability of the 'caddy' network, which is created in
that role.
Caddy in turn depends on docker.
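In role terms this is just a dependency chain in each role's
`meta/main.yml`, along these lines (role names are assumed to match the
ones used in this repo):
```yaml
# meta/main.yml of a role with outward-facing ingress
dependencies:
  - role: caddy      # provides the 'caddy' docker network
# and caddy's own meta/main.yml in turn lists:
#   dependencies:
#     - role: docker
```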
If our chosen backup repo is a local one, each restic container needs to
mount the local path as a volume, otherwise the data is stuck in the
container itself.
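For a local repository the bind mount could look roughly like this
(variable name, image and paths are assumptions):
```yaml
# Compose sketch: expose the local repository path inside the container
services:
  restic-backup:
    image: restic/restic
    volumes:
      - "{{ restic_repository }}:{{ restic_repository }}"
```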
The hostname will be passed through to any snapshots that are set up.
It is _not_ derived from the random docker container ID but is instead
the name of the _host_ on which docker is running (taken from ansible
facts).
The hostname in combination with the tag should point to the exact
host -> stack which is being backed up.
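A sketch of how that ends up on the snapshot; `--host` and `--tag` are
real restic flags, while the command line, paths and tag value are
assumptions:
```yaml
services:
  restic-backup:
    image: restic/restic
    command: >
      backup /data
      --host "{{ ansible_facts['hostname'] }}"
      --tag "{{ role_name }}"
```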
Notifications currently fire twice for each prune/check run, which may
need to be fixed.
Custom notification contents also cannot currently be passed.
Lastly, we should put identifying information into the notification body
(such as the hostname/container name the notification is relevant for).
Adapted from cloudserve-infrastructure, this implements a backup stack using
restic. The actual backups have to be implemented by the individual other
roles, but this sets up initialization, pruning and checking of a repository.
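Very roughly, the stack provides containers along these lines; service
names and the retention policy are placeholders:
```yaml
services:
  restic-init:
    image: restic/restic
    command: init                  # create the repository if missing
  restic-prune:
    image: restic/restic
    command: forget --prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
  restic-check:
    image: restic/restic
    command: check                 # verify repository integrity
```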
Explanation here:
https://github.com/qdm12/gluetun-wiki/blob/main/setup/advanced/vpn-port-forwarding.md
Whenever we receive a new forwarded port (around once a month?) we pass
it to qbit through its API. This may require enabling 'no auth for local
connections' in qbit.
It allows us to remove the complete port-manager docker container, which
did not work very well.
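Conceptually this is gluetun's port-forwarding 'up command' pushing the
new port into qBittorrent's `setPreferences` WebUI API, roughly like this;
the placeholder name, ports and shell invocation should be checked against
the linked wiki page:
```yaml
# Compose sketch: gluetun substitutes the forwarded port for {{PORTS}} and
# runs this command whenever the port changes (escape the braces if this
# string itself passes through an Ansible/Jinja template).
environment:
  VPN_PORT_FORWARDING_UP_COMMAND: >-
    /bin/sh -c 'wget -O-
    --post-data "json={\"listen_port\":{{PORTS}}}"
    http://127.0.0.1:8080/api/v2/app/setPreferences'
```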