The NFS server is configured as v4-only by default; this can be changed with
`nfs_v4_only=false` (the option defaults to true).
Information taken from: https://wiki.debian.org/NFSServerSetup
and applied directly through Ansible.
Currently _irreversible_: once we set the server to v4-only, there is NO
Ansible-supported playbook to reset it to serving all NFSv2/3/4 versions.
This has to be done manually, or could be added as a manually-run playbook.
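For a host that should keep serving older NFS versions, the flag can be
overridden in its host_vars -- a minimal sketch, with the file path and host
name purely illustrative:

```yaml
# host_vars/legacy-nas.yml (hypothetical host) -- keep NFSv2/3 available
nfs_v4_only: false
```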
Moved the jellyfin installation to 10.11.x, so we should now pin at least
that version. Also, since the 'latest' tag of the linuxserver container
image still points to the 10.10.7 container, we can't simply use that;
instead we pin the exact version for now.
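A minimal sketch of how the pin could look in the role defaults; the
variable names and the exact 10.11 patch tag are illustrative, not the
actual values in the repo:

```yaml
# defaults/main.yml of the jellyfin role (names and tag illustrative)
jellyfin_image: "lscr.io/linuxserver/jellyfin"
jellyfin_version: "10.11.0"    # pin an exact release instead of 'latest'
jellyfin_container_image: "{{ jellyfin_image }}:{{ jellyfin_version }}"
```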
Pipelining speeds up playbook execution. It _can_ have some negative effects
on 'sudo' execution, and specifically requires that `requiretty` is not
enabled in the sudoers file.
Since this seems to be the case by default on Debian distributions, I am
trying to switch to pipelining for the time being.
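A minimal sketch of turning it on via the connection variable in group_vars
(it could equally live in ansible.cfg); the file path is illustrative:

```yaml
# group_vars/all.yml -- enable SSH pipelining for all managed hosts
# (requires that 'requiretty' is not set in sudoers, which Debian omits by default)
ansible_pipelining: true
```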
The terraform module does not expect its file contents (`project_path`) in
the 'files/' folder like the core roles do; instead it looks for them
relative to the _invocation_ pwd.
So for now the project just resides at the root level of the repository and
may be moved from there to wherever it is more pertinent.
Additionally, we check for the existence of the OpenTofu binary (`tofu`) and
prefer it if it exists. Otherwise we fall back to the Terraform binary.
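Roughly, the detection could look like the sketch below; the project path
and task layout are illustrative, not the exact tasks in the repo:

```yaml
# Prefer the OpenTofu binary if it is on PATH, otherwise let the module
# fall back to the default terraform binary (paths illustrative).
- name: Check for the OpenTofu binary
  ansible.builtin.command: which tofu
  register: tofu_check
  changed_when: false
  failed_when: false
  delegate_to: localhost

- name: Apply the project with whichever binary was found
  community.general.terraform:
    project_path: "{{ playbook_dir }}"   # illustrative; currently the repo root
    binary_path: "{{ (tofu_check.stdout | trim) if tofu_check.rc == 0 else omit }}"
    state: present
  delegate_to: localhost
```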
Instead of installing authorized keys globally (the same for everybody), we
pass in the `authorized_keys` variable per user, so the installation also
takes place per user.
This makes much more sense and works with minimal refactoring.
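A sketch of the per-user wiring; the user names, keys, and variable layout
are made up for illustration:

```yaml
# Install each user's own keys instead of one global list (data illustrative)
- name: Install authorized keys per user
  ansible.posix.authorized_key:
    user: "{{ item.0.name }}"
    key: "{{ item.1 }}"
  loop: "{{ users | subelements('authorized_keys') }}"
  vars:
    users:
      - name: alice
        authorized_keys:
          - "ssh-ed25519 AAAA...example alice@laptop"
      - name: bob
        authorized_keys:
          - "ssh-ed25519 AAAA...example bob@desktop"
```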
Since we can't look into the variable `pythoncheck.rc` in check mode, as it
doesn't exist there, we skip that part of the `when` check when running in
check mode -- the installation cannot run in that mode under any
circumstance anyway.
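As a sketch (task names and the exact condition are illustrative of the
idea, not the repo's literal tasks): the `ansible_check_mode` leg
short-circuits before `pythoncheck.rc` is ever evaluated, and a raw task
cannot execute in check mode anyway.

```yaml
# Bootstrap python on a bare host (sketch; names illustrative)
- name: Probe for an existing python interpreter
  ansible.builtin.raw: test -e /usr/bin/python3
  register: pythoncheck
  changed_when: false
  failed_when: false

- name: Install python3 when the probe failed
  ansible.builtin.raw: apt-get update && apt-get install -y python3
  # in check mode the probe never ran, so rc does not exist; the first leg
  # short-circuits the expression, and check mode skips the raw task anyway
  when: ansible_check_mode or pythoncheck.rc != 0
```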
Automatically set up the btrfs root and data filesystems, as well as the
external HDD.
By default this automation assumes a layout exactly like the one currently
on bob; it can be adapted to any btrfs layout with the `btrfs_mounts`
configuration option, however.
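The shape of `btrfs_mounts` is easiest to see with an example; the devices,
subvolume names and mountpoints below are purely illustrative of a bob-like
layout, not the actual defaults:

```yaml
# Hypothetical btrfs_mounts layout (all values illustrative)
btrfs_mounts:
  - device: /dev/sda2
    subvolume: "@"
    mountpoint: /
  - device: /dev/sda2
    subvolume: "@data"
    mountpoint: /data
  - device: /dev/sdb1        # external HDD
    subvolume: "@backup"
    mountpoint: /mnt/external
```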
Instead of having the file statically (and in plain text) in the repo
itself, we simply query `pass` for it.
The syntax is slightly cumbersome since ansible (afaik) does not allow a
similarly easy variable-enabled lookup as it does for become passwords, so
we also whipped it into a justfile to avoid typing it each time.
The command line uses cat to receive the password as a 'file' on stdin.
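For reference, a rough in-YAML sketch of querying `pass` via the `pipe`
lookup -- the entry path and variable name are made up, and in the repo the
actual invocation is wrapped in the justfile instead:

```yaml
# Sketch only: pull a secret out of pass at runtime rather than storing it in the repo
some_service_password: "{{ lookup('ansible.builtin.pipe', 'pass show homelab/some-service') }}"
```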
This is still a HACK: it should not be hard-coded, but perhaps installed as
a runnable script on localhost for the role (e.g. `scan-paperless`) which
receives its scanner more dynamically.
Change the inclusion of backup containers so they actually work. They check
that restic is enabled globally, and that restic is enabled for the
individual stack they belong to. If either condition is not met, they do
not deploy.
This way we can simply enable restic globally with `restic_enable` and by
default all stacks will be backed up. But if we want to exclude specific
stacks from backups, we can do so with the individual
`<role>_restic_enable = False` variable.
Finally found a good way of doing this with the help of the following
Medium article: https://medium.com/opsops/is-defined-in-ansible-d490945611ae
which basically makes use of default fallbacks instead of `is defined` checks.
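The resulting condition on a backup container looks roughly like this (the
variable names follow the `<role>_restic_enable` pattern; jellyfin and the
container options are illustrative):

```yaml
# Deploy the stack's backup container only if restic is enabled globally
# AND not disabled for this particular stack (defaults instead of 'is defined')
- name: Deploy restic backup container for jellyfin
  community.docker.docker_container:
    name: restic-jellyfin
    image: "{{ restic_image }}"
    # ... remaining container options elided ...
  when:
    - restic_enable | default(false)
    - jellyfin_restic_enable | default(true)
```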
Each role with outward-facing ingress needs depends on caddy, since they all
require the availability of the 'caddy' network, which is set up in that
role.
Caddy in turn depends on docker.
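In role terms this is just a meta dependency chain; the jellyfin role name
below stands in for any ingress-facing role, and the exact role names are
illustrative:

```yaml
# roles/jellyfin/meta/main.yml (role name illustrative):
# ensures the 'caddy' network exists before this stack is deployed
dependencies:
  - role: caddy
---
# roles/caddy/meta/main.yml:
# caddy in turn needs a running docker engine
dependencies:
  - role: geerlingguy.docker
```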
If we only tag the geerlingguy docker role as docker, we do not always
install the python dependencies necessary for later working with the
ansible docker compose and network modules.
By applying the docker tag to those tasks as well, we can target
'--tags=docker' on a playbook run and be sure that all later roles will
have the correct dependencies.
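A sketch of the tagging; the python package name and play layout are
illustrative:

```yaml
- hosts: all
  roles:
    - role: geerlingguy.docker
      tags: [docker]
  tasks:
    - name: Install python bindings for the docker compose and network modules
      ansible.builtin.apt:
        name: python3-docker
        state: present
      tags: [docker]   # same tag, so '--tags=docker' pulls these in as well
```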
If our chosen backup repo is a local one, each restic container needs to
mount the local path as a volume; otherwise the data is stuck inside the
container itself.
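Sketch of the extra bind mount; the variable names and paths are
illustrative:

```yaml
- name: Deploy restic backup container with a local repository
  community.docker.docker_container:
    name: restic-jellyfin
    image: "{{ restic_image }}"
    volumes:
      - "{{ restic_local_repo_path }}:/repo"   # host path holding the repo, so data survives the container
      - "{{ jellyfin_data_dir }}:/data:ro"     # data to back up, mounted read-only
    env:
      RESTIC_REPOSITORY: /repo
```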