Compare commits

...

66 commits

Author SHA1 Message Date
c0a264f1e8
chore(forgejo): Upgrade to LTS v11 2025-06-22 11:32:48 +02:00
6e3b5f47c7
chore(linkding): Move to ghcr repository 2025-05-21 16:22:00 +02:00
814f1e008f
feat(docker): Add docker stack cleaning role
Runs before setting up any new stacks or pursuing other modifications to
docker deployments.

Brings down any stack which is not currently defined in a role. This
makes the whole installation more idempotent since we take care to not
only bring _up_ any necessary docker containers, but also bring _down_
those that have become unnecessary.
2025-03-19 17:04:22 +01:00
4671801a84
fix(repo): Remove production inventory from non-production branch 2025-03-19 17:04:21 +01:00
274b314a9e
ref(linkding): Replace shaarli with linkding
Deprecate shaarli and remove it from the default site setup.
2025-03-16 00:22:08 +01:00
33d19e9373
feat(linkding): Add linkding stack
Bookmarking software similar to shaarli but a bit more featureful. And
not written in php, thankfully.
2025-03-15 22:46:02 +01:00
83613f6d86
feat(roles): Add auto updating to some roles
Miniflux, searx, shaarli and wallabag will be automatically updated by
shepherd.
2025-03-15 22:29:55 +01:00
9f3274dae7
feat(landingpage): Automatically update 2025-03-15 22:29:54 +01:00
fecf14a5bc
feat(site): Change out diun with shepherd 2025-03-15 22:29:54 +01:00
2dfe9f9b92
feat(shepherd): Add auto update shepherd role
Deprecates diun, as shepherd provides a simpler implementation for docker
swarm. Mark any containers you want auto-updated with
`shepherd.autoupdate=true` and the rest with
`shepherd.autoupdate=false`. Everything untagged will not be watched (by
default), though this can be changed by setting the ansible default
variable `shepherd_filter_services`.
2025-03-15 22:29:53 +01:00
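A minimal sketch of the labelling this commit describes, mirroring the `deploy` labels the stack templates in this diff use (service and image names here are placeholders):

```yaml
version: "3.7"
services:
  app:
    image: nginx:latest # any swarm service
    deploy:
      labels:
        # opt this service in to shepherd's auto-updates
        - shepherd.autoupdate=true
```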
bc9104c3e8
chore(landingpage): Fix container image url 2025-03-15 22:29:52 +01:00
3418f85ffd
chore(landingpage): Switch to ghcr hosted docker image 2025-03-15 22:29:52 +01:00
ea077958ce
fix(forgejo): Update to correct woodpecker versions 2025-02-16 21:45:14 +01:00
7543170f75
chore(restic): By default run check every Sunday night
And check a larger subset (15%) of the data.
2025-02-03 21:36:18 +01:00
90e45cacda
chore(restic): Do not require caddy id for the role 2025-02-03 21:35:44 +01:00
a4ccdb9884
fix(restic): Fix docker stack environment variables 2025-02-03 21:35:25 +01:00
0d7e99763f
feat(nextcloud): Add caddy server HSTS preload, webfinger 2025-02-03 21:34:58 +01:00
1a3fd9160e
fix(restic): Add role to site deployment 2025-02-03 18:59:58 +01:00
557f20d7b4
feat(shaarli): Add backups
Add restic backup functionality for shaarli data.
2025-02-03 18:58:12 +01:00
af4cfc5a4b
fix(nextcloud): Default to backups enabled
Backups should be enabled by default if available.
2025-02-03 18:57:52 +01:00
135aadf3a0
feat(restic): Add restic backup maintenance stack
Sets up regular backup maintenance for a restic (S3) backend, and
enables global variables for other roles to use for their individual
backups. An example can be found in the nextcloud role.
2025-02-03 18:45:33 +01:00
eaeeb4ed6c
feat(nextcloud): Add simple restic backup 2025-01-28 16:50:33 +01:00
36ff0fb5fa
feat(nextcloud): Add imaginary container for thumbnails 2025-01-28 15:55:52 +01:00
7e1381913c
chore(nextcloud): Update to Nextcloud 30 2025-01-28 15:55:28 +01:00
fa9bac81af
feat(nextcloud): Add adjustable php memory/upload limits
Can be adjusted through nextcloud default settings.
2025-01-05 20:48:25 +01:00
84dcf7d128
feat(forgejo): Allow setting S3 checksum algorithm as variable
Can take either `default` (for MinIO, garage, AWS) or `md5` (Cloudflare,
Backblaze).
2024-09-28 10:30:58 +02:00
a6b8e6ffcd
chore(forgejo): Update to forgejo 8 2024-09-27 10:43:17 +02:00
46b6b9a8a4
chore(forgejo): Fix mailer tls protocol configuration
Update configuration for mailer to use new 'PROTOCOL' configuration
option instead of old 'IS_TLS_ENABLED'.
2024-09-27 10:05:33 +02:00
409f50a5ef
feat(forgejo): Allow enabling git lfs 2024-09-27 09:42:47 +02:00
0658971dbb
chore(forgejo): Update mailer settings for new configuration
Split 'SMTP_HOST' variable into 'SMTP_ADDR' and 'SMTP_PORT' to follow
updated configuration style.
2024-09-27 09:42:27 +02:00
174ad5a5fb
feat(forgejo): Add s3 configuration options
Sets s3 storage for all available subsystems, more information here:
https://forgejo.org/docs/latest/admin/storage/

Does *not* set repositories to be hosted on s3 since forgejo does not
support it.
2024-09-27 08:36:41 +02:00
29ccedf146
fix(forgejo): Fix default landing page configuration
Was missing an underscore to be set correctly.
2024-09-27 08:35:13 +02:00
801d4b751b
Update Nextcloud major version to 29 2024-06-27 18:23:35 +02:00
be875edea9
Only update docker when run explicitly
Docker should only be updated when run explicitly as it currently
requires a re-run of the complete playbook afterwards (e.g. it does not
work for single-tag deployments) since it will recreate the caddy
container and lose all reverse proxy information.
2024-06-27 18:23:15 +02:00
e8447a6289
Add diun role 2024-06-25 12:20:46 +02:00
b6f7934c5f
Add gitea as potential woodpecker agent target
In addition to the connected forgejo instance, we can now also target a
remote gitea instance for woodpecker agents, should we want to.
2024-06-24 22:02:39 +02:00
86dd20fbf0
Remove some services from default deployment
Services I have not used or not used for a long time will now not be
deployed by default (but could still be specifically targeted through
tags).
2024-06-24 20:51:40 +02:00
b3f201ed7d
Pin exact caddy version
Stay on the exact version unless it is specifically told to upgrade.
This is a first-step workaround for the (non-)idempotency issue of the
caddy container's json config injection.
2024-06-24 20:50:58 +02:00
c498b3ced8
Apply prettier formatting 2024-06-24 20:36:55 +02:00
6b4c4ccde4
Update dependencies to enable easy single-tag deployments
Previously every deployment (even just for a single tag, such as
`ansible-playbook site.yml --tags landingpage`) would have the caddy
deployment in its dependency.

That meant in effect whenever there was an updated caddy image, the role
would update it and we would lose all previous caddy configuration -
which in turn would necessitate a complete redeployment of all steps.
This is now not the case anymore.
2024-06-24 20:24:04 +02:00
3171aa5ead
Make zerossl usage depend on having an api key 2024-06-24 18:56:37 +02:00
9ec5b6dec6
Switch site playbook to use forgejo 2024-06-24 18:30:34 +02:00
648f49a847
Move from gitea to forgejo
Moved all variables over; moved git passthrough script to new location
and naming scheme; moved settings and mentions of gitea name; switched
ci woodpecker instance to use forgejo instead of gitea.
2024-06-24 18:17:01 +02:00
b6e30811dc
Fix shaarli version and image source
Shaarli images moved a while ago and received a different tag naming scheme.
So we changed to the new repository and renamed the version from latest
to release.
2024-04-11 13:08:06 +02:00
b3d84b6075
Set Nextcloud php upload limit to 2GB 2024-04-11 13:07:22 +02:00
38b32a66e5
Reduce gitea healthy-await delay
We waited for 60 seconds previously, which is exactly when the
supplied ssh key would disappear in my setup. So instead we
wait slightly shorter (55 seconds) to ease this for me.
2024-04-11 13:07:10 +02:00
7fb14b07a8
Remove nextcloud db readiness check
We instead just wait for the db to be up with the usual docker
wait commands. A little more brittle but the old method ceased
to work.
2024-04-11 13:06:10 +02:00
ff49856107
Pin Nextcloud to current stable version 2024-04-11 13:05:15 +02:00
948ca7517a
Always update docker requirements to latest versions 2024-04-11 13:05:01 +02:00
d3f65a07fb
Fix wget healthchecks to not use localhost
For some reason, current wget versions error out when using localhost instead
of 127.0.0.1 as the healthcheck for docker services. Probably has something
to do with dns resolution - either on the docker or wget end, but I have not
looked too deep into it.
2024-04-11 13:04:28 +02:00
bc7796710a
Pin Nextcloud version to current stable release 2023-12-08 22:50:01 +01:00
26cceccfd9
Update Nextcloud internal Caddyfile
Add suggested security improvements and static file
caching.
2023-12-08 22:49:43 +01:00
388a1d8cfc
Separate caddy container id grabbing into own role
Since other roles often rely on this, not on an actual new caddy server
installation, we should probably have it as its own little role.
2023-12-08 20:35:51 +01:00
a52cab2f61
Refactor wallabag stack name and repo variables
Brought in line with other stack naming schemes.
2023-12-08 20:34:41 +01:00
9cf43d0d5d
Fix new stat module checksum option
In the stat module, get_md5 has been replaced by get_checksum.
2023-12-08 20:34:07 +01:00
d4dbeb4eb4
Improve gitea stability on first launch
When launching many containers, waiting for the gitea admin still sometimes gets stuck.
This should provide a bandaid for now. Also improves the container detection.
2023-12-08 20:31:15 +01:00
2d01350fa5
Switch to new landingpage and remove old blog
New landingpage includes the blog itself to better
integrate with the main page. Also runs on astro,
not on hugo, which I am a little more familiar with.
2023-12-08 20:28:44 +01:00
7d8408f9f8
Change become arguments to boolean
Changed all 'become: ' values from 'yes' to 'true' to satisfy the schema
(and also make the lsp shut up).
2022-12-18 16:02:32 +01:00
385cb3859c
Remove whoami from default site playbook
whoami should be used as a test and debugging container and should not
be necessary or used for production deployment.
2022-12-18 15:53:26 +01:00
1ceee17eda
Add local test setup to ignored files 2022-12-18 15:50:23 +01:00
926f1f475f
Fix ntfy settings
Fixed numeric settings for ntfy and corrected an executed command.
2022-12-18 15:47:14 +01:00
8aaefd3f60
Fix gitea admin deployment to be less brittle
Admin deployment was very timing-dependent: If the server took a while
to set it up, it would always error out while deploying. This commit
adds sufficient grace-time into the admin request call before the error
occurs which should avoid it in most deployments (unless the server is
severely underpowered or over-taxed).

Also fixes admin creation to avoid root usage in the container when it
is not called for.
2022-12-18 12:00:33 +01:00
32b1b13ef4
Add ntfy role
Installs and configures the ntfysh server to enable notifications.
2022-01-23 20:00:47 +01:00
1e0643352d
Fix gitea admin setup, Add healthcheck
Added healthcheck to gitea database container.

Fixed initial admin setup checks - uses correct in-container user and
fixed fail checks.
2022-01-22 10:48:31 +01:00
06bb34891e
Add simple ci deployment 2021-12-22 18:02:18 +01:00
3ee003f94c
Fix blog upstream setting
Removed the accidental setting of the landingpage upstream and switched its
alias to blog instead.
2021-12-19 10:09:25 +01:00
141 changed files with 1918 additions and 820 deletions

.gitignore
View file

@ -60,3 +60,4 @@ tags
# End of https://www.toptal.com/developers/gitignore/api/vim,linux,vagrant,ansible
development.yml
single-test.yml

View file

@ -12,7 +12,7 @@ vagrant plugin install vagrant-hosts vagrant-hostsupdater
```
Additionally, since the test setup mirrors the production setup in that it makes use of subdomains for the individual hosted applications,
the server needs to be reachable under a domain name,
not just an IP address.
For now this is most simply accomplished through editing the hosts file, e.g.:
@ -23,21 +23,20 @@ For now this is most simply accomplished through editing the hosts file, e.g.:
```
This will allow you to reach the main domain under `http(s)://ansible.test` and sets up two subdomains that can be reached.
Be aware that the hosts file does not support subdomain wildcards.
You will have to specify each hostname individually or use a tool such as `dnsmasq`.
Read more [here](https://serverfault.com/questions/118378/in-my-etc-hosts-file-on-linux-osx-how-do-i-do-a-wildcard-subdomain).
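For illustration, a hypothetical hosts file entry set (the IP address depends on your local Vagrant network):

```
192.168.56.10 ansible.test
192.168.56.10 www.ansible.test
192.168.56.10 git.ansible.test
```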
Then you are ready to run the complete infrastructure setup locally,
simply by executing `ansible-playbook site.yml`.
You can of course pick and choose what should be executed with host limits, tags, group variables, and so on,
but this should provide an easy way to see if a) the playbook is working as intended and b) what it does is useful.
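As a concrete sketch (the tag and group names are examples from this repository; adapt them to your own inventory):

```sh
# run the complete playbook
ansible-playbook site.yml
# run only the landingpage role, limited to the prod group
ansible-playbook site.yml --tags landingpage --limit prod
```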
## Deployment
Most variables to be changed should be set either through `group_variables` or `host_variables`.
For my deployment I have a `production` group under `group_variables` which houses both a `vars.yml` containing basic variables
(like `server_domain`, `caddy_email`, etc.)
and a `vault.yml` which houses everything that should ideally not be lying around in plain-text
(individual container and database passwords for the various roles etc).
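A sketch of that layout (directory names assumed from the description, not prescribed by the repository):

```
group_vars/
  production/
    vars.yml   # server_domain, caddy_email, ...
    vault.yml  # ansible-vault encrypted: container and database passwords
```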

View file

@ -1,21 +1,21 @@
---
docker_swarm_advertise_addr: eth1
caddy_use_debug: yes
caddy_tls_use_staging: yes
blog_use_https: no
caddy_use_https: no
gitea_use_https: no
forgejo_use_https: no
landingpage_use_https: no
miniflux_use_https: no
monica_use_https: no
nextcloud_use_https: no
ntfy_use_https: no
searx_use_https: no
shaarli_use_https: no
traggo_use_https: no
wallabag_use_https: no
whoami_use_https: no
server_domain: ansible.test

View file

@ -1,8 +0,0 @@
prod:
  hosts:
    ssdnodes:
docker_swarm_manager_node:
  hosts:
    ssdnodes:

View file

@ -1,37 +0,0 @@
# landingpage
The public face of my server.
Not much to see here honestly,
just a few simple lines of html explaining what this server is about and how to contact me.
I don't see anybody else benefiting massively from this role but me,
but if you want the same web presence go for it I suppose 😉
## Defaults
```
landingpage_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
```
The on-target directory where the proxy configuration file should be stashed.
```
landingpage_use_https: true
```
Whether the service should be reachable through http (port 80) or through https (port 443) and provision an https certificate. Usually you will want this to stay `true`.
```
landingpage_version: latest
```
The docker image version to be used in stack creation.
```
subdomain_alias: www
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `www.yourdomain.com` -
if this option is not set it will be served on `landingpage.yourdomain.com` instead.

View file

@ -1,11 +0,0 @@
---
# never got around to removing the master tag from the images
blog_version: master
blog_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
blog_use_https: true
# the subdomain link blog will be reachable under
# subdomain_alias: blog

View file

@ -1,14 +0,0 @@
---
galaxy_info:
  author: Marty Oehme
  description: Installs my personal public facing landing page as a docker stack service
  license: GPL-3.0-only
  min_ansible_version: 2.9
  galaxy_tags: []
dependencies:
  - docker
  - docker-swarm
  - caddy

View file

@ -1,20 +0,0 @@
version: '3.4'
services:
app:
image: "{{ stack_image }}:{{ blog_version }}"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
entrypoint: sh -c "/docker-entrypoint.sh nginx -g 'daemon off;'"
networks:
- "{{ docker_swarm_public_network_name }}"
networks:
"{{ docker_swarm_public_network_name }}":
external: true

View file

@ -1,7 +1,7 @@
# Caddy
Caddy is the reverse proxy for all other services running on the infrastructure.
It was chosen for its relative ease of use,
interactable API and https-by-default setup.
## Variables
@ -48,28 +48,27 @@ caddy_version: alpine
Sets the docker image version to be used.
## Internal variables
```yaml
caddy_stack:
  name: caddy
  compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"
```
Defines the actual docker stack which will later run on the target.
The name can be changed and will be used as a proxy target (`caddy.mydomain.com` or `192.168.1.1/caddy`) ---
though to be clear, there is currently no intention to expose caddy itself to the web.\
The compose option defines which template to use for the `docker-stack.yml` file. You can either change options for the stack in the template file,
or directly here like the following:
```yaml
compose:
  - "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"
  - version: "3"
    services:
      another-container:
        image: nginx:latest
        # ...
```

View file

@ -1,6 +1,5 @@
---
caddy_version: alpine
caddy_version: 2.8.4-alpine # tag exact version to avoid surprising container renewals
caddy_caddyfile_dir: "{{ docker_stack_files_dir }}/caddy"
caddy_use_debug: no
@ -9,3 +8,4 @@ caddy_use_https: yes
caddy_tls_use_staging: no
# caddy_email: your@email.here
# caddy_zerossl_api_key: your-zerossl-key-here-its-free

View file

@ -1,5 +1,3 @@
---
dependencies:
- docker
- docker-swarm

View file

@ -5,9 +5,9 @@
ansible.builtin.file:
path: "{{ caddy_caddyfile_dir }}"
state: directory
mode: '0755'
mode: "0755"
become: true
tags:
- fs
- name: Ensure Caddyfile exists
@ -27,47 +27,9 @@
compose:
- "{{ caddy_stack.compose }}"
when: caddy_stack is defined
become: yes
become: true
tags:
- docker-swarm
- name: Get caddy container info
ansible.builtin.command:
cmd: docker ps -q -f name={{ caddy_stack.name }}
become: yes
# bringing up the container takes some time, we have to wait
until: caddy_container_info['rc'] == 0 and caddy_container_info['stdout'] | length >= 1
retries: 5
delay: 10
changed_when: False
register: caddy_container_info
- name: Register caddy container id
ansible.builtin.set_fact: caddy_container_id={{ caddy_container_info['stdout'] }}
notify:
- debug caddy container
# FIXME this should be taken care of in Dockerfile not here
- name: Ensure caddy curl available
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
apk add curl
become: yes
register: result
changed_when: "'Installing' in result.stdout"
- name: Ensure caddy api is responsive
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/config/
become: yes
until: result.rc == 0
when: caddy_use_api == True
changed_when: False
register: result
# TODO FIXME UP
# - name: Allow access to services
# firewalld:

View file

@ -51,17 +51,19 @@
{% if caddy_tls_use_staging is sameas true %}
"ca": "https://acme-staging-v02.api.letsencrypt.org/directory",
{% endif %}
{%- if caddy_email is not undefined and not none %}
"email": "{{ caddy_email }}",
{% endif %}
"module": "acme"
{%- if caddy_zerossl_api_key is not undefined and not none %}
},
{
{%- if caddy_email is not undefined and not none %}
"email": "{{ caddy_email }}",
{% endif %}
"api_key": "{{ caddy_zerossl_api_key }}",
"module": "zerossl"
}
{% else %}
}
{% endif %}
]
}
]

View file

@ -5,7 +5,7 @@ services:
image: caddy:{{ caddy_version }}
command: caddy run --config /etc/caddy/config.json
healthcheck:
test: ["CMD", "wget", "--quiet", "--spider", "--tries=1", "http://localhost:2019/metrics"]
test: ["CMD", "wget", "--quiet", "--spider", "--tries=1", "http://127.0.0.1:2019/metrics"]
interval: 1m
timeout: 10s
retries: 3

View file

@ -1,5 +1,4 @@
---
caddy_stack:
name: caddy
compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"

roles/caddy_id/README.md
View file

@ -0,0 +1,83 @@
# Caddy
Caddy is the reverse proxy for all other services running on the infrastructure.
It was chosen for its relative ease of use,
interactable API and https-by-default setup.
## Variables
```
caddy_caddyfile_dir: "{{ docker_stack_files_dir }}/caddy"
```
Sets up the on-target directory where important caddy files should be stored.
```
caddy_email: <your@email.here>
```
Which e-mail should be used to provision https certificates with. I believe theoretically caddy will work and provision you with certificates even without providing an e-mail, but I would strongly urge providing one.
```
caddy_tls_use_staging: no
```
If turned on will use the staging servers of the acme certificate service, which is useful for testing and playing around with https (due to higher API limits and less severe restrictions).
```
caddy_use_api: yes
```
If turned off, will turn off the admin api for caddy. Should only be used if no other services are intended to be provisioned on the target, since most other service stacks rely on the API to set up their proxy targets.
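The service roles use this API from inside the caddy container; a minimal manual check could look like the following (the `forgejo_upstream` id is only an example):

```sh
# dump the full running caddy configuration
curl localhost:2019/config/
# look up a single route by its @id, as the roles do for their upstreams
curl localhost:2019/id/forgejo_upstream/
```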
```
caddy_use_debug: no
```
If true, will turn on caddy's debug logging.
```
caddy_use_https: yes
```
If turned off, will turn off all auto-provisioning of https certificates by caddy.
```
caddy_version: alpine
```
Sets the docker image version to be used.
## Internal variables
```yaml
caddy_stack:
  name: caddy
  compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"
```
Defines the actual docker stack which will later run on the target.
The name can be changed and will be used as a proxy target (`caddy.mydomain.com` or `192.168.1.1/caddy`) ---
though to be clear, there is currently no intention to expose caddy itself to the web.\
The compose option defines which template to use for the `docker-stack.yml` file. You can either change options for the stack in the template file,
or directly here like the following:
```yaml
compose:
  - "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"
  - version: "3"
    services:
      another-container:
        image: nginx:latest
        # ...
```
```yaml
caddy_http_server_name: http
```
```yaml
caddy_https_server_name: https
```
The internal representation of the http and https servers respectively.

View file

@ -0,0 +1,3 @@
---
dependencies:
- docker-swarm

View file

@ -0,0 +1,39 @@
---
# get the caddy container id for all other containers
- name: Get caddy container info
ansible.builtin.command:
cmd: docker ps -q -f name={{ caddy_stack.name }}
become: true
# bringing up the container takes some time, we have to wait
until: caddy_container_info['rc'] | default('') == 0 and caddy_container_info['stdout'] | length >= 1
retries: 5
delay: 10
changed_when: False
register: caddy_container_info
- name: Register caddy container id
ansible.builtin.set_fact: caddy_container_id={{ caddy_container_info['stdout'] }}
notify:
- debug caddy container
# FIXME this should be taken care of in Dockerfile not here
- name: Ensure caddy curl available
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
apk add curl
become: true
register: result
changed_when: "'Installing' in result.stdout"
- name: Ensure caddy api is responsive
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/config/
become: true
until: result.rc | default('') == 0
when: caddy_use_api == True
changed_when: False
register: result

View file

@ -0,0 +1,72 @@
{
{% if caddy_use_api is sameas false %}
"admin": {
"disabled": true
},
{% endif %}
{% if caddy_use_debug is sameas true %}
"logging": {
"logs": {
"default": {
"level": "DEBUG"
}
}
},
{% endif %}
"apps": {
"http": {
"servers": {
"{{ caddy_http_server_name }}": {
"listen": [
":80"
],
"routes": []
{% if caddy_use_https is sameas false %},
"automatic_https": {
"disable": true
}
{% endif %}
},
"{{ caddy_https_server_name }}": {
"listen": [
":443"
],
"routes": []
{% if caddy_use_https is sameas false %},
"automatic_https": {
"disable": true
}
{% endif %}
}
}
}
{% if caddy_use_https is sameas true %},
"tls": {
"automation": {
"policies": [
{
"subjects": [],
"issuers": [
{
{% if caddy_tls_use_staging is sameas true %}
"ca": "https://acme-staging-v02.api.letsencrypt.org/directory",
{% endif %}
{%- if caddy_email is not undefined and not none %}
"email": "{{ caddy_email }}",
{% endif %}
"module": "acme"
},
{
{%- if caddy_email is not undefined and not none %}
"email": "{{ caddy_email }}",
{% endif %}
"module": "zerossl"
}
]
}
]
}
}
{% endif %}
}
}

View file

@ -0,0 +1,30 @@
version: "3.7"
services:
app:
image: caddy:{{ caddy_version }}
command: caddy run --config /etc/caddy/config.json
healthcheck:
test: ["CMD", "wget", "--quiet", "--spider", "--tries=1", "http://127.0.0.1:2019/metrics"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
ports:
- "80:80"
- "443:443"
volumes:
- "{{ caddy_caddyfile_dir }}:/etc/caddy"
- "{{ docker_stack_files_dir }}:/stacks:ro"
- data:/data
- config:/config
networks:
- "{{ docker_swarm_public_network_name }}"
volumes:
data:
config:
networks:
"{{ docker_swarm_public_network_name }}":
external: true

View file

@ -0,0 +1,5 @@
---
caddy_stack:
  name: caddy
caddy_use_api: yes # if set to 'no', turns off the api interface; it is *required* for other swarm roles to be routed

roles/diun/README.md
View file

@ -0,0 +1,5 @@
# diun
Monitor the deployed swarm containers for updates.
Will notify you when it finds an update for any container.
Can (currently) notify you either through mail or on matrix.
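A sketch of the matrix notification variables this requires (placeholder values; the complete list lives in the role defaults):

```yaml
diun_notif_matrix_url: "https://matrix.org"
diun_notif_matrix_user: "@mybot:matrix.org"
diun_notif_matrix_password: "changeme"
diun_notif_matrix_roomid: "!roomid:matrix.org"
```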

View file

@ -0,0 +1,26 @@
---
diun_version: 4
diun_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
diun_use_https: true
# the subdomain link diun will be reachable under
subdomain_alias: diun
diun_tz: Europe/Berlin
diun_log_level: info
diun_watch_swarm_by_default: true
diun_notif_mail_host: localhost
diun_notif_mail_port: 25
# diun_notif_mail_username: required for mail
# diun_notif_mail_password: required for mail
# diun_notif_mail_from: required for mail
# diun_notif_mail_to: required for mail
diun_notif_matrix_url: "https://matrix.org"
#diun_notif_matrix_user: required for matrix
#diun_notif_matrix_password: required for matrix
#diun_notif_matrix_roomid: required for matrix

roles/diun/meta/main.yml
View file

@ -0,0 +1,10 @@
---
galaxy_info:
  author: Marty Oehme
  description: Notify on any docker swarm container updates
  license: GPL-3.0-only
  min_ansible_version: "2.9"
  galaxy_tags: []
dependencies:
  - docker-swarm

roles/diun/tasks/main.yml
View file

@ -0,0 +1,12 @@
---
## install diun container
- name: Deploy diun to swarm
  community.general.docker_stack:
    name: "{{ stack_name }}"
    state: present
    prune: yes
    compose:
      - "{{ stack_compose }}"
  become: true
  tags:
    - docker-swarm

View file

@ -0,0 +1,51 @@
version: '3.4'
services:
app:
image: crazymax/diun:latest
# healthcheck:
# test: ["CMD", "wget", "--spider", "-q", "127.0.0.1"]
# interval: 1m
# timeout: 10s
# retries: 3
# start_period: 1m
command: serve
volumes:
- "data:/data"
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
- "TZ={{ diun_tz }}"
- "LOG_LEVEL={{ diun_log_level }}"
- "LOG_JSON=false"
- "DIUN_WATCH_WORKERS=20"
- "DIUN_WATCH_SCHEDULE=0 */6 * * *"
- "DIUN_WATCH_JITTER=30s"
- "DIUN_PROVIDERS_SWARM=true"
- "DIUN_PROVIDERS_SWARM_WATCHBYDEFAULT={{ diun_watch_swarm_by_default }}"
{% if diun_notif_matrix_user is not undefined and not None and diun_notif_matrix_password is not undefined and not None and diun_notif_matrix_roomid is not undefined and not None %}
- "DIUN_NOTIF_MATRIX_HOMESERVERURL={{ diun_notif_matrix_url }}"
- "DIUN_NOTIF_MATRIX_USER={{ diun_notif_matrix_user }}"
- "DIUN_NOTIF_MATRIX_PASSWORD={{ diun_notif_matrix_password }}"
- "DIUN_NOTIF_MATRIX_ROOMID={{ diun_notif_matrix_roomid }}"
{% endif %}
{% if diun_notif_mail_username is not undefined and not None and diun_notif_mail_password is not undefined and not None and diun_notif_mail_from is not undefined and not None and diun_notif_mail_to is not undefined and not None %}
- "DIUN_NOTIF_MAIL_HOST={{ diun_notif_mail_host }}"
- "DIUN_NOTIF_MAIL_PORT={{ diun_notif_mail_port }}"
- "DIUN_NOTIF_MAIL_USERNAME={{ diun_notif_mail_username }}"
- "DIUN_NOTIF_MAIL_PASSWORD={{ diun_notif_mail_password }}"
- "DIUN_NOTIF_MAIL_FROM={{ diun_notif_mail_from }}"
- "DIUN_NOTIF_MAIL_TO={{ diun_notif_mail_to }}"
{% endif %}
# deploy:
# mode: replicated
# replicas: 1
# placement:
# constraints:
# - node.role == manager
volumes:
data:
networks:
"{{ docker_swarm_public_network_name }}":
external: true

View file

@ -1,7 +1,6 @@
---
stack_name: diun
stack_name: blog
stack_image: "registry.gitlab.com/cloud-serve/blog"
stack_image: "crazymax/diun"
stack_compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"

View file

@ -0,0 +1,12 @@
---
- name: Get running docker stacks
  community.docker.docker_stack_info:
  register: running_stacks
  become: true
- name: Remove stacks without matching role
  community.docker.docker_stack:
    name: "{{ item.Name }}"
    state: "absent"
  loop: "{{ running_stacks.results | rejectattr('Name', 'in', role_names) }}"
  become: true

View file

@ -1,5 +1,3 @@
---
docker_stack_files_dir: /stacks
docker_swarm_public_network_name: public

View file

@ -0,0 +1,3 @@
---
dependencies:
- docker

View file

@ -28,7 +28,7 @@
ansible.builtin.file:
path: "{{ docker_stack_files_dir }}"
state: directory
mode: '0755'
mode: "0755"
become: true
tags:
- fs

View file

@ -4,4 +4,4 @@
state: started
enabled: yes
daemon_reload: yes
become: yes
become: true

View file

@ -1,7 +1,7 @@
- name: Ensure requirements installed
ansible.builtin.package:
name: "{{ requisites }}"
state: present
state: latest
update_cache: yes
tags:
- apt
@ -11,11 +11,14 @@
- name: Ensure docker GPG apt key exists
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
url: "https://download.docker.com/linux/ubuntu/gpg"
state: present
tags:
- apt
- repository
# FIXME: Needs a 'until:' defined for the retries to actually work
retries: 3
delay: 5
become: true
- name: Ensure docker repository exists
@ -27,10 +30,10 @@
- repository
become: true
- name: Ensure latest docker-ce installed
- name: docker-ce is installed
ansible.builtin.package:
name: "{{ packages }}"
state: latest
state: present
tags:
- apt
- download
@ -38,9 +41,22 @@
become: true
notify: Handle docker daemon
- name: Latest docker-ce is installed
ansible.builtin.package:
name: "{{ packages }}"
state: latest
tags:
- apt
- download
- packages
- docker
- never
become: true
notify: Handle docker daemon
- name: Ensure docker requisites for python installed
pip:
name:
- docker
- jsondiff
- pyyaml

roles/forgejo/README.md
View file

@ -0,0 +1,40 @@
# forgejo
A relatively light-weight git hosting server.
## Defaults
```
forgejo_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
```
The on-target directory where the proxy configuration file should be stashed.
```
forgejo_use_https: true
```
Whether the service should be reachable through http (port 80) or through https (port 443) and provision an https certificate. Usually you will want this to stay `true`.
```
forgejo_version: latest
```
The docker image version to be used in stack creation.
```
subdomain_alias: git
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `git.yourdomain.com` -
if this option is not set it will be served on `forgejo.yourdomain.com` instead.
For now forgejo will still need to be initially set up after installation.
This could be automated with the help of these commands:
```sh
docker run --name forgejo -p 8080:3000 -e FORGEJO__security__INSTALL_LOCK=true -d codeberg.org/forgejo/forgejo:7
docker exec forgejo forgejo migrate
docker exec forgejo forgejo admin user create --admin --username root --password admin1234 --email admin@example.com
```

View file

@ -0,0 +1,50 @@
---
forgejo_version: 11
forgejo_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
forgejo_use_https: true
# the subdomain link forgejo will be reachable under
subdomain_alias: git
subdomain_ci_alias: ci
forgejo_db_database: forgejo
forgejo_db_username: forgejo
forgejo_db_password: forgejo
forgejo_app_admin_username: Myforgejousername # can not be set to admin in Forgejo
forgejo_app_admin_password: Myforgejopassword
forgejo_app_admin_email: myadmin@mydomain.mytld
# forgejo_smtp_addr: domain.com
# forgejo_smtp_port: 465
# forgejo_smtp_username: my@username.com
# forgejo_smtp_password: <password>
# forgejo_smtp_protocol: smtps # can be one of starttls | smtps
forgejo_use_lfs: false
forgejo_lfs_max_filesize: 0
forgejo_lfs_http_auth_expiry: 24h
# forgejo_lfs_jwt_secret:
forgejo_use_ci: false
# forgejo_ci_github_client:
# forgejo_ci_github_secret:
# forgejo_ci_gitlab_client:
# forgejo_ci_gitlab_secret:
# forgejo_ci_forgejo_client:
# forgejo_ci_forgejo_secret:
# forgejo_ci_gitea_url:
# forgejo_ci_gitea_client:
# forgejo_ci_gitea_secret:
forgejo_use_s3: false
forgejo_s3_use_ssl: true
forgejo_s3_bucket_lookup: auto # auto|dns|path
forgejo_s3_checksum: default # default|md5
# forgejo_s3_endpoint:
# forgejo_s3_region:
# forgejo_s3_key:
# forgejo_s3_secret:
# forgejo_s3_bucket:

View file

@ -0,0 +1,100 @@
- name: Add admin user
community.docker.docker_container_exec:
container: "{{ forgejo_app_container_name['stdout'] }}"
command: >
forgejo admin user create --admin --username {{ forgejo_app_admin_username }} --password {{ forgejo_app_admin_password }} --email {{ forgejo_app_admin_email }}
user: git
become: true
listen: "no admin user"
## Register reverse proxy
- name: Upstream directory exists
ansible.builtin.file:
path: "{{ forgejo_upstream_file_dir }}"
state: directory
mode: "0755"
become: true
listen: "update forgejo upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ forgejo_upstream_file_dir }}/upstream.json"
mode: "0600"
become: true
listen: "update forgejo upstream"
- name: Update ci upstream template
ansible.builtin.template:
src: upstream_ci.json.j2
dest: "{{ forgejo_upstream_file_dir }}/upstream_ci.json"
mode: "0600"
become: true
listen: "update forgejo upstream"
# figure out if upstream id exists
- name: check {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: true
listen: "update forgejo upstream"
# upstream already exists, patch it
- name: remove old {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update forgejo upstream"
# upstream has to be created
- name: add {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ forgejo_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (forgejo_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update forgejo upstream"
# figure out if upstream id exists
- name: check {{ stack_name }}_ci upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/id/{{ stack_name }}_ci_upstream/
changed_when: False
register: result
become: true
listen: "update forgejo upstream"
# upstream for ci already exists, patch it
- name: remove old {{ stack_name }}_ci upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_ci_upstream/
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update forgejo upstream"
# upstream for ci has to be created
- name: add {{ stack_name }}_ci upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ forgejo_upstream_file_dir }}/upstream_ci.json localhost:2019/config/apps/http/servers/{{ (forgejo_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update forgejo upstream"
- name: Ensure upstream directory is gone again
ansible.builtin.file:
path: "{{ forgejo_upstream_file_dir }}"
state: absent
become: true
listen: "update forgejo upstream"

View file

@ -1,16 +1,15 @@
---
galaxy_info:
author: Marty Oehme
description: Light-weight git hosting
license: GPL-3.0-only
min_ansible_version: 2.9
min_ansible_version: "2.9"
galaxy_tags: []
platforms:
- name: GenericLinux
versions: all
versions:
- all
dependencies:
- docker
- docker-swarm
- caddy
- caddy_id

View file

@ -0,0 +1,11 @@
---
## install requisites
- name: Ensure openssl installed
ansible.builtin.package:
name: "openssl"
state: present
become: true
tags:
- apt
- download
- packages

View file

@ -0,0 +1,132 @@
---
## Prepare woodpecker ci
- name: "Select tasks for {{ ansible_distribution }} {{ ansible_distribution_major_version }}"
include_tasks: "{{ distribution }}"
with_first_found:
- "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml"
- "{{ ansible_distribution }}.yml"
- "{{ ansible_os_family }}.yml"
loop_control:
loop_var: distribution
when: forgejo_use_ci == True
# TODO only generate when no existing (check with docker inspect?)
- name: Generate agent key
ansible.builtin.shell: openssl rand -hex 32
register: forgejo_woodpecker_agent_secret
when: forgejo_use_ci == True
- name: Set agent key
ansible.builtin.set_fact:
forgejo_woodpecker_agent_secret: "{{ forgejo_woodpecker_agent_secret.stdout }}"
when: forgejo_woodpecker_agent_secret.stdout is not undefined and not None
## Prepare forgejo
- name: Ensure git user exists with ssh key
ansible.builtin.user:
name: "{{ forgejo_git_username }}"
generate_ssh_key: yes
ssh_key_type: rsa
ssh_key_bits: 4096
ssh_key_comment: "Forgejo Host Key"
become: true
register: git_user
- name: Ensure git passthrough command directory exists
ansible.builtin.file:
path: "/app/forgejo/"
state: directory
mode: "0770"
owner: "{{ git_user['uid'] }}"
group: "{{ git_user['group'] }}"
become: true
- name: Passthrough git command is in right location
ansible.builtin.copy:
src: forgejo
dest: "/app/forgejo/forgejo"
owner: "{{ git_user['uid'] }}"
group: "{{ git_user['group'] }}"
mode: "0750"
become: true
- name: Host machine forgejo command points to passthrough command
ansible.builtin.file:
state: link
src: "/app/forgejo/forgejo"
dest: "/usr/local/bin/forgejo"
become: true
- name: Fetch keyfile
fetch:
src: "{{ git_user['home'] }}/.ssh/id_rsa.pub"
dest: "buffer/{{ansible_hostname}}-id_rsa.pub"
flat: yes
become: true
- name: Ensure git user has its own key authorized for access
ansible.posix.authorized_key:
user: "{{ git_user['name'] }}"
state: present
key: "{{ lookup('file', 'buffer/{{ ansible_hostname }}-id_rsa.pub') }}"
become: true
- name: Clean up buffer dir
ansible.builtin.file:
path: buffer
state: absent
delegate_to: localhost
## install forgejo container
- name: Check upstream status
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: true
notify: "update forgejo upstream"
- name: Deploy forgejo to swarm
community.general.docker_stack:
name: "{{ stack_name }}"
state: present
prune: yes
compose:
- "{{ stack_compose }}"
become: true
tags:
- docker-swarm
register: forgejo_deployment
notify: "update forgejo upstream"
- name: Wait a minute for forgejo to become healthy
wait_for:
timeout: 55
delegate_to: localhost
when: forgejo_deployment is changed
- name: Get app container info
ansible.builtin.command:
cmd: docker ps -q -f name={{ stack_name }}_app
become: true
until: forgejo_app_container_name['rc'] | default('') == 0 and forgejo_app_container_name['stdout'] | length >= 1
retries: 10
delay: 10
changed_when: False
register: forgejo_app_container_name
- name: Look for existing admin user
community.docker.docker_container_exec:
container: "{{ forgejo_app_container_name['stdout'] }}"
user: git
command: >
forgejo admin user list --admin
until: forgejo_admin_list is defined and forgejo_admin_list['rc'] | default('') == 0
retries: 15
delay: 20
become: true
register: forgejo_admin_list
changed_when: forgejo_admin_list['stdout_lines'] | length <= 1 and 'Username' in forgejo_admin_list['stdout']
notify: "no admin user"

View file

@ -0,0 +1,146 @@
version: '3.4'
services:
app:
image: "{{ stack_image }}:{{ forgejo_version }}"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "127.0.0.1:3000"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
volumes:
- data:/data
- /home/git/.ssh:/data/git/.ssh
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
environment:
- USER_UID={{ git_user['uid'] }}
- USER_GID={{ git_user['group'] }}
- FORGEJO__database__DB_TYPE=postgres
- FORGEJO__database__HOST=db:5432
- "FORGEJO__database__NAME={{ forgejo_db_database }}"
- "FORGEJO__database__USER={{ forgejo_db_username }}"
- "FORGEJO__database__PASSWD={{ forgejo_db_password }}"
- "FORGEJO__server__ROOT_URL={{ (forgejo_use_https == True) | ternary('https', 'http') }}://{{ (subdomain_alias is not undefined and not none) | ternary(subdomain_alias, stack_name) }}.{{server_domain}}"
- "FORGEJO__server__SSH_DOMAIN={{ (subdomain_alias is not undefined and not none) | ternary(subdomain_alias, stack_name) }}.{{server_domain}}"
- FORGEJO__server__LANDING_PAGE=explore
- FORGEJO__service__DISABLE_REGISTRATION=true
{% if forgejo_app_admin_username is not undefined and not None and forgejo_app_admin_password is not undefined and not None %}
- FORGEJO__security__INSTALL_LOCK=true
{% endif %}
{% if forgejo_smtp_addr is not undefined and not None and forgejo_smtp_port is not undefined and not None and forgejo_smtp_username is not undefined and not None and forgejo_smtp_password is not undefined and not None %}
- FORGEJO__mailer__ENABLED=true
- FORGEJO__service__ENABLE_NOTIFY_MAIL=true
- FORGEJO__mailer__FROM=forgejo@{{ server_domain }}
- FORGEJO__mailer__TYPE=smtp
- FORGEJO__mailer__SMTP_ADDR={{ forgejo_smtp_addr }}
- FORGEJO__mailer__SMTP_PORT={{ forgejo_smtp_port }}
{% if forgejo_smtp_protocol is not undefined and not none %}
- FORGEJO__mailer__PROTOCOL={{ forgejo_smtp_protocol }}
{% endif %}
- FORGEJO__mailer__USER={{ forgejo_smtp_username }}
- FORGEJO__mailer__PASSWD={{ forgejo_smtp_password }}
{% endif %}
{% if forgejo_use_lfs %}
- FORGEJO__server__LFS_START_SERVER=true
{% if forgejo_lfs_jwt_secret is not undefined and not none %}
- FORGEJO__server__LFS_JWT_SECRET={{ forgejo_lfs_jwt_secret }}
{% endif %}
- FORGEJO__server__LFS_HTTP_AUTH_EXPIRY={{ forgejo_lfs_http_auth_expiry }}
- FORGEJO__server__LFS_MAX_FILE_SIZE={{ forgejo_lfs_max_filesize }}
{% endif %}
{% if forgejo_use_s3 %}
- FORGEJO__storage__STORAGE_TYPE="minio"
- FORGEJO__storage__MINIO_USE_SSL={{ forgejo_s3_use_ssl }}
- FORGEJO__storage__MINIO_BUCKET_LOOKUP={{ forgejo_s3_bucket_lookup }}
- FORGEJO__storage__MINIO_ENDPOINT={{ forgejo_s3_endpoint }}
- FORGEJO__storage__MINIO_ACCESS_KEY_ID={{ forgejo_s3_key }}
- FORGEJO__storage__MINIO_SECRET_ACCESS_KEY={{ forgejo_s3_secret }}
- FORGEJO__storage__MINIO_BUCKET={{ forgejo_s3_bucket }}
- FORGEJO__storage__MINIO_LOCATION={{ forgejo_s3_region }}
- FORGEJO__storage__MINIO_CHECKSUM_ALGORITHM={{ forgejo_s3_checksum }}
{% endif %}
networks:
- "{{ docker_swarm_public_network_name }}"
- backend
ports:
- "127.0.0.1:2222:22"
db:
image: postgres:13
healthcheck:
test: ["CMD", "pg_isready", "-q", "-U", "{{ forgejo_db_username }}"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
volumes:
- db:/var/lib/postgresql/data
networks:
- backend
environment:
- POSTGRES_USER={{ forgejo_db_username }}
- POSTGRES_PASSWORD={{ forgejo_db_password }}
- POSTGRES_DB={{ forgejo_db_database }}
{% if forgejo_use_ci %}
wp-server:
image: woodpeckerci/woodpecker-server:v3
networks:
- "{{ docker_swarm_public_network_name }}"
- backend
volumes:
- woodpecker:/var/lib/woodpecker/
environment:
- WOODPECKER_OPEN=true
- "WOODPECKER_HOST={{ (forgejo_use_https == True) | ternary('https', 'http') }}://{{ (subdomain_ci_alias is not undefined and not none) | ternary(subdomain_ci_alias, stack_name + '_ci') }}.{{server_domain}}"
- WOODPECKER_AGENT_SECRET={{ forgejo_woodpecker_agent_secret }}
{% if forgejo_ci_github_client is not undefined and not None and forgejo_ci_github_secret is not undefined and not None %}
- WOODPECKER_GITHUB=true
- WOODPECKER_GITHUB_CLIENT={{ forgejo_ci_github_client }}
- WOODPECKER_GITHUB_SECRET={{ forgejo_ci_github_secret }}
{% endif %}
{% if forgejo_ci_gitlab_client is not undefined and not None and forgejo_ci_gitlab_secret is not undefined and not None %}
- WOODPECKER_GITLAB=true
- WOODPECKER_GITLAB_CLIENT={{ forgejo_ci_gitlab_client }}
- WOODPECKER_GITLAB_SECRET={{ forgejo_ci_gitlab_secret }}
{% endif %}
{% if forgejo_ci_forgejo_client is not undefined and not None and forgejo_ci_forgejo_secret is not undefined and not None %}
- WOODPECKER_FORGEJO=true
- "WOODPECKER_FORGEJO_URL={{ (forgejo_use_https == True) | ternary('https', 'http') }}://{{ (subdomain_alias is not undefined and not none) | ternary(subdomain_alias, stack_name) }}.{{server_domain}}"
- WOODPECKER_FORGEJO_CLIENT={{ forgejo_ci_forgejo_client }}
- WOODPECKER_FORGEJO_SECRET={{ forgejo_ci_forgejo_secret }}
{% endif %}
{% if forgejo_ci_gitea_url is not undefined and not None and forgejo_ci_gitea_client is not undefined and not None and forgejo_ci_gitea_secret is not undefined and not None %}
- WOODPECKER_GITEA=true
- "WOODPECKER_GITEA_URL={{ (forgejo_use_https == True) | ternary('https', 'http') }}://{{ (subdomain_alias is not undefined and not none) | ternary(subdomain_alias, stack_name) }}.{{server_domain}}"
- WOODPECKER_GITEA_CLIENT={{ forgejo_ci_gitea_client }}
- WOODPECKER_GITEA_SECRET={{ forgejo_ci_gitea_secret }}
{% endif %}
wp-agent:
image: woodpeckerci/woodpecker-agent:v3
networks:
- backend
command: agent
volumes:
- woodpecker-agent-config:/etc/woodpecker
- /var/run/docker.sock:/var/run/docker.sock
environment:
- WOODPECKER_SERVER=wp-server:9000
- WOODPECKER_AGENT_SECRET={{ forgejo_woodpecker_agent_secret }}
{% endif %}
volumes:
data:
db:
woodpecker:
woodpecker-agent-config:
networks:
"{{ docker_swarm_public_network_name }}":
external: true
backend:

View file

@ -0,0 +1,39 @@
{
"@id": "{{ stack_name }}_ci_upstream",
{% if server_domain is not undefined and not none %}
"match": [
{
"host": [
{% if subdomain_ci_alias is not undefined and not none %}
"{{ subdomain_ci_alias }}.{{ server_domain }}"
{% else %}
"{{ stack_name }}_ci.{{ server_domain }}"
{% endif %}
]
}
],
{% else %}
"match": [
{
"path": [
{% if subdomain_ci_alias is not undefined and not none %}
"/{{ subdomain_ci_alias }}*"
{% else %}
"/{{ stack_name }}_ci*"
{% endif %}
]
}
],
{% endif %}
"handle": [
{
"handler": "reverse_proxy",
"upstreams": [
{
"dial": "{{ stack_name }}_wp-server:8000"
}
]
}
]
}

View file

@ -0,0 +1,8 @@
---
stack_name: forgejo
stack_image: "codeberg.org/forgejo/forgejo"
stack_compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"
forgejo_git_username: git

View file

@ -1,41 +0,0 @@
# gitea
A relatively light-weight git server hosting.
## Defaults
```
gitea_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
```
The on-target directory where the proxy configuration file should be stashed.
```
gitea_use_https: true
```
Whether the service should be reachable through http (port 80) or through https (port 443) and provision an https certificate. Usually you will want this to stay `true`.
```
gitea_version: latest
```
The docker image version to be used in stack creation.
```
subdomain_alias: git
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `git.yourdomain.com` -
if this option is not set it will be served on `gitea.yourdomain.com` instead.
For now gitea will still need to be initially set up after installation.
This could be automated with the help of these commands:
```sh
docker run --name gitea -p 8080:3000 -e GITEA__security__INSTALL_LOCK=true -d gitea/gitea:1.14.2
docker exec gitea gitea migrate
docker exec gitea gitea admin user create --admin --username root --password admin1234 --email admin@example.com
```

View file

@ -1,24 +0,0 @@
---
# never got around to removing the master tag from the images
gitea_version: latest
gitea_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
gitea_use_https: true
# the subdomain link gitea will be reachable under
subdomain_alias: git
gitea_db_database: gitea
gitea_db_username: gitea
gitea_db_password: gitea
gitea_app_admin_username: Mygiteausername # can not be set to admin in Gitea
gitea_app_admin_password: Mygiteapassword
gitea_app_admin_email: myadmin@mydomain.mytld
# gitea_smtp_host: domain.com:port
# gitea_smtp_username: my@username.com
# gitea_smtp_password: <password>
# gitea_smtp_force_tls: false # forces tls if it is on a non-traditional tls port. Overwrites starttls so should generally be off

View file

@ -1,62 +0,0 @@
- name: Add admin user
community.docker.docker_container_exec:
container: "{{ gitea_app_container_name['stdout'] }}"
command: >
gitea admin user create --admin --username {{ gitea_app_admin_username }} --password {{ gitea_app_admin_password }} --email {{ gitea_app_admin_email }}
become: yes
listen: "no admin user"
## Register reverse proxy
- name: Ensure upstream directory exists
ansible.builtin.file:
path: "{{ gitea_upstream_file_dir }}"
state: directory
mode: '0755'
become: yes
listen: "update gitea upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ gitea_upstream_file_dir }}/upstream.json"
mode: '0600'
become: yes
listen: "update gitea upstream"
# figure out if upstream id exists
- name: check {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: yes
listen: "update gitea upstream"
# upstream already exists, patch it
- name: remove old {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: yes
when: (result.stdout | from_json)['error'] is not defined
listen: "update gitea upstream"
# upstream has to be created
- name: add {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ gitea_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (gitea_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: yes
listen: "update gitea upstream"
- name: Ensure upstream directory is gone again
ansible.builtin.file:
path: "{{ gitea_upstream_file_dir }}"
state: absent
become: yes
listen: "update gitea upstream"

View file

@ -1,95 +0,0 @@
---
- name: Ensure git user exists with ssh key
ansible.builtin.user:
name: "{{ gitea_git_username }}"
generate_ssh_key: yes
ssh_key_type: rsa
ssh_key_bits: 4096
ssh_key_comment: "Gitea Host Key"
become: yes
register: git_user
- name: Ensure git passthrough command directory exists
ansible.builtin.file:
path: "/app/gitea/"
state: directory
mode: '0770'
owner: "{{ git_user['uid'] }}"
group: "{{ git_user['group'] }}"
become: yes
- name: Save git passthrough command in right location
ansible.builtin.copy:
src: gitea
dest: "/app/gitea/gitea"
owner: "{{ git_user['uid'] }}"
group: "{{ git_user['group'] }}"
mode: '0750'
become: yes
- name: Fetch keyfile
fetch:
src: "{{ git_user['home'] }}/.ssh/id_rsa.pub"
dest: "buffer/{{ansible_hostname}}-id_rsa.pub"
flat: yes
become: yes
- name: Ensure git user has its own key authorized for access
ansible.posix.authorized_key:
user: "{{ git_user['name'] }}"
state: present
key: "{{ lookup('file', 'buffer/{{ ansible_hostname }}-id_rsa.pub') }}"
become: yes
- name: Clean up buffer dir
ansible.builtin.file:
path: buffer
state: absent
delegate_to: localhost
## install gitea container
- name: Check upstream status
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: yes
notify: "update gitea upstream"
- name: Deploy gitea to swarm
community.general.docker_stack:
name: "{{ stack_name }}"
state: present
prune: yes
compose:
- "{{ stack_compose }}"
become: yes
tags:
- docker-swarm
notify: "update gitea upstream"
- name: Get app container info
ansible.builtin.command:
cmd: docker ps -q -f name={{ stack_name }}_app
become: yes
until: gitea_app_container_name['rc'] == 0 and gitea_app_container_name['stdout'] | length >= 1
retries: 5
delay: 10
changed_when: False
register: gitea_app_container_name
- name: Look for existing admin user
community.docker.docker_container_exec:
container: "{{ gitea_app_container_name['stdout'] }}"
command: >
gitea admin user list --admin
become: yes
until: "'connection refused' not in gitea_admin_list and 'Failed to run app' not in gitea_admin_list"
retries: 5
delay: 10
changed_when: gitea_admin_list['stdout_lines'] | length <= 1
failed_when: gitea_admin_list['rc'] == 1 and gitea_admin_list['attempts'] >= 5
register: gitea_admin_list
notify: "no admin user"

View file

@ -1,68 +0,0 @@
version: '3.4'
services:
app:
image: "{{ stack_image }}:{{ gitea_version }}"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost:3000"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
volumes:
- data:/data
- /home/git/.ssh:/data/git/.ssh
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
environment:
- USER_UID={{ git_user['uid'] }}
- USER_GID={{ git_user['group'] }}
- GITEA__database__DB_TYPE=postgres
- GITEA__database__HOST=db:5432
- GITEA__database__NAME={{ gitea_db_database }}
- GITEA__database__USER={{ gitea_db_username }}
- GITEA__database__PASSWD={{ gitea_db_password }}
- "GITEA__server__ROOT_URL={{ (gitea_use_https == True) | ternary('https', 'http') }}://{{ (subdomain_alias is not undefined and not none) | ternary(subdomain_alias, stack_name) }}.{{server_domain}}"
- "GITEA__server__SSH_DOMAIN={{ server_domain }}"
- GITEA__server__LANDINGPAGE=explore
- GITEA__service__DISABLE_REGISTRATION=true
{% if gitea_app_admin_username is not undefined and not None and gitea_app_admin_password is not undefined and not None %}
- GITEA__security__INSTALL_LOCK=true
{% endif %}
{% if gitea_smtp_host is not undefined and not None and gitea_smtp_username is not undefined and not None and gitea_smtp_password is not undefined and not None %}
- GITEA__mailer__ENABLED=true
- GITEA__service__ENABLE_NOTIFY_MAIL=true
- GITEA__mailer__FROM=gitea@{{ server_domain }}
- GITEA__mailer__TYPE=smtp
- GITEA__mailer__HOST={{ gitea_smtp_host }}
- GITEA__mailer__IS_TLS_ENABLED={{ (gitea_smtp_force_tls is not undefined and not None) | ternary(gitea_smtp_force_tls,'false') }}
- GITEA__mailer__USER={{ gitea_smtp_username }}
- GITEA__mailer__PASSWD={{ gitea_smtp_password }}
{% endif %}
networks:
- "{{ docker_swarm_public_network_name }}"
- backend
ports:
- "127.0.0.1:2222:22"
db:
image: postgres:13
volumes:
- db:/var/lib/postgresql/data
networks:
- backend
environment:
- POSTGRES_USER={{ gitea_db_username }}
- POSTGRES_PASSWORD={{ gitea_db_password }}
- POSTGRES_DB={{ gitea_db_database }}
volumes:
data:
db:
networks:
"{{ docker_swarm_public_network_name }}":
external: true
backend:

View file

@ -1,10 +1,10 @@
# landingpage
The public face of my server.
Not much to see here honestly,
just a few simple lines of html explaining what this server is about and how to contact me.
I don't see anybody else benefiting massively from this role but me,
but if you want the same web presence go for it I suppose 😉
## Defaults
@ -31,7 +31,6 @@ The docker image version to be used in stack creation.
subdomain_alias: www
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `www.yourdomain.com` -
if this option is not set it will be served on `landingpage.yourdomain.com` instead.


@ -1,11 +1,11 @@
---
# never got around to removing the master tag from the images
landingpage_version: master
landingpage_version: latest
landingpage_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
landingpage_use_https: true
landingpage_autoupdate: true
# the subdomain link landingpage will be reachable under
subdomain_alias: www


@ -3,15 +3,15 @@
ansible.builtin.file:
path: "{{ landingpage_upstream_file_dir }}"
state: directory
mode: '0755'
become: yes
mode: "0755"
become: true
listen: "update landingpage upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ landingpage_upstream_file_dir }}/upstream.json"
become: yes
become: true
listen: "update landingpage upstream"
# figure out if upstream id exists
@ -22,7 +22,7 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: yes
become: true
listen: "update landingpage upstream"
# upstream already exists, patch it
@ -31,7 +31,7 @@
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: yes
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update landingpage upstream"
@ -40,14 +40,13 @@
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ landingpage_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (landingpage_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: yes
curl -X POST -H "Content-Type: application/json" -d @{{ landingpage_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (landingpage_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update landingpage upstream"
- name: Ensure upstream directory is gone again
ansible.builtin.file:
path: "{{ landingpage_upstream_file_dir }}"
state: absent
become: yes
become: true
listen: "update landingpage upstream"


@ -1,14 +1,11 @@
---
galaxy_info:
author: Marty Oehme
description: Installs my personal public facing landing page as a docker stack service
license: GPL-3.0-only
min_ansible_version: 2.9
min_ansible_version: "2.9"
galaxy_tags: []
dependencies:
- docker
- docker-swarm
- caddy
- caddy_id


@ -7,7 +7,7 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: yes
become: true
notify: "update landingpage upstream"
- name: Deploy landingpage to swarm
@ -17,8 +17,7 @@
prune: yes
compose:
- "{{ stack_compose }}"
become: yes
become: true
tags:
- docker-swarm
notify: "update landingpage upstream"


@ -4,7 +4,7 @@ services:
app:
image: "{{ stack_image }}:{{ landingpage_version }}"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "localhost"]
test: ["CMD", "wget", "--spider", "-q", "127.0.0.1"]
interval: 1m
timeout: 10s
retries: 3
@ -12,6 +12,11 @@ services:
entrypoint: sh -c "/docker-entrypoint.sh nginx -g 'daemon off;'"
networks:
- "{{ docker_swarm_public_network_name }}"
{% if landingpage_autoupdate is defined and landingpage_autoupdate %}
deploy:
labels:
- shepherd.autoupdate=true
{% endif %}
networks:
"{{ docker_swarm_public_network_name }}":


@ -1,7 +1,6 @@
---
stack_name: landingpage
stack_image: "registry.gitlab.com/cloud-serve/landing"
stack_image: "ghcr.io/marty-oehme/page"
stack_compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"


@ -0,0 +1,19 @@
---
linkding_version: latest-plus # plus contains self-archiving possibilities with singlefile
linkding_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
linkding_use_https: true
linkding_autoupdate: true
# the subdomain link linkding will be reachable under
subdomain_alias: links
# initial superuser creation
linkding_username: linkdinger
linkding_password: linkdingerpass123
# should we back up the data?
linkding_backup_enable: true
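# 6-field go-cron expression, first field is seconds (here: daily at 03:45)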
linkding_backup_cron: 0 45 3 * * *


@ -1,18 +1,18 @@
## Register reverse proxy
- name: Ensure upstream directory exists
ansible.builtin.file:
path: "{{ blog_upstream_file_dir }}"
path: "{{ linkding_upstream_file_dir }}"
state: directory
mode: '0755'
become: yes
listen: "update blog upstream"
mode: "0755"
become: true
listen: "update linkding upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ blog_upstream_file_dir }}/upstream.json"
become: yes
listen: "update blog upstream"
dest: "{{ linkding_upstream_file_dir }}/upstream.json"
become: true
listen: "update linkding upstream"
# figure out if upstream id exists
- name: check {{ stack_name }} upstream
@ -22,8 +22,8 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: yes
listen: "update blog upstream"
become: true
listen: "update linkding upstream"
# upstream already exists, patch it
- name: remove old {{ stack_name }} upstream
@ -31,23 +31,22 @@
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: yes
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update blog upstream"
listen: "update linkding upstream"
# upstream has to be created
- name: add {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ blog_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (blog_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: yes
listen: "update blog upstream"
curl -X POST -H "Content-Type: application/json" -d @{{ linkding_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (linkding_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update linkding upstream"
- name: Ensure upstream directory is gone again
ansible.builtin.file:
path: "{{ blog_upstream_file_dir }}"
path: "{{ linkding_upstream_file_dir }}"
state: absent
become: yes
listen: "update blog upstream"
become: true
listen: "update linkding upstream"


@ -0,0 +1,11 @@
---
galaxy_info:
author: Marty Oehme
description: Installs linkding as a docker stack service
license: GPL-3.0-only
min_ansible_version: "2.9"
galaxy_tags: []
dependencies:
- docker-swarm
- caddy_id


@ -1,5 +1,5 @@
---
## install blog container
## install linkding container
- name: Check upstream status
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
@ -7,18 +7,17 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: yes
notify: "update blog upstream"
become: true
notify: "update linkding upstream"
- name: Deploy blog to swarm
- name: Deploy linkding to swarm
community.general.docker_stack:
name: "{{ stack_name }}"
state: present
prune: yes
compose:
- "{{ stack_compose }}"
become: yes
become: true
tags:
- docker-swarm
notify: "update blog upstream"
notify: "update linkding upstream"


@ -0,0 +1,46 @@
services:
app:
image: "{{ stack_image }}:{{ linkding_version }}"
healthcheck:
test: ["CMD", "curl", "--fail", "http://127.0.0.1:9090/health"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
networks:
- "{{ docker_swarm_public_network_name }}"
volumes:
- data:/etc/linkding/data
environment:
- "LD_SUPERUSER_NAME={{ linkding_username }}"
- "LD_SUPERUSER_PASSWORD={{ linkding_password }}"
{% if linkding_autoupdate is defined and linkding_autoupdate %}
deploy:
labels:
- shepherd.autoupdate=true
{% endif %}
{% if backup_enable is not undefined and not false and linkding_backup_enable is not undefined and not false %}
backup:
image: mazzolino/restic
environment:
- "TZ={{ restic_timezone }}"
# go-cron starts w seconds
- "BACKUP_CRON={{ linkding_backup_cron }}"
- "RESTIC_REPOSITORY={{ restic_repo }}"
- "AWS_ACCESS_KEY_ID={{ restic_s3_key }}"
- "AWS_SECRET_ACCESS_KEY={{ restic_s3_secret }}"
- "RESTIC_PASSWORD={{ restic_pass }}"
- "RESTIC_BACKUP_TAGS=linkding"
- "RESTIC_BACKUP_SOURCES=/volumes"
volumes:
- data:/volumes/linkding_data:ro
{% endif %}
volumes:
data:
networks:
"{{ docker_swarm_public_network_name }}":
external: true


@ -0,0 +1,38 @@
{
"@id": "{{ stack_name }}_upstream",
{% if server_domain is not undefined and not none %}
"match": [
{
"host": [
{% if subdomain_alias is not undefined and not none %}
"{{ subdomain_alias }}.{{ server_domain }}"
{% else %}
"{{ stack_name }}.{{ server_domain }}"
{% endif %}
]
}
],
{% else %}
"match": [
{
"path": [
{% if subdomain_alias is not undefined and not none %}
"/{{ subdomain_alias }}*"
{% else %}
"/{{ stack_name }}*"
{% endif %}
]
}
],
{% endif %}
"handle": [
{
"handler": "reverse_proxy",
"upstreams": [
{
"dial": "{{ stack_name }}_app:9090"
}
]
}
]
}


@ -0,0 +1,6 @@
---
stack_name: linkding
stack_image: "ghcr.io/sissbruecker/linkding"
stack_compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"


@ -27,6 +27,6 @@ The docker image version to be used in stack creation.
subdomain_alias: rss
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `rss.yourdomain.com` -
if this option is not set it will be served on `miniflux.yourdomain.com` instead.


@ -1,5 +1,4 @@
---
miniflux_version: latest
miniflux_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
@ -9,6 +8,8 @@ miniflux_use_https: true
# the subdomain link miniflux will be reachable under
subdomain_alias: rss
miniflux_autoupdate: true
# Should ideally be overwritten in encrypted group/host vars
miniflux_admin_username: myadmin
miniflux_admin_password: mypassword


@ -3,15 +3,15 @@
ansible.builtin.file:
path: "{{ miniflux_upstream_file_dir }}"
state: directory
mode: '0755'
become: yes
mode: "0755"
become: true
listen: "update miniflux upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ miniflux_upstream_file_dir }}/upstream.json"
become: yes
become: true
listen: "update miniflux upstream"
# figure out if upstream id exists
@ -22,7 +22,7 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: yes
become: true
listen: "update miniflux upstream"
# upstream already exists, patch it
@ -31,7 +31,7 @@
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: yes
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update miniflux upstream"
@ -40,14 +40,13 @@
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ miniflux_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (miniflux_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: yes
curl -X POST -H "Content-Type: application/json" -d @{{ miniflux_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (miniflux_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update miniflux upstream"
- name: Ensure upstream directory is gone again
ansible.builtin.file:
path: "{{ miniflux_upstream_file_dir }}"
state: absent
become: yes
become: true
listen: "update miniflux upstream"


@ -1,14 +1,11 @@
---
galaxy_info:
author: Marty Oehme
description: Installs miniflux as a docker stack service
license: GPL-3.0-only
min_ansible_version: 2.9
min_ansible_version: "2.9"
galaxy_tags: []
dependencies:
- docker
- docker-swarm
- caddy
- caddy_id


@ -7,7 +7,7 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: yes
become: true
notify: "update miniflux upstream"
- name: Deploy miniflux to swarm
@ -17,8 +17,7 @@
prune: yes
compose:
- "{{ stack_compose }}"
become: yes
become: true
tags:
- docker-swarm
notify: "update miniflux upstream"


@ -24,6 +24,11 @@ services:
{% else %}
- "BASE_URL={{ (miniflux_use_https == True) | ternary('https', 'http') }}://localhost/{{ (subdomain_alias is not undefined and not none) | ternary(subdomain_alias, stack_name) }}"
{% endif %}
{% if miniflux_autoupdate is defined and miniflux_autoupdate %}
deploy:
labels:
- shepherd.autoupdate=true
{% endif %}
db:
image: postgres:11


@ -1,5 +1,4 @@
---
stack_name: miniflux
stack_image: "miniflux/miniflux"


@ -27,8 +27,8 @@ The docker image version to be used in stack creation.
subdomain_alias: prm
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `prm.yourdomain.com` (personal relationship manager) -
if this option is not set it will be served on `monica.yourdomain.com` instead.
```
@ -38,14 +38,14 @@ monica_db_password: mymonicadbpassword
```
Set the default username and password combination on first container start.
If loading from an existing volume this does nothing, otherwise it sets the
first user so you can instantly log in.
```
monica_app_disable_signups: true
```
Sets the behavior on the login screen ---
if set to true (default) will not let anyone but the first user sign up,
who automatically becomes an administrative user.
If set to false will allow multiple users to sign up on the instance.
@ -57,13 +57,13 @@ monica_app_weather_api_key: <your-darksky-key>
If `monica_app_geolocation_api_key` is set, Monica will translate addresses
input into the app to geographical latitude/longitude data.
It requires an api key from https://locationiq.com/, which is free for
10,000 daily requests.
Similarly, if `monica_app_weather_api_key` is set, monica will (afaik) show
weather data for the location of individual contacts.
It requires an API key from https://darksky.net/dev/register, where
1,000 daily requests are free.
Be aware, however, that since darksky's sale to Apple, no new API signups are possible.
To use this feature, `monica_app_geolocation_api_key` must also be filled out.
@ -71,8 +71,8 @@ To use this feature, `monica_app_geolocation_api_key` must also be filled out.
monica_mail_host: smtp.eu.mailgun.org
monica_mail_port: 465
monica_mail_encryption: tls
monica_mail_username:
monica_mail_password:
monica_mail_from: monica@yourserver.com
monica_mail_from_name: Monica
monica_mail_new_user_notification_address: "{{ caddy_email }}"
@ -81,5 +81,5 @@ monica_mail_new_user_notification_address: "{{ caddy_email }}"
Sets up the necessary details for Monica to send out registration and reminder e-mails.
Requires an smtp server set up, most easily doable through things like mailgun or sendgrid.
Variables should be relatively self-explanatory,
with `monica_mail_new_user_notification_address` being the address the notifications should be sent _to_,
so in all probability some sort of administration address.


@ -1,5 +1,4 @@
---
monica_version: latest
monica_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
@ -19,8 +18,8 @@ monica_db_password: mymonicadbpassword
#monica_app_weather_api_key:
#monica_mail_host: smtp.eu.mailgun.org
#monica_mail_username:
#monica_mail_password:
monica_mail_port: 465
monica_mail_encryption: tls
#monica_mail_from: monica@yourserver.com


@ -3,15 +3,15 @@
ansible.builtin.file:
path: "{{ monica_upstream_file_dir }}"
state: directory
mode: '0755'
become: yes
mode: "0755"
become: true
listen: "update monica upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ monica_upstream_file_dir }}/upstream.json"
become: yes
become: true
listen: "update monica upstream"
# figure out if upstream id exists
@ -22,7 +22,7 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: yes
become: true
listen: "update monica upstream"
# upstream already exists, patch it
@ -31,7 +31,7 @@
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: yes
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update monica upstream"
@ -40,14 +40,13 @@
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ monica_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (monica_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: yes
curl -X POST -H "Content-Type: application/json" -d @{{ monica_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (monica_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update monica upstream"
- name: Ensure upstream directory is gone again
ansible.builtin.file:
path: "{{ monica_upstream_file_dir }}"
state: absent
become: yes
become: true
listen: "update monica upstream"


@ -1,14 +1,11 @@
---
galaxy_info:
author: Marty Oehme
description: Installs monica as a docker stack service
license: GPL-3.0-only
min_ansible_version: 2.9
min_ansible_version: "2.9"
galaxy_tags: []
dependencies:
- docker
- docker-swarm
- caddy
- caddy_id


@ -4,9 +4,8 @@
ansible.builtin.package:
name: "openssl"
state: present
become: yes
become: true
tags:
- apt
- download
- packages


@ -12,8 +12,7 @@
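# generate a random Laravel-style application key ("base64:" prefix plus 32 random bytes)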
ansible.builtin.shell: echo -n 'base64:'; openssl rand -base64 32
register: monica_app_key
- set_fact:
monica_app_key={{ monica_app_key.stdout }}
- set_fact: monica_app_key={{ monica_app_key.stdout }}
## install container
- name: Check upstream status
@ -23,7 +22,7 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: yes
become: true
notify: "update monica upstream"
- name: Deploy to swarm
@ -33,8 +32,7 @@
prune: yes
compose:
- "{{ stack_compose }}"
become: yes
become: true
tags:
- docker-swarm
notify: "update monica upstream"


@ -1,5 +1,4 @@
---
stack_name: monica
stack_image: "monica"


@ -4,13 +4,14 @@ A full office suite and groupware proposition,
though its main draw for most is the file synchronization abilities.
AKA Dropbox replacement.
This software can grow enormous and enormously complicated,
this Ansible setup role concentrates on 3 things:
- a stable and secure base setup from the official docker container
- automatic setup of an email pipeline so users can reset passwords and be updated of changes
- the ability to use S3 object storage as the primary way of storing users' files
The rest should be taken care of either automatically,
or supplied after the fact (if using different plugins or similar).
## Defaults
@ -32,7 +33,7 @@ nextcloud_version: fpm
nextcloud_db_version: 12
```
The docker image version to be used in stack creation.
The role sets up the `php-fpm` version of the official Nextcloud image.
That means, Caddy is used in front as the server which presents all pages
and access to files, the Nextcloud image itself only serves as the PHP data store.
@ -41,17 +42,17 @@ If changing the version to one relying on Nextcloud's in-built Apache server,
take care to change where the upstream proxy is pointing to since the Caddy server in front loses its meaning.
The second variable points to the docker image that should be used for the PostgreSQL database,
with 12 pre-filled as default.
You can set this to `latest`, but should take care to migrate the database correctly when an update rolls around,
or it _will_ destroy your data at some point.
Generally, it seems easier to pin this to a specific version and then only update manually.
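For example, keeping the database pinned to a concrete release line (the value this role's defaults use further down):
```yml
nextcloud_db_version: 16-alpine
```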
```yml
subdomain_alias: files
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `files.yourdomain.com` -
if this option is not set it will be served on `nextcloud.yourdomain.com` instead.
If you change or delete this, you should also change what `nextcloud_trusted_domains` points to.
@ -66,7 +67,7 @@ nextcloud_db_password: secretnextcloud
```
Sets the default username and password for application and database.
All of these variables are necessary to circumvent the manual installation process
you would usually be faced with on first creating a Nextcloud instance.
Ideally change all of these for your personal setup,
but it is especially important to change the app admin login data since they are what is public facing.
@ -77,7 +78,7 @@ nextcloud_trusted_domains: "{{ subdomain_alias }}.{{ server_domain }}"
The domains that are allowed to access your Nextcloud instance.
Should point to any domains that you want it accessible on,
can be a space-separated list of them.
Take care to include the sub-domain if you are accessing it through one of them.
[Further explanation](https://blog.martyoeh.me/posts/2021-11-18-nextcloud-trusted-domains/).
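For instance, allowing access through two hostnames (hypothetical domains):
```yml
nextcloud_trusted_domains: "files.example.com cloud.example.com"
```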
@ -130,7 +131,6 @@ If your details are correct, Nextcloud should automatically set up S3 as its pri
Be careful if you switch an existing data volume of the Nextcloud image to S3
as you will lose all access to existing files.
The files _should_ not be deleted at this point,
only access will be lost,
but you are playing with fire at this point.


@ -1,9 +1,8 @@
---
# set preferred application version
nextcloud_version: fpm-alpine
nextcloud_version: 30-fpm-alpine
# set preferred postgres version
nextcloud_db_version: 12-alpine
nextcloud_db_version: 16-alpine
nextcloud_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
@ -19,6 +18,13 @@ nextcloud_redis_password: myredispass
nextcloud_db_username: nextcloud
nextcloud_db_password: secretnextcloud
# run restic backups
nextcloud_backup_enable: true
nextcloud_backup_cron: 0 30 3 * * *
nextcloud_php_memory_limit: 5G # maximum ram php may use
nextcloud_php_upload_limit: 15G # maximum size of (web) uploaded files
# if you wish to access your nextcloud instance from the reverse proxy
nextcloud_trusted_domains: "{{ subdomain_alias }}.{{ server_domain }}"
@ -31,7 +37,6 @@ nextcloud_smtp_authtype: LOGIN
# nextcloud_smtp_password: <smtp-password>
nextcloud_smtp_from_address: noreply
nextcloud_smtp_from_domain: "{{ server_domain }}"
# the following block is required *fully* for primary object storage
# nextcloud_s3_host: s3.eu-central-1.wasabisys.com
# nextcloud_s3_bucket: nextcloud
@ -41,4 +46,3 @@ nextcloud_smtp_from_domain: "{{ server_domain }}"
# nextcloud_s3_ssl: true
# nextcloud_s3_region: eu-central-1
# nextcloud_s3_usepath_style: true


@ -1,15 +1,35 @@
:80 {
root * /var/www/html
file_server
{
servers {
trusted_proxies static 10.0.0.0/8
}
}
:80 {
encode zstd gzip
root * /var/www/html
php_fastcgi app:9000
header {
# enable HSTS
Strict-Transport-Security max-age=31536000;
Strict-Transport-Security max-age=31536000;includeSubDomains;preload;
Permissions-Policy interest-cohort=()
X-Content-Type-Options nosniff
X-Frame-Options SAMEORIGIN
Referrer-Policy no-referrer
X-XSS-Protection "1; mode=block"
X-Permitted-Cross-Domain-Policies none
X-Robots-Tag "noindex, nofollow"
}
# client support (e.g. os x calendar / contacts)
redir /.well-known/carddav /remote.php/dav 301
redir /.well-known/caldav /remote.php/dav 301
redir /.well-known/webfinger /index.php/.well-known/webfinger 301
redir /.well-known/nodeinfo /index.php/.well-known/nodeinfo 301
# Uncomment this block if you use the high speed files backend: https://github.com/nextcloud/notify_push
#handle_path /push/* {
# reverse_proxy unix//run/notify_push/notify_push.sock # I love Unix sockets, but you can do :7867 also
#}
# .htaccess / data / config / ... shouldn't be accessible from outside
@forbidden {
@ -25,8 +45,36 @@
path /occ
path /console.php
}
handle @forbidden {
respond 404
}
respond @forbidden 404
handle {
root * /var/www/html
php_fastcgi app:9000 {
# Tells nextcloud to remove /index.php from URLs in links
env front_controller_active true
env modHeadersAvailable true # Avoid sending the security headers twice
}
}
# From .htaccess, set cache for versioned static files (cache-busting)
@immutable {
path *.css *.js *.mjs *.svg *.gif *.png *.jpg *.ico *.wasm *.tflite
query v=*
}
header @immutable Cache-Control "max-age=15778463, immutable"
# From .htaccess, set cache for normal static files
@static {
path *.css *.js *.mjs *.svg *.gif *.png *.jpg *.ico *.wasm *.tflite
not query v=*
}
header @static Cache-Control "max-age=15778463"
# From .htaccess, cache fonts for 1 week
@woff2 path *.woff2
header @woff2 Cache-Control "max-age=604800"
file_server
}


@ -3,15 +3,15 @@
ansible.builtin.file:
path: "{{ nextcloud_upstream_file_dir }}"
state: directory
mode: '0755'
become: yes
mode: "0755"
become: true
listen: "update nextcloud upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ nextcloud_upstream_file_dir }}/upstream.json"
become: yes
become: true
listen: "update nextcloud upstream"
# figure out if upstream id exists
@ -22,7 +22,7 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: yes
become: true
listen: "update nextcloud upstream"
# upstream already exists, patch it
@ -31,7 +31,7 @@
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: yes
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update nextcloud upstream"
@ -40,14 +40,13 @@
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ nextcloud_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (nextcloud_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: yes
curl -X POST -H "Content-Type: application/json" -d @{{ nextcloud_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (nextcloud_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update nextcloud upstream"
- name: Ensure upstream directory is gone again
ansible.builtin.file:
path: "{{ nextcloud_upstream_file_dir }}"
state: absent
become: yes
become: true
listen: "update nextcloud upstream"


@ -1,14 +1,11 @@
---
galaxy_info:
author: Marty Oehme
description: Installs nextcloud as a docker stack service
license: GPL-3.0-only
min_ansible_version: 2.9
min_ansible_version: "2.9"
galaxy_tags: []
dependencies:
- docker
- docker-swarm
- caddy
- caddy_id


@ -7,23 +7,21 @@
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: yes
become: true
notify: "update nextcloud upstream"
- name: Ensure target directory exists
ansible.builtin.file:
path: "{{ nextcloud_upstream_file_dir }}"
state: directory
mode: '0755'
become: yes
notify: "update nextcloud upstream"
mode: "0755"
become: true
- name: Move webserver Caddyfile to target dir
ansible.builtin.copy:
src: "Caddyfile"
dest: "{{ nextcloud_upstream_file_dir }}/Caddyfile"
become: yes
notify: "update nextcloud upstream"
become: true
- name: Deploy to swarm
community.general.docker_stack:
@ -32,8 +30,6 @@
prune: yes
compose:
- "{{ stack_compose }}"
become: yes
become: true
tags:
- docker-swarm
notify: "update nextcloud upstream"


@ -7,7 +7,7 @@ services:
- backend
- "{{ docker_swarm_public_network_name }}"
healthcheck:
test: ["CMD", "wget", "--quiet", "--spider", "--tries=1", "http://localhost:2019/metrics"]
test: ["CMD", "wget", "--quiet", "--spider", "--tries=1", "http://127.0.0.1:2019/metrics"]
interval: 1m
timeout: 10s
retries: 3
@ -31,7 +31,7 @@ services:
start_period: 5m
# needed for db to be up,
# see https://help.nextcloud.com/t/failed-to-install-nextcloud-with-docker-compose/83681/15
entrypoint: sh -c "while !(nc -z db 5432); do sleep 30; done; /entrypoint.sh php-fpm"
# entrypoint: sh -c "while !(nc -z db 5432); do sleep 30; done; /entrypoint.sh php-fpm"
environment:
- NEXTCLOUD_ADMIN_USER={{ nextcloud_app_admin_username }}
- NEXTCLOUD_ADMIN_PASSWORD={{ nextcloud_app_admin_password }}
@ -41,6 +41,8 @@ services:
- POSTGRES_DB={{ nextcloud_db_username }}
- POSTGRES_USER={{ nextcloud_db_username }}
- POSTGRES_PASSWORD={{ nextcloud_db_password }}
- PHP_MEMORY_LIMIT={{ nextcloud_php_memory_limit }}
- PHP_UPLOAD_LIMIT={{ nextcloud_php_upload_limit }}
{% if nextcloud_trusted_domains is not undefined and not none %}
- NEXTCLOUD_TRUSTED_DOMAINS={{ nextcloud_trusted_domains }}
{% endif %}
@ -140,6 +142,42 @@ services:
networks:
- backend
# from https://okxo.de/speed-up-nextcloud-preview-generation-with-imaginary/
# and https://github.com/nextcloud/all-in-one/tree/main/Containers/imaginary
imaginary:
image: nextcloud/aio-imaginary:latest
environment:
- PORT=9000
healthcheck:
test: ["CMD", "/healthcheck.sh"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
command: -return-size -max-allowed-resolution 222.2 -concurrency 50 -enable-url-source -log-level debug
cap_add:
- CAP_SYS_NICE
networks:
- backend
{% if backup_enable is not undefined and not false and nextcloud_backup_enable is not undefined and not false %}
backup:
image: mazzolino/restic
environment:
- "TZ={{ restic_timezone }}"
# go-cron starts w seconds
- "BACKUP_CRON={{ nextcloud_backup_cron }}"
- "RESTIC_REPOSITORY={{ restic_repo }}"
- "AWS_ACCESS_KEY_ID={{ restic_s3_key }}"
- "AWS_SECRET_ACCESS_KEY={{ restic_s3_secret }}"
- "RESTIC_PASSWORD={{ restic_pass }}"
- "RESTIC_BACKUP_TAGS=nextcloud"
- "RESTIC_BACKUP_SOURCES=/volumes"
volumes:
- db:/volumes/nextcloud_db:ro
- data:/volumes/nextcloud_data:ro
{% endif %}
# metrics:
# image: telegraf
# hostname: "${HOSTNAME:-vmi352583.contaboserver.net}"


@ -1,5 +1,4 @@
---
stack_name: nextcloud
stack_image: "nextcloud"

roles/ntfy/README.md Normal file

@ -0,0 +1,42 @@
# ntfy
A self-hosted notification service.
It takes messages sent to the server through simple POST requests on specific topics and
blasts them out to any subscribed receiver on Android, the web, the command line, or even in other applications.
It can thus function as a simple cross-platform push message service that fits very well into unix workflows.
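Publishing is just an HTTP POST to a topic URL. A minimal sketch as an Ansible task, assuming the `push` subdomain alias from the defaults below and a hypothetical `backups` topic:
```
- name: Send a test notification through ntfy
  ansible.builtin.uri:
    url: "https://push.yourdomain.com/backups"
    method: POST
    body: "restic backup finished"
```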
## Defaults
```
ntfy_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
```
The on-target directory where the proxy configuration file should be stashed.
```
ntfy_use_https: true
```
Whether the service should be reachable through http (port 80) or through https (port 443), in which case an https certificate is provisioned.
Usually you will want this to stay `true`,
especially on the public facing web.
```
ntfy_version: latest
```
The docker image version to be used in stack creation.
```
subdomain_alias: push
```
If the deployed container should be served over a uri that is not the stack name.
By default, it will be set to `push.yourdomain.com` -
if this option is not set it will be served on `ntfy.yourdomain.com` instead.
The individual `ntfy` options that can be changed are described in detail in
[the ntfy documentation](https://ntfy.sh/docs/config/).
Together with the default variables for this role it should be easy to find a good set of settings.


@ -0,0 +1,19 @@
---
ntfy_version: latest
ntfy_upstream_file_dir: "{{ docker_stack_files_dir }}/{{ stack_name }}"
ntfy_use_https: true
subdomain_alias: push
ntfy_global_topic_limit: 15000
ntfy_visitor_subscription_limit: 30
ntfy_visitor_request_limit_burst: 60
ntfy_visitor_request_limit_replenish: "10s"
ntfy_cache_duration: "12h"
ntfy_attachment_total_size_limit: "5G"
ntfy_attachment_file_size_limit: "15M"
ntfy_attachment_expiry_duration: "5h"
ntfy_visitor_attachment_total_size_limit: "500M"
ntfy_visitor_attachment_daily_bandwidth_limit: "1G"


@ -0,0 +1,45 @@
## Register reverse proxy
- name: Ensure upstream directory exists
ansible.builtin.file:
path: "{{ ntfy_upstream_file_dir }}"
state: directory
mode: "0755"
become: true
listen: "update ntfy upstream"
- name: Update upstream template
ansible.builtin.template:
src: upstream.json.j2
dest: "{{ ntfy_upstream_file_dir }}/upstream.json"
become: true
listen: "update ntfy upstream"
# figure out if upstream id exists
- name: check {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/id/{{ stack_name }}_upstream/
changed_when: False
register: result
become: true
listen: "update ntfy upstream"
# upstream already exists, patch it
- name: remove old {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X DELETE localhost:2019/id/{{ stack_name }}_upstream/
become: true
when: (result.stdout | from_json)['error'] is not defined
listen: "update ntfy upstream"
# upstream has to be created
- name: add {{ stack_name }} upstream
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl -X POST -H "Content-Type: application/json" -d @{{ ntfy_upstream_file_dir }}/upstream.json localhost:2019/config/apps/http/servers/{{ (ntfy_use_https == True) | ternary(caddy_https_server_name, caddy_http_server_name) }}/routes/0/
become: true
listen: "update ntfy upstream"

roles/ntfy/meta/main.yml Normal file

@ -0,0 +1,11 @@
---
galaxy_info:
author: Marty Oehme
description: Installs a self-hosted push notification service through docker-swarm.
license: GPL-3.0-only
min_ansible_version: "2.9"
galaxy_tags: []
dependencies:
- docker-swarm
- caddy_id

roles/ntfy/tasks/main.yml Normal file

@ -0,0 +1,37 @@
---
- name: Ensure target directory exists
ansible.builtin.file:
path: "{{ ntfy_upstream_file_dir }}"
state: directory
mode: "0755"
become: true
- name: Move ntfy configuration file to target dir
ansible.builtin.template:
src: "server.yml.j2"
dest: "{{ ntfy_upstream_file_dir }}/server.yml"
become: true
notify: "update ntfy upstream"
## install ntfy container
- name: Check upstream status
community.docker.docker_container_exec:
container: "{{ caddy_container_id }}"
command: >
curl localhost:2019/id/{{ stack_name }}_upstream/
register: result
changed_when: (result.stdout | from_json) != (lookup('template', 'upstream.json.j2') | from_yaml)
become: true
notify: "update ntfy upstream"
- name: Deploy ntfy to swarm
community.general.docker_stack:
name: "{{ stack_name }}"
state: present
prune: yes
compose:
- "{{ stack_compose }}"
become: true
tags:
- docker-swarm
notify: "update ntfy upstream"


@ -0,0 +1,27 @@
version: '3.4'
services:
app:
image: "{{ stack_image }}:{{ ntfy_version }}"
healthcheck:
test: ["CMD", "wget", "--spider", "-q", "127.0.0.1"]
interval: 1m
timeout: 10s
retries: 3
start_period: 1m
volumes:
- "{{ ntfy_upstream_file_dir }}/server.yml:/etc/ntfy/server.yml"
- cache:/var/cache/ntfy
networks:
- "{{ docker_swarm_public_network_name }}"
command:
- serve
volumes:
cache:
networks:
"{{ docker_swarm_public_network_name }}":
external: true


@ -0,0 +1,15 @@
base-url: "https://{{ server_domain }}"
global-topic-limit: {{ ntfy_global_topic_limit }}
visitor-subscription-limit: {{ ntfy_visitor_subscription_limit }}
visitor-request-limit-burst: {{ ntfy_visitor_request_limit_burst }}
visitor-request-limit-replenish: "{{ ntfy_visitor_request_limit_replenish }}"
cache-file: "/var/cache/ntfy/cache.db"
cache-duration: "{{ ntfy_cache_duration }}"
attachment-cache-dir: "/var/cache/ntfy/attachments"
attachment-total-size-limit: "{{ ntfy_attachment_total_size_limit }}"
attachment-file-size-limit: "{{ ntfy_attachment_file_size_limit }}"
attachment-expiry-duration: "{{ ntfy_attachment_expiry_duration }}"
visitor-attachment-total-size-limit: "{{ ntfy_visitor_attachment_total_size_limit }}"
visitor-attachment-daily-bandwidth-limit: "{{ ntfy_visitor_attachment_daily_bandwidth_limit }}"
behind-proxy: true # uses 'X-Forwarded-For' Headers for individual visitors
# TODO i believe Caddy does not set the correct X-Forwarded-For header, see whoami container to check


@ -9,8 +9,6 @@
{% else %}
"{{ stack_name }}.{{ server_domain }}"
{% endif %}
,
"{{ server_domain }}"
]
}
],

roles/ntfy/vars/main.yml Normal file

@ -0,0 +1,6 @@
---
stack_name: ntfy
stack_image: "binwiederhier/ntfy"
stack_compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"

roles/restic/README.md Normal file

@ -0,0 +1,49 @@
# restic
Backup maintenance stack.
Takes care of regularly pruning the backup repository and checking its integrity.
Currently only supports S3 as a backend.
## Defaults
```yaml
restic_timezone: US/Chicago
```
The timezone to be used for the cronjob.
```yaml
restic_version: latest
```
The docker image version to be used in stack creation.
```yaml
restic_repo: s3.eu-central-1.wasabisys.com/myrepo
restic_pass: <restic-pass>
```
The repository url and the restic repository password.
See the restic documentation for more information.
```yaml
restic_s3_key: <s3-key>
restic_s3_secret: <s3-secret>
```
The restic S3 credentials, i.e. the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
```yaml
restic_prune_cron: 0 0 4 * * *
restic_forget_args: --prune --keep-last 14 --keep-daily 2 --keep-weekly 2
```
The default prune and forget cronjob schedule and arguments: prune the repository every day at 4:00 AM, keeping the last 14 snapshots, 2 daily snapshots, and 2 weekly snapshots.
```yaml
restic_check_cron: 0 15 5 * * *
restic_check_args: --read-data-subset=5%
```
The default check cronjob schedule and arguments: check the repository integrity every day at 5:15 AM and, in addition to the structural checks, read a randomly chosen 5% of the data to verify its integrity.
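Either job can be overridden through your own group or host vars; a sketch that switches checking to a weekly 15% subset (go-cron expressions start with a seconds field, and these happen to match the role's shipped defaults):
```yaml
restic_check_cron: 0 30 4 * * SUN
restic_check_args: --read-data-subset=15%
```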


@ -0,0 +1,14 @@
---
restic_version: latest
# restic_repo: s3.eu-central-1.wasabisys.com/myrepo
# restic_pass: <restic-pass>
# restic_s3_key: <s3-key>
# restic_s3_secret: <s3-secret>
restic_timezone: "{{ server_timezone | default('US/Chicago') }}"
restic_prune_cron: 0 0 4 * * *
restic_forget_args: --prune --keep-last 14 --keep-daily 2 --keep-weekly 2
restic_check_cron: 0 30 4 * * SUN
restic_check_args: --read-data-subset=15%


@ -0,0 +1,10 @@
---
galaxy_info:
author: Marty Oehme
description: Installs a restic-based backup maintenance stack. Only supports S3 atm.
license: GPL-3.0-only
min_ansible_version: "2.9"
galaxy_tags: []
dependencies:
- docker-swarm


@ -0,0 +1,11 @@
---
- name: Deploy restic to swarm
community.general.docker_stack:
name: "{{ stack_name }}"
state: present
prune: yes
compose:
- "{{ stack_compose }}"
become: true
tags:
- docker-swarm


@ -0,0 +1,30 @@
services:
prune:
image: "{{ stack_image }}:{{ restic_version }}"
hostname: docker
environment:
- "TZ={{ restic_timezone }}"
- "SKIP_INIT=true"
- "RUN_ON_STARTUP=true"
# go-cron starts w seconds
- "PRUNE_CRON={{ restic_prune_cron }}"
- "RESTIC_FORGET_ARGS={{ restic_forget_args }}"
- "RESTIC_REPOSITORY={{ restic_repo }}"
- "AWS_ACCESS_KEY_ID={{ restic_s3_key }}"
- "AWS_SECRET_ACCESS_KEY={{ restic_s3_secret }}"
- "RESTIC_PASSWORD={{ restic_pass }}"
check:
image: "{{ stack_image }}:{{ restic_version }}"
hostname: docker
environment:
- "TZ={{ restic_timezone }}"
- "SKIP_INIT=true"
- "RUN_ON_STARTUP=false"
# go-cron starts w seconds
- "CHECK_CRON={{ restic_check_cron }}"
- "RESTIC_CHECK_ARGS={{ restic_check_args }}"
- "RESTIC_REPOSITORY={{ restic_repo }}"
- "AWS_ACCESS_KEY_ID={{ restic_s3_key }}"
- "AWS_SECRET_ACCESS_KEY={{ restic_s3_secret }}"
- "RESTIC_PASSWORD={{ restic_pass }}"


@ -1,9 +1,8 @@
---
stack_name: restic
stack_name: gitea
stack_image: "gitea/gitea"
stack_image: "mazzolino/restic"
stack_compose: "{{ lookup('template', 'docker-stack.yml.j2') | from_yaml }}"
gitea_git_username: git
backup_enable: true

Some files were not shown because too many files have changed in this diff.