Creating DE environment 2 - Caddy web server and Authelia
Introduction
In simple terms, a web server is a system that processes requests over the network. I need one to properly serve static pages like this blog, web apps, and the web interfaces of services such as administration tools. My choice is Caddy, a powerful, enterprise-ready, open-source web server with automatic HTTPS, written in Go. Caddy is a user-friendly tool: it handles HTTPS automatically, so additional tools like certbot are not required, and its configuration file, the Caddyfile, is easy to manage. In addition, it works well with Docker and has good documentation. This is a perfect combination for me.
Authelia is an open-source authentication and authorization server. It provides 2FA and single sign-on for services via a handy web UI, and it acts as a companion for reverse proxies by allowing, denying, or redirecting requests. I will use it to secure all administration web tools, for example the Dask diagnostics dashboard, the database management tool, and cluster monitoring. I chose Authelia over the caddy-security plugin because it has better documentation and feels like a more mature project, which can be a dealbreaker when security is involved.
This post is the second one in the series because most of the services deployed in the future will have some web UI, so it is important to serve them properly and securely. It covers deploying Caddy and Authelia using docker-compose and Ansible.
Let’s get to work!
Authelia
Authelia can be deployed as a daemon or in a container. After reading the docs I would say that Authelia is 'container first' software. My architecture is also 'container first', so it fits perfectly. Integration with Caddy is a new feature, so there are no tutorials yet; however, Caddy is pretty simple, and if you know it well enough the integration with Authelia should go smoothly. Authelia is usually deployed in swarm mode, but I will run it in a standalone way because my stack uses the other hosts only for Dask computations. Every deployment guide uses Docker secrets, but those are available only in swarm mode, so I will use standard environment variables instead. Authelia also requires additional services such as PostgreSQL and Redis. I will bundle everything, including the web server, in one docker-compose file.
Docker-compose
Let's start by globally defining two networks:
- authelia-network - a network for Authelia and its dependency services: Postgres and Redis,
- caddy-network - an external network that will contain all services exposed to the internet.
It just requires adding the following lines to the compose file:
# composes/caddy-authelia/docker-compose.yaml
version: "3.8"
networks:
  authelia-network:
    name: authelia-network
  caddy-network:
    external: true
As you can see, the caddy-network is external. The reason is simple: I need this network to have the swarm scope, because I will have services running in swarm mode, for example Jupyter. I do not need the overlay driver for this, but there is still no way to define the scope of a network in docker-compose (in 2k22!). It is possible from the CLI with the command:
docker network create caddy-network --attachable --scope swarm
It creates a network with the default bridge driver, swarm scope, and manual container attachment enabled. This step can also be done with Ansible.
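To double-check that the network was created with the expected scope and driver, here is a quick verification command (assuming the network name above):
docker network inspect caddy-network --format '{{.Scope}} {{.Driver}} {{.Attachable}}'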
PostgreSQL
Authelia supports a few storage providers: PostgreSQL, SQLite3, and MySQL. My choice is PostgreSQL because it is recommended by the authors for production deployments. This backend stores preferences, 2FA devices, secrets, logs, etc., so it is an important part of the whole system. I will also set up separate PostgreSQL services in Docker later that will act as a data warehouse for my projects; the instance configured here acts only as Authelia's backend, with no other use case. This is an advantage of a containerized stack. For popular services I usually use the alpine variant of the Docker image: it is lightweight and secure. I also use expose for the 5432 port instead of ports, because there is no need to publish the port to the host machine; it just needs to be reachable by other services within the network. A volume is mounted to avoid data loss. The environment variables are just the standard credentials and database name. The service is attached to the authelia-network. The health check feature is really useful too: it checks whether the database is ready using the pg_isready command.
postgres:
  image: postgres:alpine
  container_name: authelia-postgres
  expose:
    - 5432
  volumes:
    - ./postgres:/var/lib/postgresql/data
  environment:
    POSTGRES_USER: ${POSTGRES_USERNAME}
    POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    POSTGRES_DB: ${POSTGRES_DATABASE}
  networks:
    - authelia-network
  restart: unless-stopped
  healthcheck:
    test: ["CMD", "pg_isready", "-q", "-d", "${POSTGRES_DATABASE}", "-U", "${POSTGRES_USERNAME}"]
    timeout: 45s
    interval: 10s
    retries: 10
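Once the stack is up, the status reported by this health check can be inspected; a small verification sketch assuming the container name above:
docker inspect --format '{{.State.Health.Status}}' authelia-postgres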
Redis
Authelia relies on session cookies to authenticate users; if there is no cookie for the user, it redirects to the login page. The session data has to be stored somewhere, and Authelia gives three options: memory, Redis, and Redis sentinel. Redis is recommended for production environments. The image version is alpine, as usual. The port is exposed, the volume is mounted, and the service is attached to the authelia-network. The only trick here is the password. Redis can run without any authentication by default, which would do the job here, but an additional security layer is always good. The default image has no environment variable for running Redis with authentication, but the password can be assigned to a variable and passed to the redis-server command via the requirepass option. Now Redis requires authentication.
redis:
  image: redis:alpine
  container_name: authelia-redis
  expose:
    - 6379
  volumes:
    - ./redis:/data
  environment:
    REDIS_PASSWORD: ${REDIS_PASSWORD}
  networks:
    - authelia-network
  restart: unless-stopped
  command: /bin/sh -c "redis-server --requirepass $$REDIS_PASSWORD"
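A quick way to confirm that Redis now really requires authentication (a sketch assuming the container name and variable above; the warning about passing the password on the command line can be ignored for this one-off check):
docker exec authelia-redis redis-cli ping
docker exec authelia-redis sh -c 'redis-cli -a "$REDIS_PASSWORD" ping'
The first command should be rejected with a NOAUTH error, the second one should answer PONG.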
Authelia
For the Authelia service itself, the most important things are the environment variables. The information stored in them could also be defined in configuration.yml, but it is more secure to keep it in secrets or environment variables. The environment variables I use are listed below (an example .env sketch follows the list):
- AUTHELIA_JWT_SECRET - the secret used to craft JWT tokens leveraged by the identity verification process (it can be a random string),
- AUTHELIA_NOTIFIER_SMTP_USERNAME - the username, paired with the password, sent for authentication with the SMTP server,
- AUTHELIA_NOTIFIER_SMTP_PASSWORD - the password paired with the username sent for authentication with the SMTP server,
- AUTHELIA_SESSION_SECRET - the secret key used to encrypt session data in Redis,
- AUTHELIA_SESSION_REDIS_PASSWORD - the password for authenticating with Redis,
- AUTHELIA_STORAGE_POSTGRES_DATABASE - the name of the Authelia database on the database server that the assigned user has access to,
- AUTHELIA_STORAGE_POSTGRES_USERNAME - the username paired with password to connect to the database,
- AUTHELIA_STORAGE_POSTGRES_PASSWORD - the password paired with username to connect to the database,
- AUTHELIA_STORAGE_ENCRYPTION_KEY - the encryption key used to encrypt data in the database.
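For reference, here is a minimal .env sketch with hypothetical placeholder values; the variable names match the compose file, docker-compose picks the file up automatically when it sits next to docker-compose.yaml, and the values below are obviously not real secrets:
# composes/caddy-authelia/.env (hypothetical example values)
POSTGRES_USERNAME=authelia
POSTGRES_PASSWORD=change-me-postgres
POSTGRES_DATABASE=authelia
REDIS_PASSWORD=change-me-redis
JWT=long-random-string-for-jwt-secret
SESSION=long-random-string-for-session-secret
STORAGE_ENCRYPTION_KEY=random-key-of-at-least-20-characters
SMTP_USERNAME=bulbasaur@brozen.best
SMTP_PASSWORD=change-me-smtp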
The second important thing is the depends_on parameter: the Authelia container won't be started until the PostgreSQL and Redis containers have been started. Authelia also needs to be attached to the caddy-network.
authelia:
  image: authelia/authelia:latest
  container_name: authelia
  expose:
    - 9091
  depends_on:
    - postgres
    - redis
  volumes:
    - ./authelia:/config
  environment:
    AUTHELIA_JWT_SECRET: ${JWT}
    AUTHELIA_NOTIFIER_SMTP_USERNAME: ${SMTP_USERNAME}
    AUTHELIA_NOTIFIER_SMTP_PASSWORD: ${SMTP_PASSWORD}
    AUTHELIA_SESSION_SECRET: ${SESSION}
    AUTHELIA_SESSION_REDIS_PASSWORD: ${REDIS_PASSWORD}
    AUTHELIA_STORAGE_POSTGRES_DATABASE: ${POSTGRES_DATABASE}
    AUTHELIA_STORAGE_POSTGRES_USERNAME: ${POSTGRES_USERNAME}
    AUTHELIA_STORAGE_POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    AUTHELIA_STORAGE_ENCRYPTION_KEY: ${STORAGE_ENCRYPTION_KEY}
  networks:
    - caddy-network
    - authelia-network
  restart: unless-stopped
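Note that the short depends_on syntax only waits for the containers to be started, not for the services inside them to be ready. A possible refinement, which I don't use in my final compose and which assumes a Compose implementation that supports the long depends_on syntax (the Compose Spec, not the classic 3.x schema), would be to wait for a healthy PostgreSQL:
# hypothetical alternative for the authelia service
depends_on:
  postgres:
    condition: service_healthy   # reuses the pg_isready healthcheck defined above
  redis:
    condition: service_started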
Configuration
All of Authelia's settings can be configured via the configuration.yml file. For security it is important to prepare this file carefully and not make a mess here. Luckily the documentation is very useful and informative. Note that a lot of configuration options are already passed via environment variables in docker-compose.
Theme, server, logging
The first three basic things to configure are the theme, the server, and logging. For the theme I choose the auto option, so it is light or dark depending on the current browser theme. Authelia runs its own internal web server; in containerized environments the host should typically be 0.0.0.0 so that other containers can reach it. The port should be the same as the one exposed in the compose file. The logs need a defined level: trace, debug, info, warn, or error. For a production environment the info level should be fine (note that the trace level can generate a large number of logs and should not be enabled in production). I will write the logs to stdout only.
# composes/caddy-authelia/authelia/configuration.yml
theme: auto
server:
  host: 0.0.0.0
  port: 9091
  path: ""
log:
  level: info
TOTP
Authelia supports time-based one-time passwords as a 2FA method, and I will use it. The defaults here are chosen for the best compatibility with authenticator apps, so I won't change anything except the issuer, which is displayed in the app and which I set to my authentication domain.
totp:
  issuer: whoareyou.brozen.best
  period: 30
  skew: 1
Authentication backend
Authelia uses username and password pairs for first-factor authentication. The backend can be either LDAP or a file. LDAP is of course the recommended option for a production environment; however, my whole architecture will have only one user, so LDAP would be overkill and I will use the file backend. I will also disable the password reset option in the UI because I don't need it. The path to the users file points into the /config directory mounted in the compose file, so it must match.
authentication_backend:
  password_reset:
    disable: true
  file:
    path: /config/users.yml
The users.yml file should at least include the username, a password hashed with one of the supported algorithms, an email, and the groups the user belongs to (the group list can also be empty). The example file looks like this (a way to generate the password hash is shown after it):
# composes/caddy-authelia/authelia/users.yml
users:
  charizard:
    disabled: false
    displayname: "Charizard"
    # Password is authelia
    password: "$6$rounds=50000$BpLnfgDsc2WD8F2q$Zis.ixdg9s/UOJYrs56b5QEZFiZECu0qZVNsIYxBaNJ7ucIL.nlxVCT5tqh8KHG8X4tlwCFm5r6NTOZZ5qRFN/" # yamllint disable-line rule:line-length
    email: charizard@brozen.best
    groups:
      - admins
      - dev
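The hash itself can be generated with the Authelia image; this is a sketch assuming an Authelia 4.x image, where the hash-password command is available:
docker run --rm authelia/authelia:latest authelia hash-password 'yourpassword'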
Access control
Access control is the primary authorization system. It lets the user define rule-based access policies: if someone doesn't meet the conditions specified for a resource, access is not granted. The default policy should be set to deny. The rules have a lot of configuration options; the criteria can be built from specific domains, domains matched with a regex, resources, subjects, networks, or even HTTP methods. I will mostly set rules per domain, restricting the example service to the admins group. The authentication portal itself should use the bypass policy so users can reach the login page. My other services will use the two_factor policy.
access_control:
  default_policy: deny
  rules:
    - domain: whoareyou.brozen.best
      policy: bypass
    - domain: example-service.brozen.best
      policy: two_factor
      subject:
        - "group:admins"
Regulation
Regulation is also an important part: it helps prevent brute-force attacks. If someone makes too many authentication attempts, Authelia can ban the account temporarily. The max_retries option is the number of failed login attempts after which a user may be banned, find_time is the period analyzed for failed attempts, and ban_time is the time during which the user cannot log in again. The defaults are 3, 120 seconds, and 300 seconds. I will make it stricter and change them to 2, 60, and 900.
regulation:
  max_retries: 2
  find_time: 60
  ban_time: 900
Session
Here the Redis instance configured in the previous part comes into play. All of the session and cookie-related settings are configured here. The assigned domain should be the root domain or the same one Authelia is served on. Expiration is the time before the cookie and the session are destroyed, and inactivity is the period a user can be inactive before the session is destroyed. The credentials for Redis are passed via docker-compose, so only the container name and port are needed here.
session:
  name: authelia_session
  expiration: 3600
  inactivity: 300
  domain: brozen.best
  redis:
    host: authelia-redis
    port: 6379
Storage
For the storage, the situation looks similar to the session: credentials are provided via docker-compose, so just the container name and port here.
storage:
  postgres:
    host: authelia-postgres
    port: 5432
Notifier
Authelia can send messages to users to verify their identity. They can be saved to a file, which, as you can guess, is an option only for development. Alternatively, Authelia can send emails to users using an SMTP server: something lightweight and self-hosted like Postfix, a third-party paid tool like SendGrid, or even Gmail. Here the credentials are also passed via docker-compose.
notifier:
  smtp:
    sender: bulbasaur@brozen.best
    host: smtp.gmail.com
    port: 587
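Before starting the stack, the whole file can be sanity-checked with Authelia's validate-config command; this is a sketch assuming a recent Authelia 4.x image, and the exact flags may differ between releases:
docker run --rm -v $(pwd)/authelia:/config authelia/authelia:latest authelia validate-config --config /config/configuration.yml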
Caddy
Caddy also has a lot of deployment methods: officially it can be deployed with static binaries, standard package managers, or Docker, and there are community-maintained methods for tools like Homebrew, Webi, Chocolatey, Ansible, Scoop, and Termux. I wouldn't say that Caddy is a 'container first' tool like Authelia, but that is not a big problem, because the Docker deployment is also well documented. The whole deployment will be done with the docker-compose file bundled with the Authelia deployment, and the web server configuration will live in the Caddyfile.
Docker-compose
The Caddy service definition in the docker-compose file is very simple and almost directly copied from the documentation. The ports here must be published on the host machine, so ports is used instead of expose. The service is also attached to the previously defined caddy-network.
caddy:
  image: caddy:alpine
  container_name: caddy
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./Caddyfile:/etc/caddy/Caddyfile
    - ./data:/data
    - ./config:/config
  networks:
    - caddy-network
  restart: unless-stopped
Caddyfile
The Caddyfile is a convenient, human-friendly Caddy configuration format. The basic idea is that you first type the address of your site, then the features or functionality you need the site to have. The integration of Authelia with Caddy relies on forward_auth, an opinionated directive that proxies a clone of the request to an authentication gateway, which can decide whether handling should continue or the request needs to be sent to a login page. By default Caddy doesn't trust any other proxies and removes potentially fabricated headers, which is standard behaviour for proxies with good security practices and hard to get wrong. If Caddy is not the first server the clients connect to, for example when Cloudflare sits in front of Caddy, the trusted_proxies list with the trusted CIDRs may be configured, but that's not my case. Sites that don't require authentication shouldn't be configured in the reverse proxy to perform authentication with Authelia at all (for example with a bypass policy), for performance reasons. The example Caddyfile looks like this:
# composes/caddy-authelia/Caddyfile
whoareyou.brozen.best {
    reverse_proxy authelia:9091
}

example-service.brozen.best {
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://whoareyou.brozen.best/
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy example-service:9000
}
not-auth-service-example.brozen.best {
    reverse_proxy not-auth-service-example:9001
}
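After any change, I can validate the Caddyfile and reload the configuration without restarting the container; a quick sketch assuming the container name and mount path used above:
docker exec caddy caddy validate --config /etc/caddy/Caddyfile
docker exec caddy caddy reload --config /etc/caddy/Caddyfile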
Deployment
To deploy the bundle with Ansible, the first task is to recursively copy the directory with the proper permissions. Read, write & execute permissions for the owner are enough; execute is needed so that it is possible to enter the directory. The next tasks are Docker related and can be done using the community modules: the first one creates the attachable caddy-network with the swarm scope, and the second one executes the equivalent of the docker-compose up -d command.
# deploy_caddy_authelia.yaml
---
- hosts: data_master_nodes
  tasks:
    - name: Copy
      copy:
        src: ./composes/caddy-authelia/
        dest: /docker/caddy-authelia/
        directory_mode: "0700"
    - name: Create caddy-network
      community.docker.docker_network:
        name: caddy-network
        scope: swarm
        attachable: yes
    - name: Deploy caddy-authelia
      community.docker.docker_compose:
        project_src: /docker/caddy-authelia/
      register: output
And let’s run it with:
ansible-playbook deploy_caddy_authelia.yaml -u charizard
To make the served sites accessible, the proper firewall rules for the HTTP/HTTPS ports are also required. They can be set using the ufw-docker tool installed in the previous post:
ufw-docker allow caddy 443/tcp
ufw-docker allow caddy 80/tcp
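If your ufw-docker version provides the list subcommand (an assumption; check ufw-docker help), the created rules can be verified with:
ufw-docker list caddy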
Now, when I try to access a secured service, I am redirected to the authentication page and see the login prompt. After entering the credentials, the device has to be registered in an authenticator app like Google Authenticator using a QR code or a token; from then on the generated one-time codes serve as the second factor.
Conclusion
Setting up the authentication and authorization service for my web server was an educational task for me. I think I gained more knowledge about networks by reading the Authelia documentation and checking other resources to understand everything properly. I hope that my stack is now more secure. All of the web services deployed in future posts will have an entry in the Caddyfile similar to the example above. The first one is the Jupyter service for the Dask computations in the next post. Cya!