Multiple Websites with docker compose

When I set up my SSDNodes VPS, I decided to manage my websites in a docker compose stack, to exercise what I had learned after taking the Docker Certified Associate course on A Cloud Guru.  It didn't take long for me to have three different websites (eldon.me, aprilandtrey.us, and git.eldon.me) running on this VPS.  I had migrated these websites from my old VPS provider running WordPress, and switched to using Ghost with commentoplusplus for my blogs, and Gitea for my Git website.  All of these run behind an nginx reverse proxy.

Prerequisites

  • A VPS (Virtual Private Server) with as much virtual CPU, memory (RAM), and hard drive capacity as you can afford.  I chose an SSDNodes 2X-Large Standard VPS, with 8 virtual Intel x86_64 CPUs, 32GiB RAM, and 640GiB disk space (backed by NVMe SSDs).  This is much more than my previous VPS provider offered (2GiB RAM, 1 vCPU, and 20GiB hard drive space).  I paid for a three-year plan at $255 USD, which shakes out to about $7.08 USD per month; my previous hosting provider gave me far fewer resources for about $5 USD per month.  My SSD Node (which I've named deltachunk) is a big improvement in available resources, at a negligible increase in cost.
  • A Linux based operating system (OS).  SSDNodes gives you several options, including Alma, CentOS, Debian, Rocky, and Ubuntu Linux.  I've converted most of my systems to Arch Linux, and used Debian 10 as a basis on deltachunk.  I even wrote an article on how I repartitioned deltachunk so I could bootstrap Arch Linux from an existing Linux system.  Now it runs pure Arch Linux, with only the original MBR partition table on the KVM/qemu qcow virtual disk.
  • docker and docker-compose from the Arch Linux Community repository.  I then followed the Arch Wiki article for setting up a non-root user to run my Docker containers.  
  • An account on Docker Hub.  This is not absolutely necessary, but my docker-compose stack is comprised of official project images from Docker Hub.  I use the standard x86_64 images, which are usually based on Debian or Ubuntu, rather than any Alpine Linux based Docker images.

Configuration

daemon.json

After installing both docker and docker-compose using pacman (I actually use an AUR helper named pikaur to maintain compatibility with pacman), I configured Docker.  Since the Arch Wiki mentions the preferred place for this is /etc/docker/daemon.json, that's where I put the Docker daemon configuration.

Since I'm using btrfs on deltachunk, I put my Docker subvolume (data-root) at /data/docker, and I chose the btrfs storage driver.  Also, since I'm using systemd on deltachunk, I chose the journald log-driver as my Docker logging driver.  The Arch Wiki article describes many more ways to configure the docker.service systemd unit, but this is all I have for right now.  Here is my full daemon.json:

{
    "data-root": "/data/docker",
    "storage-driver": "btrfs",
    "log-driver": "journald"
}

docker-compose.yml and .env

The heart of my docker-compose stack is my docker-compose.yml ( YAML: YAML Ain't Markup Language) file.  This tells docker compose or docker-compose how to build and start the containers of the stack.  It has many variable dereferences, with the actual values contained in my .env file.  I tried having each container with its own environment file, but I couldn't get that to work.  So the .env contains all of the variables for all of my containers defined in docker-compose.yml.  I operate the stack from ${HOME}/docker, where both docker-compose.yml and .env exist (docker-compose will look in the current directory for these files to know what containers to manage).
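As a sketch of the .env layout, the variables look something like this (the variable names appear in docker-compose.yml below, but all of the values here are hypothetical placeholders, not my real configuration):

```shell
# .env (hypothetical placeholder values)
NGINX_VERSION=1.25
NGINX_HOST_HTTPS_PORT=0.0.0.0:443
NGINX_CONTAINER_HTTPS_PORT=443
NGINX_HOST_ETC=/data/nginx/etc
NGINX_CONTAINER_ETC=/etc/nginx
GITEA_VERSION=1.18
MARIADB_ROOT_PASSWORD=changeme
```

Since .env is read by docker-compose itself, every service's variables live side by side in this one file.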

At the top of docker-compose.yml, I define the version of this file.  Note that this is the syntax version of docker-compose.yml.  At the time I was initially developing the stack, the latest version was 3.8, so that's what I include at the top of the file:

version: "3.8"

Since this rarely needs to change, I didn't make it a separate variable in .env.

Next I defined the networks for my docker-compose stack.  I chose two: deltachunk-int (the internal network on host deltachunk), and deltachunk-ext (the external network on host deltachunk).  I then set the driver_opts (driver options) for both networks to bind them to the host's IPv4 loopback address (127.0.0.1).  This ensures containers aren't listening on my external WAN interface (which they would be without these options):

networks:
  deltachunk-ext:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "127.0.0.1"
  deltachunk-int:
    driver_opts:
      com.docker.network.bridge.host_binding_ipv4: "127.0.0.1"

Binding these to the host loopback address ensures the containers aren't accidentally accessible from the WAN/Internet.  The only container I want accessible from the Internet is my gitea container (over a non-standard SSH port), and nginx (the web server I'm using as a reverse proxy to all of my docker-hosted application containers).
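Once the stack is up, you can spot-check these bindings from the host (illustrative commands; the exact output depends on your Docker and iproute2 versions):

```shell
# Show the host-side bindings for the nginx container's published ports.
docker port nginx
# Confirm nothing unintended is listening on non-loopback addresses.
ss -tlnp | grep -v 127.0.0.1
```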

Services

The rest of my docker-compose.yml file defines several services.  These services can be backed by any number of containers (such as in a Docker Swarm or Kubernetes/K8s cluster).  For my applications, I only have one container per service.  Since this is a hobby docker-compose stack, this is all I need, and all I have configured.  I only have one node (this VPS) that I own, but with the low cost of SSDNodes, I'm considering subscribing to more as the budget allows.

Services are defined under the top-level services: directive.  All named services are indented in YAML below this top-level directive.  See the nginx service definition below.  Variables are used for the image versions (rather than setting the version to latest, I manually set the image version in .env so I'm not surprised by a potentially breaking version change), the networking ports for various services, and the volume (disk) definitions for each service.  Each service has the concept of HOST ports/volumes, relative to the deltachunk host, and CONTAINER ports/volumes, relative to the containers themselves.

nginx

Below is my nginx service definition:

services:
  nginx:
    image: nginx:${NGINX_VERSION}
    container_name: nginx
    restart: unless-stopped
    networks:
      - deltachunk-ext
      - deltachunk-int
    ports:
      - "${NGINX_HOST_HTTPS_PORT}:${NGINX_CONTAINER_HTTPS_PORT}"
      - "${NGINX_HOST_HTTP_PORT}:${NGINX_CONTAINER_HTTP_PORT}"
    volumes:
      - "${NGINX_HOST_ETC}:${NGINX_CONTAINER_ETC}:ro"
      - "${NGINX_HOST_WEB_ROOT}:${NGINX_CONTAINER_WEB_ROOT}:ro"
      - "${NGINX_HOST_CERTS_DIR}:${NGINX_CONTAINER_CERTS_DIR}:ro"
      - "${NGINX_HOST_LOG_DIR}:${NGINX_CONTAINER_LOG_DIR}"
      - "${NGINX_HOST_CACHE}:${NGINX_CONTAINER_CACHE}"
      - "${NGINX_HOST_RUN}:${NGINX_CONTAINER_RUN}"
    depends_on:
      - gitea
      - aprilandtrey
      - commento_aprilandtrey
      - eldon
      - commento_eldon
    links:
      - gitea
      - aprilandtrey
      - commento_aprilandtrey
      - eldon
      - commento_eldon

This is my most involved service definition.  Since this service is the bridge between the Internet and all of the other services, it is part of both docker-compose networks (deltachunk-int and deltachunk-ext, the internal and external networks, respectively).  The ${NGINX_HOST_HTTPS_PORT} is set to 0.0.0.0:443, so it binds to all addresses of the host and listens on host port 443 (making it reachable from the Internet).  The variable ${NGINX_CONTAINER_HTTPS_PORT} is set to 443, the standard HTTPS port.  Likewise, ${NGINX_HOST_HTTP_PORT} is set to 0.0.0.0:80, so it is reachable from the Internet/WAN, and ${NGINX_CONTAINER_HTTP_PORT} is set to 80.  The colon separating the two variable dereferences indicates the mapping between the host binding and the corresponding container port.  Without this mapping (i.e., without the HOST binding and the colon), this would be a container-only port, and anything trying to reach it would need to be on one of the networks (deltachunk-int or deltachunk-ext).
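To make the two port forms concrete, here is a minimal contrast (the values are illustrative):

```yaml
ports:
  - "0.0.0.0:443:443"   # host-mapped: reachable from the Internet on host port 443
  - "3000"              # container-only: reachable only from containers on a shared network
```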

Next are the volume definitions.  First, for the nginx container, I created an nginx Btrfs subvolume under the /data top-level subvolume (mounted at /data/nginx).  The volume definitions are in the format host_directory:container_directory:volume_options.  See my .env file; these are host:container mappings for the nginx cache, etc, log, run, and www directories, respectively.  I also have a mapping for my ACME/Let's Encrypt directory (which is managed from the deltachunk host, not in a container or docker-compose service).  I use the certbot utility with all of the websites served by nginx; it speaks the ACME protocol with Let's Encrypt to generate and sign one primary TLS certificate covering all hosts served by this web service, by way of the Subject Alternative Names (SAN) feature of TLS certificates.  Also see the note about SNI below.
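The certbot invocation for a SAN certificate looks roughly like this (a sketch run on the host, not in a container; the webroot path and domain list are illustrative, not my exact configuration):

```shell
certbot certonly --webroot -w /data/nginx/www \
  -d aprilandtrey.us -d treyandapril.us \
  -d eldon.me -d www.eldon.me -d git.eldon.me \
  --cert-name aprilandtrey.us
```

The --cert-name option is why a single live/aprilandtrey.us/ directory holds the certificate used by every site in the nginx configurations below.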

Finally, in docker-compose.yml I define the service depends_on and links.  I set the nginx service to depend on each application service, including my Ghost services (eldon [this website], and aprilandtrey), my Commento-plusplus services (commento_eldon and commento_aprilandtrey, for comments on my personal blogs), and the Gitea service (gitea).  These dependencies ensure nginx isn't started until all of these containers have been started (note that depends_on only controls startup order; it doesn't wait for the applications inside the containers to be ready).  The links section may not be necessary, but it was recommended by the docker-compose documentation, so I included it here.

The last part of setting up nginx is the configuration.  The following directory structure is set at /data/nginx/etc, which maps into the nginx container at /etc/nginx (the default nginx configuration directory):

../etc/
├─mime.types
├─nginx.conf*
├─sites-available/
│ ├─aprilandtrey.us
│ ├─commento.aprilandtrey.us
│ ├─commento.eldon.me
│ ├─eldon.me
│ └─git.eldon.me
└─sites-enabled/
  ├─aprilandtrey.us@ → ../sites-available/aprilandtrey.us
  ├─commento.aprilandtrey.us@ → ../sites-available/commento.aprilandtrey.us
  ├─commento.eldon.me@ → ../sites-available/commento.eldon.me
  ├─eldon.me@ → ../sites-available/eldon.me
  └─git.eldon.me@ → ../sites-available/git.eldon.me

This nginx service is the only web server for this entire docker-compose stack.  It relies heavily upon the Server Name Indication (SNI) feature of TLS to have multiple separate websites served from the same IP address.  I follow the typical Debian setup, with a sites-enabled directory containing symbolic links to the sites-available configuration files.  The sites-enabled directory is then included in the top-level nginx.conf configuration file:

#user www-data;
worker_processes auto;
pid /run/nginx.pid;
#include /etc/nginx/modules-enabled/*.conf;
pcre_jit on;
error_log  /var/log/nginx/error.log  notice;

events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;


    access_log  /var/log/nginx/access.log;
    sendfile        on;
    tcp_nopush     on;
    keepalive_timeout  65;
    gzip  off;
    ssl_ciphers "EECDH+AESGCM:AES256+EECDH";
    ssl_protocols TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    ssl_session_tickets off;
    server_tokens off;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s; # Google DNS
    resolver_timeout 5s;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

All paths in this file are for the mount points within the container.  If you wanted separate nginx modules, you could create the etc/modules-available/ and etc/modules-enabled/ directories, with symlinks in modules-enabled to *.conf files in modules-available, similarly to how sites-available and sites-enabled are set up.  I don't use any nginx modules, so that line is commented out.  Again, this is similar to the way Debian sets up nginx, feel free to follow a different format.  The individual site files are explained in each service section below.
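The sites-enabled symlinks follow a simple, reproducible pattern; here is a minimal sketch using a placeholder site name (the real directories live under /data/nginx/etc on the host):

```shell
# Recreate the Debian-style layout with a placeholder site name.
mkdir -p sites-available sites-enabled
touch sites-available/example.com
# Enable the site with a relative symlink, matching the tree above.
ln -sf ../sites-available/example.com sites-enabled/example.com
```

Disabling a site is just removing the symlink and reloading nginx; the configuration file itself stays in sites-available.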

gitea

When I was setting up my Gitea instance, I had migrated from a standalone (non-docker container) setup on my previous VPS.  First, I created the git user on my host system (deltachunk), and set the username, UID, and GID to Gitea-related variables in .env.  Next, I copied the Gitea data, git, and gitea directories from the old VPS to /data/gitea/ on the deltachunk host, ensuring they were owned by git:git.  Then I did a dump of the gitea database in MariaDB on the old VPS, placing the gitea-db.sql file in /data/mariadb/init (more on that in the mariadb container below).
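In shell terms, the migration looked roughly like this (a sketch: old-vps and the remote paths are placeholders, not my actual hosts):

```shell
# Create the git user on deltachunk (UID/GID are also recorded in .env).
sudo useradd --system --create-home git
# Copy the Gitea data, git, and gitea directories from the old VPS (placeholder paths).
rsync -a old-vps:/var/lib/gitea/ /data/gitea/
sudo chown -R git:git /data/gitea
# Dump the gitea database on the old VPS into the mariadb init directory.
ssh old-vps 'mysqldump gitea' > /data/mariadb/init/gitea-db.sql
```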

Here is the docker-compose.yml section for gitea:

  gitea:
    image: gitea/gitea:${GITEA_VERSION}
    container_name: gitea
    environment:
      - USER=${GITEA_USER}
      - USER_UID=${GITEA_USER_UID}
      - USER_GID=${GITEA_USER_GID}
      - GITEA__database__DB_TYPE=${GITEA_DATABASE_TYPE}
      - GITEA__database__HOST=${GITEA_DATABASE_HOST}
      - GITEA__database__USER=${GITEA_DATABASE_USER}
      - GITEA__database__PASSWD=${GITEA_DATABASE_PASS}
      - GITEA__database__NAME=${GITEA_DATABASE}
    depends_on:
      - mariadb
    restart: unless-stopped
    networks:
      - deltachunk-int
    ports:
      - "${GITEA_CONTAINER_PORT}"
      - "${GITEA_HOST_SSH_PORT}:${GITEA_CONTAINER_SSH_PORT}"
    volumes:
      - "${GITEA_CONTAINER_DATA}/.snapshots/"
      - "${GITEA_HOST_DATA}:${GITEA_CONTAINER_DATA}"
      - "${GITEA_HOST_SSH_DIR}:${GITEA_CONTAINER_SSH_DIR}"
      - "/etc/localtime:/etc/localtime:ro"
    links:
      - mariadb

The only dependency for gitea is on the mariadb service.  It also sets its network to deltachunk-int, since it should only be available on the internal network of Docker.  The ${GITEA_CONTAINER_PORT} is set container-only (without a colon), so it is only available to the internal network (nginx provides the link from the Internet/WAN to Gitea).  The only external port is the SSH_PORT, which allows SSH URLs for git remotes and clones.  The SSH_DIR is the host/container mapping of the standard .ssh/ directory, which contains the git user's public/private key pair and the list of allowed SSH public keys (.ssh/authorized_keys, which is managed within the web interface of Gitea).

The only other thing of note here is the ${GITEA_CONTAINER_DATA}/.snapshots/ volume.  This definition is necessary for Gitea to ignore the /data/gitea/.snapshots/ host directory, which is created by snapper for backup purposes (snapshots are not backups themselves, but I use borg to back up the latest read-only snapshot).  I use this volume definition again in several of my other services, because those services will not start if this container-only definition doesn't exist.  Basically, the service would see a directory with contents it doesn't understand, and fail to start.  Setting the container-only definition ensures this directory appears empty, and thus innocuous, within the container.

Next was the nginx configuration for Gitea  (at git.eldon.me, in /etc/nginx/sites-available/git.eldon.me):

server {
        listen 80;
        listen [::]:80;
        server_name git.eldon.me;
        return 301 https://git.eldon.me$request_uri;

    # Redirect non-https traffic to https
    # if ($scheme != "https") {
    #     return 301 https://$host$request_uri;
    # } # managed by Certbot

}

server {
    listen 443 ssl;
    server_name git.eldon.me;
    ssl_certificate /etc/letsencrypt/live/aprilandtrey.us/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/aprilandtrey.us/privkey.pem; # managed by Certbot

    root /var/www/git.eldon.me/;
    location / {
            client_max_body_size 1024M;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://gitea:3000;
            proxy_connect_timeout 600;
            proxy_send_timeout 600;
            proxy_read_timeout 600;
            send_timeout 600;
    }
}

This is pretty straightforward; the main thing is the proxy_pass to the Gitea application running at http://gitea:3000.  Note that gitea is the name of the Gitea container, which Docker's embedded DNS resolves to the container's address on the shared network.
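You can sanity-check that name resolution from inside the nginx container (an optional spot check; getent is present in the Debian-based nginx image):

```shell
docker exec nginx getent hosts gitea
```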

mariadb

I use MariaDB for the database backing Gitea.  Unfortunately this is one of three different database engines I use in this docker-compose stack.  I had started with Gitea on MariaDB on my old Debian 10 VPS, so I kept MariaDB in this stack.  Later on, I learned that Ghost only supports MySQL, and when I ran into an issue due to a bug in an older version of Ghost, I migrated my Ghost containers to the official MySQL database engine (now owned by Oracle) in an effort to work around that bug.  Commentoplusplus is only compatible with a specific version of PostgreSQL, so that's why I have another service running yet another database engine.  

Here is the docker-compose.yml definition for mariadb:

  mariadb:
    image: mariadb:${MARIADB_VERSION}
    container_name: mariadb
    restart: unless-stopped
    networks:
      - deltachunk-int
    environment:
      - MARIADB_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
    ports:
      - "${MARIADB_CONTAINER_PORT}"
    volumes:
      - "${MARIADB_CONTAINER_DATA_DIR}/.snapshots/"
      - "${MARIADB_HOST_DATA_DIR}:${MARIADB_CONTAINER_DATA_DIR}"
      - "${MARIADB_HOST_LOG_DIR}:${MARIADB_CONTAINER_LOG_DIR}"
      - "${MARIADB_HOST_CONF}:${MARIADB_CONTAINER_CONF}:ro"
      - "${MARIADB_HOST_INIT}:${MARIADB_CONTAINER_INIT}:ro"

It's pretty straightforward.  Again, the only available network for this service is deltachunk-int, so it's only available internally.  Also, I have my container-only .snapshots/ definition so MariaDB doesn't mistake this directory as another database directory.  The interesting thing common to all of my database services is the INIT directory, which contains SQL scripts to initialize the various databases served by the engine.  Essentially, these create empty databases if the DATA directories aren't populated.  Or, in the case of Gitea, load the initial database from the dump from the previous VPS.  These scripts are ignored once the databases are populated, but these INIT SQL scripts allow the databases to start up the first time.
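As a sketch of what such an INIT script can look like (the user name and password here are hypothetical placeholders; the real gitea-db.sql is the full dump from my old VPS):

```sql
-- Runs only on first startup, when the data directory is empty.
CREATE DATABASE IF NOT EXISTS gitea CHARACTER SET utf8mb4;
CREATE USER IF NOT EXISTS 'gitea'@'%' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON gitea.* TO 'gitea'@'%';
FLUSH PRIVILEGES;
```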

aprilandtrey

This service is for my family website, https://aprilandtrey.us (https://treyandapril.us points to the same website).  It is a Ghost application.  Because of the ease of using Docker containers, I elected to have two separate services in my docker-compose stack, rather than configuring one application for two sites (the former seemed easier to me; I don't know whether Ghost supports powering multiple domains from one service/application).

Here is my docker-compose.yml section for aprilandtrey:

  aprilandtrey:
    image: ghost:${GHOST_VERSION}
    container_name: aprilandtrey
    environment:
      database__client: ${APRILANDTREY_GHOST_DATABASE_CLIENT}
      database__connection__host: ${APRILANDTREY_GHOST_DATABASE_HOST}
      database__connection__user: ${APRILANDTREY_GHOST_DATABASE_USER}
      database__connection__password: ${APRILANDTREY_GHOST_DATABASE_PASS}
      database__connection__database: ${APRILANDTREY_GHOST_DATABASE}
      url: ${APRILANDTREY_GHOST_URL}
    restart: unless-stopped
    depends_on:
      - mysql
    networks:
      - deltachunk-int
    ports:
      - "${APRILANDTREY_GHOST_CONTAINER_PORT}"
    links:
      - mysql
    volumes:
      - "${APRILANDTREY_GHOST_HOST_DIR}:${APRILANDTREY_GHOST_CONTAINER_DIR}"

Initially this depended on MariaDB, but due to my attempted workaround for a bug in an older version of Ghost, I migrated it to MySQL.  I essentially moved the aprilandtrey and eldon databases to MySQL, and renamed variables and directories to match.  That workaround ultimately did not work; a patched version of Ghost is what really fixed it.  But according to the Ghost forums, MySQL is the only supported database engine, so I stuck with it after the migration.

The network, again, is internal-only, and the port is container-only.  The difference here is the environment variables (all set from .env).  Ghost expects these environment variables to be set in docker-compose.yml, so they're set there rather than somewhere else.

mysql

To convert my Ghost applications from MariaDB to MySQL, I first stopped both Ghost services (aprilandtrey and eldon), then took database dumps (mysqldump) of both the aprilandtrey and eldon databases before stopping the mysql service (which was using the MariaDB engine/image at the time).  Then, I renamed the /data/mysql/ Btrfs subvolume (which actually held MariaDB data) to /data/mariadb/.  Next, I created a new /data/mysql/ subvolume, and recreated the directory structure from /data/mariadb/, including the /conf, /init, and /data subdirectories.  After that, I started the mariadb service, ensuring everything came up under the new name (only Gitea uses MariaDB now).  Once mariadb and mysql were running, I could import both the aprilandtrey and eldon database dumps, and mysql was up and ready.
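The dump-and-import round trip can be sketched with docker exec (hedged: the container names match the stack, but the dump file names are placeholders):

```shell
# Dump from the mysql service while it still ran the MariaDB image.
docker exec mysql sh -c 'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" aprilandtrey' > aprilandtrey.sql
docker exec mysql sh -c 'mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" eldon' > eldon.sql
# After the subvolume rename and starting the real MySQL image, import the dumps.
docker exec -i mysql sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" aprilandtrey' < aprilandtrey.sql
docker exec -i mysql sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" eldon' < eldon.sql
```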

Here is the docker-compose.yml definition for mysql:

  mysql:
    image: mysql:${MYSQLDB_VERSION}
    container_name: mysql
    restart: unless-stopped
    environment:
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    networks:
      - deltachunk-int
    ports:
      - "${MYSQLDB_CONTAINER_PORT}"
    volumes:
      - "${MYSQLDB_CONTAINER_DATA_DIR}/.snapshots/"
      - "${MYSQLDB_HOST_DATA_DIR}:${MYSQLDB_CONTAINER_DATA_DIR}"
      - "${MYSQLDB_HOST_LOG_DIR}:${MYSQLDB_CONTAINER_LOG_DIR}"
      - "${MYSQLDB_HOST_CONF}:${MYSQLDB_CONTAINER_CONF}:ro"
      - "${MYSQLDB_HOST_INIT}:${MYSQLDB_CONTAINER_INIT}:ro"

It's pretty similar to the mariadb definition.  Before I imported the database dumps for the Ghost services, I needed to ensure the databases were initialized, by way of the SQL scripts in the INIT directories.  These are practically empty, except for creating the database user for managing each database within MySQL.  You can see the redacted contents of aprilandtrey.sql and eldon.sql initialization files on my Gitea instance.

Prior to the migration to MySQL, I did migrate from WordPress to Ghost, following the excellent Ghost guide for doing so.  Essentially, all I had to do was create the empty Ghost applications, install the Ghost plugin in WordPress on my old VPS, and import the ZIP file created from the Ghost plugin for each blog.  The only thing that didn't transfer were the comments, but I was able to extract those and put them into Commentoplusplus.

commento_aprilandtrey

Once I imported my old WordPress data into Ghost, I needed to set up comment functionality (Ghost did not include one itself when I was setting this up; I've since learned that Ghost now has this functionality natively).  After reviewing several options, I decided to use Commento, and used a script (wordpress-2-commento) to convert my WordPress comments to Commento.  I did have to tweak the Commento database records to make some of the old comments show up, as the script didn't put them in the right place, but once I did that the old comments displayed on my April and Trey blog.  Some time later, I found that Commento was no longer really maintained, and someone made a fork named Commento-plusplus.  Migrating to Commento-plusplus was as easy as changing the image definition in docker-compose.yml, and my comments now run on this fork.  The Commentoplusplus instance for this website is managed at https://commento.aprilandtrey.us.

Here is my docker-compose.yml entry for commento_aprilandtrey:

  commento_aprilandtrey:
    image:  caroga/commentoplusplus:${COMMENTO_VERSION}
    container_name:  commento_aprilandtrey
    restart:  unless-stopped
    ports:
      - "${APRILANDTREY_COMMENTO_CONTAINER_PORT}"
    environment:
      COMMENTO_ORIGIN: "${APRILANDTREY_COMMENTO_ORIGIN}"
      COMMENTO_PORT:  "${APRILANDTREY_COMMENTO_CONTAINER_PORT}"
      COMMENTO_POSTGRES:  "${APRILANDTREY_COMMENTO_URL}"
      COMMENTO_SMTP_HOST: "${GMAIL_HOST}"
      COMMENTO_SMTP_USERNAME: "${GMAIL_USER}"
      COMMENTO_SMTP_PORT: "${GMAIL_PORT}"
      COMMENTO_SMTP_PASSWORD: "${GMAIL_PASS}"
      COMMENTO_SMTP_FROM_ADDRESS:  "noreply@aprilandtrey.us"
      COMMENTO_FORBID_NEW_OWNERS: "true"
      COMMENTO_AKISMET_KEY: "${COMMENTO_AKISMET_API_KEY}"
    networks:
      - deltachunk-int
    depends_on:
      - postgresdb

This service doesn't have any volumes defined; it stores everything in the PostgreSQL database.  Thus, its only dependency is postgresdb.  The connection to the database is defined by the COMMENTO_POSTGRES environment variable; the ${APRILANDTREY_COMMENTO_URL} from .env contains the complete database connection URL.  The other critical piece is the COMMENTO_FORBID_NEW_OWNERS environment variable; this needs to be unset (or set to false) when setting up Commentoplusplus for the first time (before importing the comments from WordPress).  After Commentoplusplus is set up for a website, you can change COMMENTO_FORBID_NEW_OWNERS to true, like I have above.

Finally, I set up the nginx configuration, commento.aprilandtrey.us:

server {
        listen 80;
        listen [::]:80;
        server_name commento.aprilandtrey.us commento.treyandapril.us;
        return 301 https://commento.aprilandtrey.us$request_uri;

    # Redirect non-https traffic to https
    # if ($scheme != "https") {
    #     return 301 https://$host$request_uri;
    # } # managed by Certbot

}

server {
    listen 443 ssl;
    server_name commento.aprilandtrey.us commento.treyandapril.us;
    ssl_certificate /etc/letsencrypt/live/aprilandtrey.us/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/aprilandtrey.us/privkey.pem; # managed by Certbot

    root /var/www/aprilandtrey.us/;
    location / {
            client_max_body_size 364M;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://commento_aprilandtrey:8080;
            proxy_connect_timeout 600;
            proxy_send_timeout 600;
            proxy_read_timeout 600;
            send_timeout 600;
    }
}

postgresdb

The docker-compose.yml section for the PostgreSQL database looks like this:

  postgresdb:
    image: postgres:${POSTGRES_VERSION}
    container_name:  postgresdb
    restart: unless-stopped
    shm_size: "256MB"
    networks:
      - deltachunk-int
    environment:
      POSTGRES_PASSWORD: "${POSTGRES_PASS}"
      POSTGRES_USER: "${POSTGRES_USER}"
      PGDATA: "${POSTGRES_CONTAINER_DIR}"
    ports:
      - "${POSTGRES_CONTAINER_PORT}"
    volumes:
      - "${POSTGRES_HOST_DIR}:${POSTGRES_CONTAINER_DIR}"
      - "${POSTGRES_HOST_INIT}:${POSTGRES_CONTAINER_INIT}:ro"
      - "${POSTGRES_HOST_ETC}:${POSTGRES_CONTAINER_ETC}:ro"
      - "${POSTGRES_PASSWD}:${POSTGRES_CONTAINER_PASSWD}:ro"

To initialize the PostgreSQL database, I created aprilandtrey.sql in the INIT directory, so it could be initialized for use by the commento_aprilandtrey service.

Remaining services

The eldon service is my other Ghost instance, also backed by the mysql service, with commento_eldon for its comments.  Again, it is served to the Internet/WAN by the nginx service.  These are nearly identical to the aprilandtrey related services, only the actual container name, subvolumes, variables, and content differ.

For completeness, here is the eldon service definition in docker-compose.yml:

  eldon:
    image: ghost:${GHOST_VERSION}
    container_name: eldon
    environment:
      database__client: ${ELDON_GHOST_DATABASE_CLIENT}
      database__connection__host: ${ELDON_GHOST_DATABASE_HOST}
      database__connection__user: ${ELDON_GHOST_DATABASE_USER}
      database__connection__password: ${ELDON_GHOST_DATABASE_PASS}
      database__connection__database: ${ELDON_GHOST_DATABASE}
      url: ${ELDON_GHOST_URL}
    restart: unless-stopped
    depends_on:
      - mysql
    networks:
      - deltachunk-int
    ports:
      - "${ELDON_GHOST_CONTAINER_PORT}"
    links:
      - mysql
    volumes:
      - "${ELDON_GHOST_CONTAINER_DIR}/.snapshots/"
      - "${ELDON_GHOST_HOST_DIR}:${ELDON_GHOST_CONTAINER_DIR}"

This is the nginx configuration, /etc/nginx/sites-available/eldon.me, with the corresponding symbolic link in /etc/nginx/sites-enabled/:

server {
        listen 80;
        listen [::]:80;
        server_name eldon.me www.eldon.me;
        return 301 https://eldon.me$request_uri;

    # Redirect non-https traffic to https
    # if ($scheme != "https") {
    #     return 301 https://$host$request_uri;
    # } # managed by Certbot

}

server {
    listen 443 ssl;
    server_name eldon.me www.eldon.me;
    ssl_certificate /etc/letsencrypt/live/aprilandtrey.us/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/aprilandtrey.us/privkey.pem; # managed by Certbot

    root /var/www/eldon.me/;
    location / {
            client_max_body_size 364M;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://eldon:2368;
            proxy_connect_timeout 600;
            proxy_send_timeout 600;
            proxy_read_timeout 600;
            send_timeout 600;
    }
} 

Note that Commentoplusplus can operate on multiple domains, so I didn't need to create a separate instance for https://eldon.me.  I simply found it easier to have this blog use its own Commentoplusplus container, but that doesn't need to be the case.  Here is the docker-compose.yml definition for commento_eldon:

  commento_eldon:
    image:  caroga/commentoplusplus:${COMMENTO_VERSION}
    container_name:  commento_eldon
    restart:  unless-stopped
    ports:
      - "${ELDON_COMMENTO_CONTAINER_PORT}"
    environment:
      COMMENTO_ORIGIN: "${ELDON_COMMENTO_ORIGIN}"
      COMMENTO_PORT:  "${ELDON_COMMENTO_CONTAINER_PORT}"
      COMMENTO_POSTGRES:  "${ELDON_COMMENTO_URL}"
      COMMENTO_SMTP_HOST: "${GMAIL_HOST}"
      COMMENTO_SMTP_USERNAME: "${GMAIL_USER}"
      COMMENTO_SMTP_PORT: "${GMAIL_PORT}"
      COMMENTO_SMTP_PASSWORD: "${GMAIL_PASS}"
      COMMENTO_SMTP_FROM_ADDRESS:  "noreply@eldon.me"
      COMMENTO_FORBID_NEW_OWNERS: "true"
      COMMENTO_AKISMET_KEY: "${COMMENTO_AKISMET_API_KEY}"
    networks:
      - deltachunk-int
    depends_on:
      - postgresdb

Running the Stack

Now, with all of the services defined, databases created and populated, ports and networks set up, I was able to start the stack.  In truth, I had set these up separately, and ran the docker-compose up -d command several times as I slowly developed the stack.  With my regular user added to the docker group, I don't need to be root to launch the stack.  Both docker-compose.yml and .env are in my ${HOME}/docker subdirectory.  With my current working directory (${PWD}) in this directory, I can execute the following command:

docker-compose up -d

The -d option tells docker-compose to run these containers detached, which is another term for in the background.  When bringing these containers up for the first time, it downloads the images from Docker Hub, and then starts the containers.  Once docker-compose up -d finishes, it returns control to my running shell (zsh).  Before I do anything else, I feel it is best practice to run the following command:

watch -n1 'docker-compose ps'

Then, watch this for 15-30 seconds.  Any container in a crash loop will cycle through restarting, starting, and running states rather than holding steady at running.  Here is an example of what the output looks like:

Every 1.0s: docker-compose ps                                                                                              deltachunk: Mon Nov 28 13:29:57 2022

NAME                    COMMAND                  SERVICE                 STATUS              PORTS
aprilandtrey            "docker-entrypoint.s…"   aprilandtrey            running             127.0.0.1:49156->2368/tcp
commento_aprilandtrey   "/commento/commento"     commento_aprilandtrey   running             127.0.0.1:49155->8080/tcp
commento_eldon          "/commento/commento"     commento_eldon          running             127.0.0.1:49157->8080/tcp
eldon                   "docker-entrypoint.s…"   eldon                   running             127.0.0.1:49158->2368/tcp
gitea                   "/usr/bin/entrypoint…"   gitea                   running             0.0.0.0:redacted->22/tcp, 127.0.0.1:49153->3000/tcp
mariadb                 "docker-entrypoint.s…"   mariadb                 running             127.0.0.1:49159->3306/tcp
mysql                   "docker-entrypoint.s…"   mysql                   running             33060/tcp, 127.0.0.1:49154->3306/tcp
nginx                   "/docker-entrypoint.…"   nginx                   running             0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
postgresdb              "docker-entrypoint.s…"   postgresdb              running             127.0.0.1:49160->5432/tcp

The STATUS column for each service should stay steady at running if all is well.  If not, it's time to check the logs of the malfunctioning service; see Troubleshooting below.

Troubleshooting

If any service fails to start, or appears to be continuously restarting (see the watch command above), you can check the logs of the service to determine the cause:

docker-compose logs aprilandtrey
# OR #
docker-compose logs -f aprilandtrey

The -f option (long form:  --follow) tails the logs, so you get a complete picture of what is going on with the service as it restarts.  Common causes include a bug in a newly deployed version, or, in the case of a database service, a table or column using a deprecated type or length.  Much of the troubleshooting depends on the underlying application for the service (in my case Ghost, Gitea, or Commentoplusplus, as well as the nginx web server and the three database engines:  MariaDB, MySQL, and PostgreSQL).
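When a container is flapping, I find it helps to capture the recent log tail to a file and filter for errors.  A sketch of that triage flow; the docker-compose line needs the running stack, so it is shown commented, and the log lines below are stand-ins for real output:

```shell
# Capture the recent tail of a flapping service's logs (needs the stack running):
# docker-compose logs --no-color --tail=200 aprilandtrey > /tmp/aprilandtrey.log

# Stand-in log lines so this sketch is self-contained:
printf '%s\n' \
  '[2022-11-28 13:29:57] INFO  Ghost booted in 1.2s' \
  '[2022-11-28 13:29:58] ERROR Unable to connect to database' \
  > /tmp/aprilandtrey.log

# Filter down to error lines -- usually enough to spot why a container is restarting.
grep -E 'ERROR|FATAL' /tmp/aprilandtrey.log
```

Grepping a saved tail beats eyeballing a fast-scrolling docker-compose logs -f when the container restarts every few seconds.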

Upgrading

Over time, the applications backing this docker-compose stack will need to be updated in order to provide new features, fix bugs (including serious security vulnerabilities), etc.  The way I've structured docker-compose.yml and .env makes upgrades easy.  Once I discover a new version of any application in this stack, a simple edit of the version pin at the top of .env is all that is necessary; a docker-compose up -d then pulls any new images and restarts the relevant containers, while keeping the other services and their containers running.  Docker Hub does offer the latest tag for image releases, but I avoid it to prevent unexpected, breaking upgrades to my services.  The dependencies will also be restarted.  This is important for nginx: it needs to be restarted whenever aprilandtrey, eldon, or gitea is upgraded and restarted.  Failing to restart nginx tends to produce weird problems, like the eldon.me content showing up when navigating to https://aprilandtrey.us.
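That upgrade flow can be sketched in a few lines.  The variable name GHOST_VERSION is an assumption here (use whichever key your .env actually defines), and this demo works on a scratch copy of .env so it is safe to run anywhere:

```shell
# Work in a scratch directory with a stand-in .env for this demo.
dir=$(mktemp -d) && cd "$dir"
printf 'GHOST_VERSION=5.23.0\n' > .env

# Bump the pinned version in place (GNU sed, as shipped on Arch Linux).
sed -i 's/^GHOST_VERSION=.*/GHOST_VERSION=5.24.1/' .env
cat .env   # GHOST_VERSION=5.24.1

# Then, in the real ${HOME}/docker directory, pull the new image and
# recreate only the changed containers:
# docker-compose pull && docker-compose up -d
```

Since the tag is pinned in .env rather than hard-coded as latest, nothing changes until I deliberately edit that one line.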

Right now I'm tracking the newest stable Ghost release, major version 5.  It sees the most activity: some weeks bring more than three new minor versions (e.g., 5.21, 5.22, and 5.23), along with many patch versions (like 5.22.1, 5.23.0, 5.24.2) as bugs in new releases are identified and fixed.  Especially with Ghost, versions are often released before Docker Hub has the images built and ready for download.  Sometimes it can take several days for the Ghost images to appear on Docker Hub, while at other times they show up quite quickly.  The Docker Hub FAQ tries to explain why: mainly it comes down to donated image-build resources (i.e., hardware), and since I run x86_64 containers (the most common type of Docker image), the build queue can get quite long when many images are waiting to be built.

Because Ghost is updated so frequently, my plan is to track major version 5 and stick with it once major version 6 is released (I have no idea when that will happen).  I still get notified of new Ghost version 4 updates, but they are much less frequent and more manageable, so I expect Ghost 5 to similarly stabilize once active feature development moves to Ghost 6.

Performance

As I count it, I have nine different services running in this docker-compose stack.  Fortunately, my blogs and Git instance don't see a whole lot of traffic (I'm not in it for that), so even with nine services, system resource usage is really small: about 9 percent of available RAM (out of 32GiB), 3 percent of swap (out of 64GiB), and only 2 percent virtual CPU usage (load average of 0.14, 0.12, 0.15).
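For per-container figures, docker stats --no-stream gives a one-shot snapshot.  At the host level, the RAM percentage quoted above can be computed straight from /proc/meminfo (Linux-only), which is how I spot-check it:

```shell
# Per-container resource snapshot (needs the stack running):
# docker stats --no-stream

# Host-level RAM usage, computed from /proc/meminfo (kB fields).
awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2}
     END {printf "%.0f%% RAM in use\n", (t - a) / t * 100}' /proc/meminfo
```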

Conclusion

That's about it for my Docker Compose stack!  It has been humming along quite nicely, and updating the image versions is quick and relatively effortless.  I've only ever had my Ghost services in a crash loop once, and that was ultimately due to a bug introduced in a specific version of Ghost 4.  A patch release shortly after fixed it, but in the meantime I migrated those services to a supported database engine (MySQL).

Next Steps

  • After discovering that Ghost now has its own comment system, I need to migrate away from Commentoplusplus.  Like the Commento project it was forked from, Commentoplusplus development appears to have stalled (it has not seen an update in four months as of this writing).  Its dependence on PostgreSQL is also unfortunate: it's the database engine I'm least familiar with, and the version it's on is quite old (13.7).  While PostgreSQL 14 is available, Commentoplusplus is NOT compatible with it.  I will need to work out how to migrate from Commento to Ghost comments.  When switching from WordPress comments to Commento I found the wordpress-2-commento script, but since Commentoplusplus isn't all that popular (I never liked the way it looked at the bottom of my blog pages), I may have to develop a deep understanding of how Commentoplusplus structures its database and massage that into the corresponding Ghost comment system.  This project may take quite some time, but if I am successful I will be able to eliminate three services (commento_aprilandtrey, commento_eldon, and postgresdb).
  • The whole point of this article was to review how my docker-compose stack is set up.  It took me several months to develop, but I hadn't really reviewed it in the months since.  I needed to become familiar with it again so I could begin learning how to add the mailcow dockerized services to this stack.  Mailcow dockerized is a quick way to stand up a complete email service, with spam protection and all the modern security features you'd expect from such a service.  However, mailcow expects to stand up its own docker-compose stack, including nginx.  Rather than springing for a separate SSDNodes VPS to stand this up, I want to integrate it with my existing stack.  That will be another long and arduous process, and something I imagine is not quite supported by the mailcow community.

Please leave any comments below!