My No‑Drama Playbook: Hosting WordPress on a VPS with Docker, Nginx, MariaDB, Redis, and Let’s Encrypt (with docker‑compose + Persistent Volumes)

So there I was again, staring at a sluggish WordPress site on a tiny VPS, thinking, there has to be a cleaner way to run this without it turning into a weekend project every time something needs updating. I’d done the classic LAMP installs, the one‑off tweaks, the frantic plugin audits after a spike in CPU. Fun in a nostalgic way, sure, but what I really wanted was a setup I could bring up from scratch in minutes, keep tidy with versioned configs, and scale up without changing the whole machine. That’s when Docker + docker‑compose became my quiet sidekick.

Ever had that moment when updates feel risky? Like, “If I change this one PHP setting, will I break the site?” Or, “If I move servers, what gets left behind?” The thing I love about containerizing WordPress is that it gives you a simple promise: bake your logic into compose files, keep your data on persistent volumes, and keep Nginx, MariaDB, and Redis in their lanes. When you do that, patches feel routine, migrations are less scary, and SSL renewals stop waking you up at 2 a.m.

In this post, I’ll walk you through the whole playbook: the why behind this architecture, how I lay out docker‑compose with Nginx, PHP‑FPM, MariaDB, Redis, and Let’s Encrypt, how I keep data safe with persistent volumes, and a few performance and security touches that reduce drama. Think of it like us sitting down with coffee, opening a terminal, and building something solid together.

Why Docker on a VPS for WordPress?

I’ll start with a quick story. One of my clients had a classic “pets, not cattle” VPS: everything hand‑tuned over two years, and nobody dared touch PHP or Nginx configs because the last time someone did, the homepage 500’d. When we moved them to Docker with compose, the first big win was psychological: configs lived in version control, the runtime was predictable, and every container had one job. If we changed PHP settings, we changed the PHP‑FPM container. If we tightened Nginx, we touched only Nginx. That clarity matters.

There are a few quiet advantages here. First, portability: if you ever switch VPS providers, you copy your compose files and your volumes and you’re home by dinner. Second, separation of concerns: WordPress code stays with PHP‑FPM, database state stays in MariaDB’s volume, cache logic stays in Redis, and TLS material stays where Nginx expects it. Third, speed of recovery: want to test a new Nginx config? Spin up a staging stack on a different port in seconds without colliding with the main site.

Could you do this in a traditional setup? Of course. But here’s the thing: containers bake repeatability into your day‑to‑day. Changes become explicit. Backups become a policy, not a hope. And when you use persistent volumes well, the “stateless vs. stateful” boundary lines up with how you think about risk.

The little architecture that could: Nginx, PHP‑FPM, MariaDB, Redis

Let’s get the lay of the land. Nginx is our front door and traffic cop. It terminates TLS, serves static files, and forwards PHP requests upstream to the WordPress container running PHP‑FPM. MariaDB stores your posts, pages, settings—basically the soul of your site. Redis keeps hot objects in memory so WordPress doesn’t hit the database on every page load. And Let’s Encrypt gives us free SSL, renewed automatically, because no one wants to be that person with the expired cert banner.

Think of it like a cozy café. Nginx is the barista, greeting requests and directing them quickly. PHP‑FPM is the kitchen where WordPress lives, assembling pages on demand. MariaDB is the pantry with everything stored neatly. Redis is the heat lamp—keeping frequently requested bits warm so they’re ready fast. And Let’s Encrypt is the lock and alarm on the front door. When the roles are this clear, debugging is less of a guessing game.

In my experience, the biggest trap is letting logs and config sprawl across the host. With compose, we’ll keep configs in an easy folder structure, mount them into containers, and put data in named volumes. That way, your repo tracks “how it runs,” and your volumes store “what it knows.”

Prereqs and quick prep on the VPS

Domain, DNS, and ports

Before you touch Docker, make sure your domain’s A (and AAAA if you’re using IPv6) records point to the VPS. Only ports 80 and 443 need to be publicly exposed. Everything else can live on Docker’s internal network. If you use a cloud firewall or ufw, allow those two and keep SSH locked down.

Install Docker and docker‑compose plugin

Most modern distros have up‑to‑date packages, or you can use Docker’s official repos. If you’re new to compose, the official docker‑compose docs are tidy and worth a skim.

# Ubuntu example
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

sudo usermod -aG docker $USER
# Re-login or `newgrp docker` to apply

Folder layout and docker‑compose.yml

I like a simple working directory so future me can remember what I did. Something like this:

~/wp-stack/
  docker-compose.yml
  nginx/
    conf.d/
      site.conf
  data/  # optional home for bind mounts, if you ever trade named volumes for host paths

We’ll use named volumes for persistence so Docker manages the storage paths. If you prefer bind mounts for backups, that’s fine too, but named volumes are tidy and let you move hosts without caring where the files live on disk.

docker‑compose.yml

Here’s a clean starting point. Replace example.com with your domain and set real passwords.

version: '3.9'

services:
  nginx:
    image: nginx:alpine
    depends_on:
      - wordpress
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - wp_data:/var/www/html
      - letsencrypt:/etc/letsencrypt
      - acme:/var/lib/letsencrypt
      - nginx_logs:/var/log/nginx
    networks:
      - web

  wordpress:
    image: wordpress:php8.2-fpm
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_NAME=wordpress
      - WORDPRESS_DB_USER=wpuser
      - WORDPRESS_DB_PASSWORD=supersecret
      - WP_REDIS_HOST=redis
      - WP_REDIS_PORT=6379
    volumes:
      - wp_data:/var/www/html
    depends_on:
      - db
    networks:
      - web

  db:
    image: mariadb:10.11
    environment:
      - MARIADB_DATABASE=wordpress
      - MARIADB_USER=wpuser
      - MARIADB_PASSWORD=supersecret
      - MARIADB_ROOT_PASSWORD=evenmoresecret
    command: ['mysqld', '--innodb-file-per-table=1', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - web

  redis:
    image: redis:alpine
    command: ['redis-server', '--appendonly', 'yes']
    volumes:
      - redis_data:/data
    networks:
      - web

  certbot:
    image: certbot/certbot
    volumes:
      - wp_data:/var/www/html
      - letsencrypt:/etc/letsencrypt
      - acme:/var/lib/letsencrypt
    # Clear the image entrypoint so full "certbot ..." commands can be passed
    # to "docker compose run"; the profile keeps "docker compose up" from
    # starting this one-shot container.
    entrypoint: []
    profiles:
      - tools
    networks:
      - web

networks:
  web:
    driver: bridge

volumes:
  wp_data:
  db_data:
  redis_data:
  letsencrypt:
  acme:
  nginx_logs:

A couple of quiet but important details: the wordpress service uses the PHP‑FPM variant of the WordPress image, not Apache. Nginx will be our web server, and it will pass PHP to wordpress:9000. Redis persists data to disk so you keep your cache across restarts. MariaDB gets a named volume so your data lives beyond container lifetimes. And logs have a dedicated mount so you can rotate them on the host if you prefer.
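One more refinement worth considering: on first boot, WordPress can come up before MariaDB has finished initializing and log connection errors while it retries. A healthcheck on the db service, plus a condition on the wordpress side, makes the ordering explicit. This is a sketch against the official mariadb image, which ships a healthcheck.sh helper; if you adopt it, swap the wordpress service’s list-style depends_on for the map form shown here.

```yaml
# Optional additions to merge into the existing services in docker-compose.yml
services:
  db:
    healthcheck:
      test: ['CMD', 'healthcheck.sh', '--connect', '--innodb_initialized']
      interval: 10s
      timeout: 5s
      retries: 5

  wordpress:
    depends_on:
      db:
        condition: service_healthy
```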

Want to skim the official image docs later? The WordPress Docker image page is a nice reference.

Nginx config and first run

Nginx server block

Let’s drop a sane Nginx config into nginx/conf.d/site.conf. We’ll serve HTTP first so we can issue our initial Let’s Encrypt cert via webroot. Then we’ll flip the switch to 443 with TLS.

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    root /var/www/html;
    index index.php index.html index.htm;

    # ACME challenge for Let's Encrypt
    location ^~ /.well-known/acme-challenge/ {
        allow all;
        default_type 'text/plain';
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass wordpress:9000;
        fastcgi_read_timeout 60s;
    }

    # Basic security hardening
    client_max_body_size 64m;
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-Content-Type-Options nosniff always;
    add_header Referrer-Policy no-referrer-when-downgrade always;
}

Bring the stack up so Nginx can serve the ACME challenge over HTTP:

docker compose up -d

Issue the first TLS cert

With DNS pointing to the VPS and port 80 open, we’ll generate certs using the certbot container and the webroot method. Replace emails and domains first.

docker compose run --rm certbot \
  certbot certonly --webroot \
  -w /var/www/html \
  -d example.com -d www.example.com \
  --email [email protected] --agree-tos --no-eff-email

If that succeeds, your certs land in the letsencrypt volume. Now we add the TLS server block and redirect HTTP to HTTPS. Update nginx/conf.d/site.conf to something like this:

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    root /var/www/html;
    index index.php index.html index.htm;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ACME challenge even on HTTPS (renewals might hit both)
    location ^~ /.well-known/acme-challenge/ {
        allow all;
        default_type 'text/plain';
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~* \.(?:css|js|jpg|jpeg|gif|png|svg|ico|webp|avif)$ {
        expires 7d;
        add_header Cache-Control "public, max-age=604800, immutable";
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass wordpress:9000;
        fastcgi_read_timeout 60s;
    }

    # A few helpful headers
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-Content-Type-Options nosniff always;
    add_header Referrer-Policy no-referrer-when-downgrade always;
}

Reload Nginx by recreating the container:

docker compose up -d nginx

For automated renewals, a daily cron hit is plenty; certbot renew only replaces certificates that are close to expiry, so running it often is harmless. The certbot container shares the Let’s Encrypt volumes, so renewals write to the same place. The webroot must still be reachable over HTTP during renewal, and the job should cd into the stack directory first so docker compose can find the compose file.

# Crontab example: run daily at 03:17
17 3 * * * cd ~/wp-stack && docker compose run --rm certbot certbot renew --quiet && docker compose exec -T nginx nginx -s reload

If you want a friendlier walkthrough of Certbot options, the official Certbot site is a nice refresher.

Persistent volumes: where your data really lives

Here’s where the calm comes from. WordPress core, themes, and plugins live in wp_data. Your database state is in db_data. Redis keeps its cache in redis_data. Certificates live in letsencrypt and acme. If you ever migrate hosts, you can snapshot those volumes and move them along with your compose files, and it all snaps back into place like Lego.

I remember a weekend when a client wanted to move from a bursty VPS to a quieter one with more predictable CPU. We tarred up the named volumes, copied them to the new server, brought up the exact same docker‑compose.yml, updated DNS, and that was that. No hand‑installing PHP extensions, no guessing which ini file had the upload limit. Everything lived next to the repo with the exact config we’d been running.

Backups you actually trust

There are many ways to do backups here. My rule: separate concerns. Snapshot db_data frequently (dump logical backups too), archive wp_data regularly (especially if you accept uploads), and keep your Let’s Encrypt material safe. If you prefer object storage, I’ve had a great time using restic to push encrypted snapshots offsite. I wrote a friendly, step‑by‑step deep dive on that if you want a blueprint: Offsite Backups Without the Drama with Restic/Borg to S3‑Compatible Storage.

For quick, pragmatic backups, a pair of commands goes a long way. Dump the database and archive wp_data. You can wrap this into a cron or GitHub Action if your host allows it.

# DB dump
docker compose exec -T db mariadb-dump -u wpuser -psupersecret --databases wordpress | gzip > /backups/wp-$(date +%F).sql.gz

# WordPress files (themes, plugins, uploads)
docker run --rm \
  -v "$(docker volume inspect --format '{{.Mountpoint}}' "$(basename "$PWD")_wp_data")":/src \
  -v /backups:/dest \
  alpine sh -c "cd /src && tar czf /dest/wp-files-$(date +%F).tgz ."

If you use bind mounts instead of named volumes, backups are even simpler from the host, but you trade a bit of portability. Pick what matches your team’s habits.
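To keep the backup directory from growing forever, a small prune step helps. This is a minimal sketch: the filename patterns match the dump commands above, and the 14-day retention default is an assumption to tune against your storage and how often your content changes.

```shell
# prune_backups DIR DAYS
# Deletes backup archives older than DAYS days from DIR, matching the
# names produced by the dump commands above.
prune_backups() {
  dir="$1"
  days="${2:-14}"
  find "$dir" -maxdepth 1 -type f \
    \( -name 'wp-*.sql.gz' -o -name 'wp-files-*.tgz' \) \
    -mtime +"$days" -print -delete
}

# Example: prune_backups /backups 14
```

You can call this from the same cron job that takes the dumps, right after the dump commands succeed.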

Redis Object Cache and performance niceties

Turning on Redis object caching is a tiny change that pays off immediately on dynamic sites. Inside WordPress, you can install the “Redis Object Cache” plugin, point it at redis:6379, and click enable. Because we already set WP_REDIS_HOST and WP_REDIS_PORT in the environment, it usually Just Works once the plugin is active. I remember flipping this on for a busy WooCommerce shop and watching queries per page drop dramatically, which also calmed down CPU spikes during promos.
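If the plugin doesn’t pick up the connection settings from the environment on its own, you can pin the constants explicitly through the official image’s WORDPRESS_CONFIG_EXTRA variable, which appends lines verbatim to wp-config.php. A sketch to merge into the wordpress service (shown in map form for the multi-line value):

```yaml
  wordpress:
    environment:
      # Appended to wp-config.php by the official image's entrypoint
      WORDPRESS_CONFIG_EXTRA: |
        define( 'WP_REDIS_HOST', 'redis' );
        define( 'WP_REDIS_PORT', 6379 );
```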

Beyond Redis, a few under‑the‑radar wins add up. First, let Nginx set far‑future caching headers for static assets like CSS, JS, and images—your users’ browsers won’t ask for them every time. Second, keep PHP‑FPM’s worker counts sensible; the defaults are fine on small boxes, but if you have more cores and RAM, you can nudge pm.max_children and friends via a pool config override (these are PHP‑FPM pool settings, not php.ini directives) to match your traffic. Third, be mindful of heavyweight plugins or page builders; the containers won’t save you from inefficient code, but they make it easier to isolate and measure.
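If you do raise worker counts, note that the pm.* knobs live in the FPM pool config rather than php.ini. A hypothetical override file, mounted into the container, keeps the change versioned; the filename zz-tuning.conf and the values below are assumptions to size against your RAM (each PHP worker can easily use 50–100 MB):

```ini
; php/zz-tuning.conf -- hypothetical override, mounted into the wordpress
; container at /usr/local/etc/php-fpm.d/zz-tuning.conf (loaded last)
[www]
pm = dynamic
pm.max_children = 10
pm.start_servers = 3
pm.min_spare_servers = 2
pm.max_spare_servers = 5
```

The matching volume line on the wordpress service would be `- ./php/zz-tuning.conf:/usr/local/etc/php-fpm.d/zz-tuning.conf:ro`.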

If you’re into the nitty‑gritty of HTTP speedups later, once your stack is stable you can explore enabling HTTP/2 and even HTTP/3/QUIC on Nginx and your CDN. That’s a story for another day, but keep it in the back of your mind as a tasteful final polish when the basics are steady.

Uploads, timeouts, and other quality‑of‑life tweaks

In Nginx, we bumped client_max_body_size to 64m as a practical default; adjust it for bigger uploads. For media‑heavy sites, consider offloading large media to object storage to keep your VPS snappy; it also lightens backups and reduces disk pressure. On the PHP side, raise post_max_size and upload_max_filesize via a custom ini file if you hit limits. Because this is Docker, you can mount that ini into the WordPress container and keep the changes versioned along with your compose file.
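As a concrete sketch, an upload override might look like this. The filename uploads.ini is an assumption; the conf.d directory is where the official PHP image loads extra ini files from:

```ini
; php/uploads.ini -- mounted into the wordpress container at
; /usr/local/etc/php/conf.d/uploads.ini
; Keep these in sync with Nginx's client_max_body_size (64m above)
upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 256M
max_execution_time = 120
```

Mount it with `- ./php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini:ro` on the wordpress service and recreate the container to apply it.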

Security hygiene without drama

Security is mostly a game of guardrails you barely notice day‑to‑day. Keep SSH locked down to keys, let only ports 80 and 443 through, and patch your base images when you bump versions in compose. If you ever expose phpMyAdmin or similar, put it behind authentication or a VPN. Also, don’t run everything as root on the host—Docker will keep most of your surface area inside containers, which helps a lot.

For TLS, we already have Let’s Encrypt running via webroot. You can tighten ciphers and enable HSTS after you confirm everything is working. The important part is making renewal automatic, and having Nginx reload quietly when certs change. If you ever change domains or add subdomains, just re‑issue with certbot and reload.
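When you’re ready to tighten TLS, a few lines in the 443 server block go a long way. This is a sketch of common hardening; enable HSTS only after confirming everything serves HTTPS cleanly, since browsers cache the header for the full max-age:

```nginx
    # Additions to the 443 server block, once HTTPS is confirmed working
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    add_header Strict-Transport-Security "max-age=31536000" always;
```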

I’ve seen people try to “bake certs” into images—resist that urge. Certificates are secrets and should live on volumes, not inside images, for both security and flexibility.

Updates without breaking your weekend

The calm way to update? Bump your image tags one component at a time, recreate the service, and watch logs. Start with non‑DB components like nginx and wordpress. Then, when you’re ready, schedule a MariaDB major version bump and take a fresh dump first. If something misbehaves, you can roll the tag back and re‑up in under a minute.

WordPress core and plugins still update inside wp_data as usual. Because that volume persists, you don’t lose your changes when containers restart. When you need to test a new theme or plugin stack, clone the directory, run a staging compose on an alternate port, and point a staging subdomain there. Pressure test, then roll changes to production without surprises.
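The staging trick in practice: a separate compose project name keeps containers, networks, and volumes isolated, and a small override file remaps the published ports. The filename and ports here are assumptions:

```yaml
# docker-compose.staging.yml -- override that remaps the published ports
services:
  nginx:
    ports:
      - '8080:80'
      - '8443:443'
```

Bring it up with `docker compose -p wp-staging -f docker-compose.yml -f docker-compose.staging.yml up -d`. Because the project name differs, the named volumes are fresh copies, so the staging site starts clean without touching production data.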

Monitoring and troubleshooting like a calm pro

When something feels off, start with container logs. Nginx logs land in the nginx_logs volume. PHP‑FPM and WordPress logs will appear in docker logs for the wordpress service unless you change it. MariaDB logs often tell you if a slow query is dragging a page down. Redis logs are usually quiet unless memory is tight.

In a pinch, I attach a shell to the WordPress container and run wp‑cli to inspect the site, clear caches, or nudge a plugin. If you don’t have wp‑cli, a quick docker exec with curl against the localhost endpoint can show you whether the app is responsive without going through the network edge.

If you’re the dashboard type, it’s easy to glue on basic uptime and system metrics with lightweight tools later. For now, keep an eye on CPU, RAM, and disk, and watch your error logs after changes. The best troubleshooting tip I can give you is to change one thing at a time and keep your configs in git so you can diff what changed when.

Extra niceties I reach for

Once the foundation is solid, a couple of tasteful additions make life even easier. A staging compose file with a different project name and ports is great for testing PHP upgrades. A periodic database dump job inside a tiny alpine container keeps backups fresh even if you forget. And when traffic grows, moving MariaDB to its own VPS with the same volume strategy is straightforward—you just point WORDPRESS_DB_HOST to the new endpoint.
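That periodic dump job can live right in the compose file as a small side service. This is a sketch: the daily interval, the /backups bind mount, and reusing the mariadb image (for its bundled mariadb-dump client) are all choices to adjust. Note the $$ escapes, which stop compose from interpolating the shell’s variables:

```yaml
  db-backup:
    image: mariadb:10.11
    depends_on:
      - db
    volumes:
      - ./backups:/backups
    networks:
      - web
    entrypoint: /bin/sh
    command: >
      -c 'while true; do
            mariadb-dump -h db -u wpuser -psupersecret wordpress
              | gzip > "/backups/wp-$$(date +%F).sql.gz";
            sleep 86400;
          done'
```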

Another quiet win: set up proper asset optimization and a CDN when the time is right. Keep your origin (this VPS) lean, and let the edge serve the heavy static bits. Your Redis cache will thank you, and your visitors on slow networks will feel the difference.

Step‑by‑step first install recap

Let me stitch it all together in a calm flow you can follow without second‑guessing yourself. First, point DNS to the VPS, open ports 80 and 443, and install Docker with the compose plugin. Second, create the working directory, drop in the docker‑compose.yml and the Nginx site.conf with a plain HTTP server block. Third, bring the stack up and confirm you can load the domain over HTTP. Fourth, run the certbot container with the webroot method to issue your certs. Fifth, switch Nginx to the HTTPS server block with http2, set caching headers, and reload. Sixth, log into WordPress at /wp‑admin and install the Redis Object Cache plugin; enable it and watch query counts drop. Seventh, schedule cert renewals and back up db_data and wp_data on a cadence that matches your content changes.

And finally, take a breath. The hard part is behind you. From here on, updates are just a tag change and a compose up away.

Troubleshooting the top three gotchas

1) Certbot can’t validate the domain

Nine times out of ten, DNS wasn’t updated or a proxy is in front rewriting paths. Hit http://yourdomain/.well-known/acme-challenge/test.txt and confirm you can serve a file from /var/www/html/.well-known/acme-challenge. If you can’t, double‑check the Nginx location block and that port 80 is open.

2) WordPress installer can’t connect to the database

Make sure WORDPRESS_DB_HOST points to db:3306 and that the MariaDB variables match exactly in both services. If you changed the database name mid‑flight, the installer will complain until you align everything or create the database manually.

3) Redis plugin says “object cache not running”

Confirm the plugin is installed and enabled, and that WP_REDIS_HOST is redis in the wordpress service. If you attached Redis late, try flushing the cache from the plugin and watch docker logs for the redis service for any permission or memory issues.

Wrap‑up: a calm stack you can trust

I’ve been down just about every WordPress hosting path you can imagine, and this one keeps me sane. Docker and compose make the moving parts explicit, Nginx and PHP‑FPM do their jobs without drama, MariaDB sits on a durable volume, Redis keeps things snappy, and Let’s Encrypt removes a whole category of operational anxiety. The best part is how portable it feels—you’re not married to a particular VPS forever, and migrations feel like a checklist, not a cliff.

If you only remember three things, let it be this: keep configs versioned, keep state on persistent volumes, and make backups a habit rather than a hope. When you get the basics right, optimization becomes fun rather than urgent.

Hope this walkthrough helped you build something solid. If you try this stack and run into an edge case I didn’t cover, drop a note and I’ll happily add a section in a future update. Until then, enjoy the quiet joy of a WordPress that just… runs.

Frequently Asked Questions

How big a VPS do I need for this stack?

Great question! For light traffic, 1 vCPU and 1–2 GB of RAM runs fine with Redis enabled. If you’re running WooCommerce, bump to 2 vCPU and 4 GB for breathing room. The nice thing is you can resize the VPS later; Docker doesn’t care, and your volumes stay intact.

Can I host multiple WordPress sites on the same VPS?

Absolutely. You can either run multiple wordpress services with separate db and volumes, or spin a second docker-compose project with a different project name. Update Nginx with multiple server blocks, issue certs for each domain, and keep each site’s wp_data and db_data in separate volumes to avoid mixing content.

What’s the safe way to update the stack?

Here’s the calm flow: take a fresh DB dump, then update one component at a time. Start by bumping the wordpress image tag (PHP‑FPM comes with it), bring it up, and check the site and logs. Then update Nginx. Finally, plan and test a MariaDB version bump with a backup in hand. If anything feels off, roll back the tag and retry later.