Containerizing WordPress on One VPS: My Production Docker Playbook with Traefik or Nginx

So there I was, staring at a client’s WooCommerce store late on a Friday evening, feeling that familiar hum of anticipation and mild dread. A flash sale was about to go live, and their site was still on a tiny shared host. You can probably guess what happened the last time: the homepage took forever, the checkout hiccuped, and suddenly my phone was buzzing like a beehive. That’s when I decided to move them to a single VPS and containerize everything. Not to be flashy—just to finally get some control. Ever had that moment when you know rebuilding is less scary than babysitting a fragile setup? That was me.

In this guide, I’ll walk you through how I containerize WordPress with Docker on a single VPS and wire it up behind a reverse proxy—either Traefik for easy, automated TLS and routing or Nginx if you prefer the classic way. We’ll cover the architecture, the compose files, the production hardening bits nobody mentions until it’s too late (hello backups and health checks), and a simple process for rolling out updates without drama. My goal is simple: help you build a setup that feels calm on Monday mornings.

Why Containerize WordPress on a Single VPS?

If you’ve ever upgraded PHP on a live server and felt your palms sweat, containerization is your friend. On a single VPS, Docker gives you a clean way to isolate your WordPress app, database, and caching layers. Think of it like a tidy toolbox: PHP and Nginx aren’t fighting over packages, MySQL has its own corner, and you can swap parts without disturbing the whole bench. The VPS is still one box, sure, but inside it, everything is wrapped and labeled.

In my experience, the three biggest wins are repeatability, safety, and speed. You can rebuild containers from scratch, track your infrastructure in version control, and roll forward or back with much less fuss. It’s not bulletproof—this is still a single machine—but it’s predictable. And predictability is worth gold when traffic spikes on a Saturday and you’re five minutes from dinner.

Here’s the thing: you also get a path to grow. Start with one site and one VPS. Then add a second site with a new hostname and a few extra lines in your compose. Later, move the database to a managed service or split the proxy to another machine. Containerization doesn’t lock you in; it gives you stepping stones.

The Blueprint: Components, Networks, and the Reverse Proxy Choice

Let’s sketch the shape of a production-ish WordPress on one VPS. You’ll have a reverse proxy on the front. That could be Traefik (my usual pick for single-box automation) or Nginx (rock solid, straightforward, and deeply tunable). Behind that edge, you’ll have the app stack: Nginx (as the web server for WordPress), PHP-FPM, MariaDB (or MySQL), and Redis for object caching. Traefik lives at the perimeter, speaks ACME/Let’s Encrypt for automatic certificates, and routes to the internal Nginx app. In the Nginx-only variant, the edge proxy is also your app web server and talks directly to PHP-FPM.

Networking is where the sanity comes from. I like to define two Docker networks: a public-facing network for the reverse proxy and an internal network for app components. Only the proxy binds to ports 80/443 on the VPS. Everything else stays tucked away, reachable only to its neighbors. Volumes hold state—mainly your database files and WordPress wp-content. Yes, containers are ephemeral; volumes are not. That separation keeps recoveries boring.

A tiny heads-up: you might be tempted to run everything in one container because “it works.” Resist the urge. Keeping Nginx, PHP-FPM, MariaDB, and Redis separate pays off when you update or troubleshoot. If PHP misbehaves, you don’t want to restart your database. Likewise, the reverse proxy can reload TLS without nudging the app. Small boundaries prevent big outages.
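
If you want to sanity-check that boundary once things are running, a quick look at what's actually listening does the trick. A minimal sketch, assuming you run it from the compose project directory:

# Only the reverse proxy should be publishing host ports
docker compose ps

# Confirm nothing else on the VPS is listening on 80 or 443
sudo ss -ltnp | grep -E ':(80|443)\b'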

Traefik Path: A Compose Stack That Just Gets Out of the Way

Traefik feels like a friendly concierge. It discovers containers via Docker labels, wires routes, and fetches TLS certs automatically. For a single VPS, that saves time and mistakes. I remember the first time I flipped on Traefik’s ACME resolver and saw a new certificate land in its storage file without me touching Certbot. I exhaled. Let’s build that.

The Compose file

The following is a baseline you can adapt. It uses Traefik at the edge, an Nginx app container, PHP-FPM, MariaDB, and Redis. I’ve trimmed some options for readability, but the essentials are here.

version: '3.9'

networks:
  proxy:
    # Fixed name so Traefik can be pinned to this network below
    name: proxy
  internal:
    external: false

volumes:
  db_data:
  wp_core:
  wp_content:
  traefik_data:

services:
  traefik:
    image: traefik:v2.10
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      # Pin the network Traefik uses to reach containers that sit on several networks
      - --providers.docker.network=proxy
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
      - --log.level=INFO
    ports:
      - 80:80
      - 443:443
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_data:/letsencrypt
    restart: unless-stopped
    networks:
      - proxy
    healthcheck:
      test: ["CMD", "traefik", "version"]
      interval: 30s
      timeout: 5s
      retries: 3
    labels:
      - traefik.enable=true
      # Optional: expose dashboard behind auth if you know what you're doing
      # - traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)
      # - traefik.http.routers.dashboard.service=api@internal

  nginx:
    image: nginx:alpine
    depends_on:
      - php
    volumes:
      # Shared with PHP-FPM so Nginx can serve WordPress core's static assets
      - wp_core:/var/www/html
      - wp_content:/var/www/html/wp-content
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    restart: unless-stopped
    networks:
      - proxy
      - internal
    labels:
      - traefik.enable=true
      - traefik.http.routers.wp.rule=Host(`example.com`)
      - traefik.http.routers.wp.entrypoints=websecure
      - traefik.http.routers.wp.tls.certresolver=le
      - traefik.http.services.wp.loadbalancer.server.port=80
      # Force HTTP -> HTTPS redirect
      - traefik.http.routers.wp-redirect.rule=Host(`example.com`)
      - traefik.http.routers.wp-redirect.entrypoints=web
      - traefik.http.routers.wp-redirect.middlewares=redirect-https
      - traefik.http.middlewares.redirect-https.redirectscheme.scheme=https
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://127.0.0.1/health || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3

  php:
    image: wordpress:php8.2-fpm
    environment:
      WORDPRESS_DB_HOST: mariadb:3306
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: supersecret
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_CACHE_KEY_SALT', 'example.com:');
        define('WP_REDIS_HOST', 'redis');
        define('WP_REDIS_PORT', 6379);
    volumes:
      - wp_core:/var/www/html
      - wp_content:/var/www/html/wp-content
    restart: unless-stopped
    networks:
      - internal
    healthcheck:
      # cgi-fcgi isn't in the official image; this simply checks that FPM is accepting
      # connections. For a real FPM ping, add the fcgi tools and enable ping.path.
      test: ["CMD", "bash", "-c", "</dev/tcp/127.0.0.1/9000"]
      interval: 30s
      timeout: 5s
      retries: 3

  mariadb:
    image: mariadb:10.11
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wpuser
      MARIADB_PASSWORD: supersecret
      MARIADB_ROOT_PASSWORD: rootsecret
    volumes:
      - db_data:/var/lib/mysql
    restart: unless-stopped
    command: ['--innodb-buffer-pool-size=512M', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
    networks:
      - internal
    healthcheck:
      test: ['CMD-SHELL', 'mysqladmin ping -h 127.0.0.1 -uroot -prootsecret | grep -q alive']
      interval: 30s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    command: ['redis-server', '--appendonly', 'yes']
    restart: unless-stopped
    networks:
      - internal
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 30s
      timeout: 5s
      retries: 3

Nginx app config for WordPress + FPM

The Nginx container above mounts a conf.d directory. Here’s a minimal config you can drop into ./nginx/conf.d/site.conf. Notice the tiny health endpoint for Traefik’s check and a few sensible headers.

server {
  listen 80;
  server_name example.com;
  root /var/www/html;

  # Simple health endpoint
  location /health { return 200; }

  index index.php index.html index.htm;

  location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf)$ {
    access_log off;
    expires 7d;
    add_header Cache-Control public;
    try_files $uri =404;
  }

  location / {
    try_files $uri $uri/ /index.php?$args;
  }

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php:9000;
    fastcgi_read_timeout 300s;
  }

  client_max_body_size 64m;

  add_header X-Frame-Options SAMEORIGIN always;
  add_header X-Content-Type-Options nosniff always;
  add_header Referrer-Policy no-referrer-when-downgrade;
}

I keep this version intentionally plain. Once your site is stable, you can sprinkle in micro-optimizations: gzip or Brotli (if fronted by a CDN, maybe skip), conditional caching rules for static content, and a rate limit on specific paths like xmlrpc.php. Keep an eye on simplicity—future-you will thank you.
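
As one concrete example of that polish, here is a sketch of a rate limit on the login and XML-RPC endpoints. The zone size and rate are guesses to tune, and the limit_req_zone line has to sit at the top of a conf.d file, outside any server block, because it belongs to the http context.

# ./nginx/conf.d/00-ratelimit.conf
limit_req_zone $binary_remote_addr zone=wplogin:10m rate=10r/m;

# and inside the existing server block in site.conf:
location = /xmlrpc.php {
  deny all;               # or rate limit it instead if you actually use XML-RPC
}

location = /wp-login.php {
  limit_req zone=wplogin burst=5 nodelay;
  include fastcgi_params;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  fastcgi_pass php:9000;
}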

Why Traefik feels nice for single VPS

Traefik’s auto-discovery and ACME integration let you focus on the app. Add a label, commit, deploy, and boom—new hostname works with TLS. If you’re a fan of guardrails, their docs are approachable; skim Traefik’s ACME and HTTPS guide and the basic Docker Compose docs if it’s your first time wiring networks and volumes. And when in doubt, I peek at the official WordPress Docker image notes to remind myself what environment variables are supported in each tag.
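
To make that concrete, here is roughly what a second site looks like: a few labels on a new app service. The hostname and service name are placeholders, and the container behind them follows the same nginx plus PHP-FPM pattern as above.

  blog:
    image: nginx:alpine
    networks:
      - proxy
      - internal
    labels:
      - traefik.enable=true
      - traefik.http.routers.blog.rule=Host(`blog.example.com`)
      - traefik.http.routers.blog.entrypoints=websecure
      - traefik.http.routers.blog.tls.certresolver=le
      - traefik.http.services.blog.loadbalancer.server.port=80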

Nginx Path: The Classic, Hands-On Reverse Proxy

There are times when I choose Nginx as the edge on a single VPS. Usually it’s because I want fine-grained caching rules right at the perimeter, I need super specific headers and rewrites, or I’m working in an environment where Traefik isn’t familiar. Nginx never complains; it just serves.

In this variation, there’s no separate app Nginx. The edge Nginx handles both the reverse proxy duties and the app web serving, passing PHP to FPM. TLS can be managed with Certbot or acme.sh mounted into the container, or you can run certificate automation on the host and mount the certs as read-only. Here’s a compact Compose file and a server config.

The Compose file

version: '3.9'

networks:
  web:
  internal:

volumes:
  db_data:
  wp_core:
  wp_content:
  certs:

services:
  nginx:
    image: nginx:alpine
    depends_on:
      - php
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      # Shared with PHP-FPM so Nginx can serve WordPress core's static assets
      - wp_core:/var/www/html
      - wp_content:/var/www/html/wp-content
      - certs:/etc/nginx/certs:ro
    restart: unless-stopped
    networks:
      - web
      - internal

  php:
    image: wordpress:php8.2-fpm
    environment:
      WORDPRESS_DB_HOST: mariadb:3306
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: supersecret
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_core:/var/www/html
      - wp_content:/var/www/html/wp-content
    restart: unless-stopped
    networks:
      - internal

  mariadb:
    image: mariadb:10.11
    environment:
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wpuser
      MARIADB_PASSWORD: supersecret
      MARIADB_ROOT_PASSWORD: rootsecret
    volumes:
      - db_data:/var/lib/mysql
    restart: unless-stopped
    networks:
      - internal

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    networks:
      - internal

Nginx server with TLS and FastCGI

This example expects you to populate certs in /etc/nginx/certs. You can do that via Certbot on the host, acme.sh, or a sidecar container that renews and writes to the shared volume. The TLS bit is intentionally basic—no rocket science required.

server {
  listen 80;
  server_name example.com;
  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl http2;
  server_name example.com;

  ssl_certificate     /etc/nginx/certs/fullchain.pem;
  ssl_certificate_key /etc/nginx/certs/privkey.pem;

  root /var/www/html;
  index index.php index.html;

  location / {
    try_files $uri $uri/ /index.php?$args;
  }

  location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf)$ {
    access_log off;
    expires 7d;
    add_header Cache-Control public;
    try_files $uri =404;
  }

  location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php:9000;
    fastcgi_read_timeout 300s;
  }

  client_max_body_size 64m;
  add_header X-Frame-Options SAMEORIGIN always;
  add_header X-Content-Type-Options nosniff always;
  add_header Referrer-Policy no-referrer-when-downgrade;
}

With Nginx at the edge, you have one fewer moving piece compared to Traefik. The trade-off is that you’ll do more by hand—especially ACME and virtual hosts if you add multiple domains. For folks who like to read their configs top to bottom and know exactly what’s happening, that’s a feature, not a bug.

Production Ops: TLS, Backups, Security, Monitoring

Once the app is up, the real game begins: keeping it boring. Boring in a good way. The kind where updates happen mid-week and you still go to lunch on time. Here’s how I make that happen on a single VPS.

TLS and ACME that you don’t dread

If you’re on Traefik, ACME is largely handled by labels and a resolver. Make sure port 80 is open for the HTTP-01 challenge, and confirm your DNS A record points at the VPS. It’s worth five minutes to scan Traefik’s ACME docs so you know where certs are stored and how renewals are logged. On Nginx, pick a renewal tool you like—Certbot or acme.sh are both fine—and script it so you don’t rely on memory the next time a cert expires.
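
For the Nginx path, the fiddly part is getting renewed certificates into the certs volume and reloading. Here is one sketch, written as a Certbot deploy hook. The volume name carries your compose project prefix (I'm assuming a project called wp), so check it with docker volume ls, and run the reload from the project directory.

#!/usr/bin/env bash
# /etc/letsencrypt/renewal-hooks/deploy/push-certs.sh  (a sketch)
set -euo pipefail

# Copy the renewed cert into the shared "certs" volume via a throwaway container
docker run --rm \
  -v /etc/letsencrypt:/le:ro \
  -v wp_certs:/dst \
  alpine sh -c 'cp -L /le/live/example.com/fullchain.pem /le/live/example.com/privkey.pem /dst/'

# Tell Nginx to pick up the new files (run from the compose project directory)
docker compose exec nginx nginx -s reload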

Backups you can restore in your sleep

Backups are a story you only care about after the plot twist. I learned this the hard way when a plugin update went sideways and a client asked how fast we could rewind. On a single VPS with Docker, your targets are clear: the database volume and the wp-content volume. My nightly routine is a mysqldump to a timestamped file, plus a tarball of wp-content, both shipped off-server. Weekly, I test a restore to a little throwaway VM or a spare port on the same VPS with the DNS turned off—just to make sure the process is a muscle, not a fantasy.
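
Here is the shape of that nightly routine as a sketch. Service names and credentials come from the example compose file above; the off-site step is just a placeholder for whichever tool you prefer.

#!/usr/bin/env bash
# Nightly backup sketch. Run it from the compose project directory; the
# wp_wp_content volume name assumes a project called "wp" (check yours with
# `docker volume ls`).
set -euo pipefail
STAMP=$(date +%F_%H%M)
DEST=/var/backups/wordpress
mkdir -p "$DEST"

# 1. Dump the database from inside the MariaDB container
docker compose exec -T mariadb \
  mysqldump --single-transaction -uwpuser -psupersecret wordpress > "$DEST/db_$STAMP.sql"

# 2. Tar the wp-content volume via a throwaway container
docker run --rm -v wp_wp_content:/data:ro -v "$DEST":/backup alpine \
  tar czf "/backup/wp-content_$STAMP.tar.gz" -C /data .

# 3. Ship both off-server with whatever you trust (rsync, rclone, restic, ...)
# rclone copy "$DEST" remote:wordpress-backups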

If you’re curious about resilient, application-consistent database snapshots—especially when you graduate beyond mysqldump—have a look at this deep dive on taking application-consistent hot backups with LVM snapshots. It pairs nicely with a containerized stack, and it’s saved me more than once.

Security that doesn’t slow you down

Think in layers. Only the proxy should bind public ports. Put the database and Redis on the internal network only. Set strong database credentials and avoid exposing phpMyAdmin in the open unless you’ve put it behind auth and IP allowlists. File permissions matter: the container that writes to wp-content should be the only one with write permissions there; everything else reads. Most official images run as non-root internally now, but double-check and set the user when in doubt.

At the proxy layer, add sane security headers, limit request body sizes, and consider a rate limit for login and XML-RPC endpoints. On Nginx it’s a few lines; on Traefik, middlewares make it easy. For WordPress itself, disable file editing in the dashboard, keep themes and plugins lean, and update routinely. The best security control is fewer moving parts—and that includes fewer plugins.
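
Disabling the dashboard file editor is a one-line win. With the official image you can append it to WORDPRESS_CONFIG_EXTRA; here is the php service's environment from the Traefik compose above with that extra define added.

      WORDPRESS_CONFIG_EXTRA: |
        define('WP_CACHE_KEY_SALT', 'example.com:');
        define('WP_REDIS_HOST', 'redis');
        define('WP_REDIS_PORT', 6379);
        define('DISALLOW_FILE_EDIT', true);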

Monitoring and logs that tell a story

You don’t need a full SIEM to keep an eye on a single VPS. Start with container health checks and logs. Most issues show up as a failing healthcheck or a growing error log. I like collecting logs locally with rotation and shipping important ones to a simple remote target or a hosted log service. When you’re ready, toss in a lightweight metrics stack or a hosted monitor to watch CPU, memory, and response times. Alerts should be specific: if the database goes down, tell me. Don’t page me for normal traffic spikes—the point of this setup is to handle those calmly.
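
At minimum, cap local log growth. Docker's json-file driver rotates per container, and you can set it per service (shown here) or globally in /etc/docker/daemon.json; the sizes are just sensible defaults.

    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"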

Deploying Changes Without Drama: Blue/Green on One VPS

Here’s a trick I wish I’d learned sooner. Even on a single VPS, you can do a simple blue/green release by running a second version of the app on a different internal port and switching the reverse proxy route when it’s ready. With Traefik, that’s as easy as spinning up a second nginx-php pair with labels that point to a different service name, validating on a hidden hostname, then flipping the main router label. With Nginx, you swap upstreams and reload. It’s not high ceremony, but it’s clean.
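
Here is that flow as a rough sketch for the Traefik variant. The green services, the override file, and the staging hostname are illustrative names, not a fixed convention.

# 1. Bring up the new ("green") nginx/php pair from an override file,
#    routed on a hidden hostname such as staging.example.com
docker compose -f docker-compose.yml -f docker-compose.green.yml up -d nginx-green php-green

# 2. Smoke-test it before real traffic sees it
curl -kI https://staging.example.com/

# 3. Move the Host(`example.com`) router labels onto the green pair (and off the
#    blue one), apply the change, then retire the old containers once it looks good
docker compose -f docker-compose.yml -f docker-compose.green.yml up -d
docker compose stop nginx php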

For WordPress core updates, I prefer baking updates into the image or running updates during a brief maintenance window. For plugin/theme updates, batch and test. One of my clients once ran twenty-five plugins because “more features.” We trimmed that by half. The site ran faster, the admin was lighter, and updates stopped feeling like roulette.

Scaling up from here

A single VPS can carry a lot if it’s tuned well: Redis object caching, PHP-FPM pool sizing that matches your memory, and a reverse proxy that isn’t doing cartwheels. If you do hit the ceiling, the next moves are predictable: put a CDN in front, migrate the database to a managed service, or split the proxy and app across two small VPSes. Because you’ve containerized everything, rehoming pieces is a logistics exercise rather than an overhaul.

A Few Notes That Make Real Life Easier

Let me share a handful of tiny decisions that have saved me hours:

First, keep your .env secrets out of your repo, and if you can, move database passwords to Docker secrets mounted as files your entrypoint reads. It’s not perfection, but it’s better than piling secrets into Git. Second, resist auto-updaters like Watchtower on production WordPress; it’s tempting, but auto-upgrading the wrong thing on a Saturday morning is a bad magic trick. Schedule updates and know what changed. Third, try not to layer too many caches. Redis object caching is great. A CDN can help. A full-page cache inside Nginx can be powerful—but only add it with intention so you don’t chase ghost invalidations.
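
On that first point, Compose's file-based secrets work fine without Swarm, and both the MariaDB and WordPress images accept _FILE variants of their password variables. A minimal sketch:

secrets:
  db_password:
    file: ./secrets/db_password.txt   # lives on the server, not in Git

services:
  mariadb:
    secrets:
      - db_password
    environment:
      MARIADB_PASSWORD_FILE: /run/secrets/db_password
  php:
    secrets:
      - db_password
    environment:
      WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password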

Finally, document your runbook. “How do we restore? How do we rotate certs? How do we redeploy?” Even if it’s just a README in your repo, writing it once makes your future self very happy. I keep mine in plain text with copy-paste commands. There’s no prize for doing it from memory.

Putting It All Together: Your Next Steps

Here’s how I’d approach this tomorrow if I were starting from scratch. Choose Traefik if you want automated routing and certs without fiddling; choose Nginx if you want to handcraft everything and you’re comfortable with ACME tooling. Create two Docker networks—proxy/web and internal. Stand up the database and Redis first and make sure they’re healthy. Then bring up PHP-FPM and Nginx (or just Nginx+FPM if it’s the classic path). Point your DNS to the VPS, open ports 80/443, and ensure the ACME flow works. Add a firewall rule to keep everything else closed.
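
For the firewall, something as small as ufw is enough. Keep in mind that Docker publishes 80 and 443 through iptables directly, bypassing ufw for forwarded traffic, so the real discipline is simply not publishing anything you don't want public. The SSH port here is an assumption.

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # or your actual SSH port
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable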

Then install WordPress, lock down the basics, and take your first backup the minute it’s clean. Try restoring to a temporary directory or a separate stack name. Once that’s muscle memory, you’re not stuck—you’re confident. That’s the difference. From there, rinse and repeat: small updates, small tests, small wins.
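
A restore drill can be as blunt as pointing a throwaway project name at the same compose file. The dump filename below is a placeholder for whatever your backup script produced.

# Fresh volumes, same definitions, different project name
docker compose -p wp-restore up -d mariadb

# Wait for the healthcheck to go green, then load the latest dump
docker compose -p wp-restore exec -T mariadb \
  mysql -uroot -prootsecret wordpress < /var/backups/wordpress/db_2024-05-01_0300.sql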

Common Troubleshooting Moments (and Quick Fixes)

I’ve bumped into a few patterns worth keeping handy:

When ACME fails on Traefik, it’s almost always DNS or port 80. Double-check your A record, ensure no other service is binding port 80, and watch the Traefik logs; they’re usually clear about what’s wrong. If the WordPress container can’t reach the database, test connectivity with a quick ping or mysql client inside the network. Docker DNS is reliable, but typos in service names happen. If uploads fail or are truncated, bump client_max_body_size in Nginx and make sure PHP’s post_max_size and upload_max_filesize are higher than your actual uploads.
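
For the PHP side of that, the official image reads extra ini files from /usr/local/etc/php/conf.d, so a small override plus one volume line covers it. The sizes are examples; match them to what you actually upload.

mkdir -p php
cat > php/uploads.ini <<'EOF'
upload_max_filesize = 64M
post_max_size = 64M
memory_limit = 256M
EOF

# then add one line to the php service's volumes in the compose file:
#   - ./php/uploads.ini:/usr/local/etc/php/conf.d/uploads.ini:ro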

For performance worries, start simple: enable Redis object caching, tune PHP-FPM max_children to match your memory, and watch slow logs for a day before reaching for exotica. And if something feels off after an update, roll back fast and investigate calmly. You don’t have to solve a mystery during peak traffic.

Wrap-Up: A Calm, Reliable WordPress on One VPS

Containerizing WordPress on a single VPS isn’t about being trendy. It’s about getting control over the stack so you can sleep a little better. With Docker, Traefik or Nginx, and a few clean habits—backups you’ve tested, minimal plugins, reasonable caching—you can run a production site that behaves even when traffic isn’t polite. I’ve watched nervous teams relax once they saw how predictable updates and rollbacks became. It’s not magic. It’s just putting each piece where it belongs.

If you’re migrating from a tangle of shared hosting or a one-off VPS build, take it step by step. Start with the baseline, get it stable, then layer on the polish. And when in doubt, aim for boring. Boring is fast, boring is safe, and boring gets you home for dinner.

Hope this was helpful. If you have questions or want me to take a look at your compose file, ping me. See you in the next post.

Frequently Asked Questions

Can I host more than one WordPress site on the same VPS with this setup?

Absolutely. With Traefik, add another nginx/app service or reuse the same app with a new router label and hostname; Traefik will fetch certs and route traffic. With Nginx-only, create another server block and wire it to the same PHP-FPM or a separate one if you want isolation. Just keep an eye on CPU, RAM, and database capacity as you add sites.

Does Docker slow WordPress down compared to running it directly on the VPS?

In practice, no—if you set it up sanely. The overhead is tiny compared to the wins in isolation and repeatability. Performance usually comes down to PHP-FPM tuning, Redis object caching, and avoiding heavy plugins. I’ve moved plenty of sites into containers and watched them run faster simply because the stack was cleaner.

What exactly do I need to back up, and how?

Back up two things: the database and wp-content. Nightly mysqldump plus a tar of wp-content, shipped off-server, covers most needs. Test restoring to a temporary stack so you know it works before you need it. If you want application‑consistent snapshots for busier databases, consider LVM snapshots and a short fs freeze; it pairs nicely with containers and makes restores predictable.