Technology

My No‑Drama Playbook for Deploying Laravel on a VPS: Nginx, PHP‑FPM, Queues/Horizon and Truly Zero‑Downtime Releases

So there I was, sipping a late coffee while a client nervously asked, “Can we deploy without taking the site down?” If you’ve ever watched a spinning loader during a deploy and prayed your users wouldn’t notice, you’re not alone. I’ve been there. I’ve pushed code and then stared at the terminal like it owed me money. Over time, I pieced together a reliable flow that keeps Laravel deployments smooth on a VPS, even when traffic spikes at the worst possible moment. In this article, I want to walk you through that playbook: Nginx as the steady front door, PHP‑FPM doing the heavy lifting, Horizon keeping queues in line, and a zero‑downtime release strategy that feels like flipping a light switch.

We’ll go step by step, but not like a dry checklist. Think of it as a friend showing you what worked, what didn’t, and the tiny details that quietly make everything robust. We’ll set expectations, align the moving parts, and get to a place where your deploys feel boring — the best compliment a deployment can get.

The Mental Model: How a Laravel App Lives on a VPS

Before touching configs, let’s agree on a mental model. Your VPS is a small apartment building. Nginx sits at the lobby door, greeting every visitor and directing them to the right room. PHP‑FPM is upstairs in the kitchen, cooking every dynamic request to order. Horizon is the team in the back room handling background jobs so the people in the lobby aren’t stuck waiting for their latte. And your deploy process is the quiet hallway swap that happens so fast the tenants don’t notice the floors got polished.

Here’s the thing: once you think of it like that, you stop stuffing everything into one big process. Nginx serves static files quickly and hands dynamic requests to PHP‑FPM. PHP‑FPM pools let you tune capacity without touching the front door. Queues keep slow tasks out of the request lifecycle. And deployments become replacing a symlink, not overwriting files in place while people are walking through the door. We’ll wire it all together gently, then tune it with a few production lessons I learned the hard way.

Nginx + PHP‑FPM: The Calm, Fast Front Door

When I set up Nginx for Laravel, my goal is simple: keep it predictable. I like clean server blocks, a clear document root, and a fastcgi pass that doesn’t try to be clever. I also avoid spraying rewrite rules everywhere. Laravel already knows how to route — we just need to hand requests to index.php safely.

Server Block I Keep Coming Back To

This server block does a few important things. It serves real files directly, hands everything else to PHP, and respects the path info without exposing it.

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    root /var/www/myapp/current/public;
    index index.php;

    # If you terminate TLS here, upgrade to HTTPS and add HSTS in SSL block
    # return 301 https://$host$request_uri;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock; # match your PHP version
        fastcgi_index index.php;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }

    location ~* \.(?:jpg|jpeg|gif|png|webp|svg|css|js|ico|woff2?)$ {
        expires 30d;
        access_log off;
        add_header Cache-Control "public, max-age=2592000";
    }

    client_max_body_size 20m; # tune for uploads

    # Useful security headers; tune for your app
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
}

Two subtle details: first, SCRIPT_FILENAME and DOCUMENT_ROOT use $realpath_root. That’s a quiet win when you deploy with symlinks because it resolves to the actual release folder. Second, I keep the cache headers for static assets generous. Mix or Vite fingerprints your files, so long cache lives are your friend.
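
Before relying on any of it, I test and reload; these two commands assume a systemd-based distro:

sudo nginx -t
sudo systemctl reload nginx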

If you’re going to terminate TLS on the server (many do), you’ll want to harden that layer as well. I’ve shared a practical checklist of the TLS 1.3, OCSP stapling, and Brotli settings for Nginx that I keep reusing. It’s the kind of thing you configure once and smile every time you run a test.

PHP‑FPM: The Kitchen That Never Panics

PHP‑FPM is where capacity planning quietly happens. In my experience, a few requests stuck on heavy inline work can starve the whole pool and block everyone else. That heavy work belongs on a queue, where workers run as separate CLI processes outside PHP‑FPM, so the front door never gets blocked. If you host more than one PHP app on the same box, give each its own pool for the same reason. Here’s a simple web pool to get started:

[www]
user = www-data
group = www-data
listen = /run/php/php8.2-fpm.sock
pm = ondemand
pm.max_children = 20
pm.process_idle_timeout = 10s
pm.max_requests = 1000

; Friendly slowlog for surprises
request_terminate_timeout = 60s
request_slowlog_timeout = 3s
slowlog = /var/log/php8.2-fpm/www-slow.log

; OPcache is configured in php.ini

Sometimes I’ll switch to pm = dynamic for long-lived workloads; sometimes I also cap pm.max_children tightly to protect memory. It’s not a law; it’s tuning. A quick sanity check is to match the number to what your RAM and PHP memory_limit can realistically support. If you want a deeper dive into how I think about pools, OPcache, and Redis for Laravel, I wrote up the production tune-up I do on every Laravel server; it might be a handy side read.
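
If you want a number instead of a guess, measure what an average PHP‑FPM worker actually uses and divide your spare RAM by it. A minimal sketch, assuming the Debian/Ubuntu process name php-fpm8.2:

# average resident memory per PHP-FPM process, in MB
ps --no-headers -o rss -C php-fpm8.2 | awk '{sum+=$1; n++} END {if (n) printf "avg %.0f MB across %d processes\n", sum/n/1024, n}'

# rough rule of thumb: pm.max_children ~ (RAM you can spare for PHP) / (average process size)
# e.g. 2048 MB spare / 90 MB per process ~ 22 children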

The Shared Folders and Permissions That Save You Headaches

Before we even deploy, set up the directory structure that makes zero-downtime feel easy. I like the classic releases/current/shared layout. It’s a trick I picked up after one too many deploys that mixed runtime files with source code.

/var/www/myapp
├── current -> /var/www/myapp/releases/2024-11-05-121500
├── releases
│   ├── 2024-11-05-121500
│   └── 2024-11-04-230501
└── shared
    ├── storage
    └── .env

Laravel’s storage and .env should live in shared, then symlinked into each release. That way you can deploy code without touching logs, caches, temporary files, or your environment configuration. It’s mundane, but it’s everything.
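
Setting that skeleton up is a one-time chore. A minimal sketch, assuming a dedicated deploy user and the storage layout Laravel expects:

# releases/, shared/, and the storage sub-folders Laravel needs at boot
sudo mkdir -p /var/www/myapp/releases
sudo mkdir -p /var/www/myapp/shared/storage/{app/public,framework/{cache/data,sessions,views},logs}
sudo touch /var/www/myapp/shared/.env

# code owned by the deploy user; the web group gets write access later, only where needed
sudo chown -R deploy:www-data /var/www/myapp
sudo chmod 640 /var/www/myapp/shared/.env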

For permissions, I keep it simple: the code belongs to a deploy user, and the web user (often www-data) has write access only to storage and bootstrap/cache. I suppose you can script this from your deploy flow, but I like to keep a tiny helper to enforce state:

# grant web user group access to writable paths
chgrp -R www-data storage bootstrap/cache
chmod -R ug+rwx storage bootstrap/cache

# ensure files aren’t world writable by mistake
chmod -R o-rwx storage bootstrap/cache

It’s boring, which is precisely why it’s good. Clean boundaries keep deployments predictable.

Zero‑Downtime Releases: The Quiet Switch

Let me tell you about the first time I swapped a symlink for a Laravel app in production. It felt like magic. No reload banner, no gateway errors, just quiet success. The trick is to prep everything in a new release folder, verify it, and then atomically switch current to point at it. If something goes wrong, you point back. No drama.

The Flow I Use (And Why It’s Calm)

I keep a small deploy script that does the same thing every time: create a timestamped release directory, rsync the code, composer install with flags that respect production, link shared storage and env, build assets if needed, warm caches, run migrations safely, then atomically swap the symlink. Here’s a friendly skeleton you can tailor:

#!/usr/bin/env bash
set -euo pipefail

APP_DIR=/var/www/myapp
RELEASES=$APP_DIR/releases
SHARED=$APP_DIR/shared
NOW=$(date +"%Y-%m-%d-%H%M%S")
NEW_RELEASE=$RELEASES/$NOW

mkdir -p "$NEW_RELEASE"

# Sync code (from CI workspace or git checkout)
rsync -a --delete --exclude=node_modules --exclude=.git ./ "$NEW_RELEASE/"

# Composer install in the release
cd "$NEW_RELEASE"
composer install --no-interaction --prefer-dist --optimize-autoloader --no-dev

# Link shared files
ln -nfs "$SHARED/.env" "$NEW_RELEASE/.env"
rm -rf "$NEW_RELEASE/storage"
ln -nfs "$SHARED/storage" "$NEW_RELEASE/storage"

# Build assets if you build on server (many prefer CI build+upload)
# npm ci && npm run build

# Laravel cache warm-up
php artisan config:cache
php artisan route:cache
php artisan view:cache

# Database migrations (safe mode)
php artisan migrate --force

# Atomically swap current
ln -nfs "$NEW_RELEASE" "$APP_DIR/current"

# Reload PHP-FPM to clear opcache if needed
sudo systemctl reload php8.2-fpm

# Optionally clean old releases, keep last 5
ls -1dt $RELEASES/* | tail -n +6 | xargs -r rm -rf

The ln -nfs swap is the star of the show. Strictly speaking it unlinks and recreates the symlink rather than renaming one over the other (if you want a truly atomic switch, make a temporary symlink and mv -T it over current), but the gap is tiny and Nginx doesn’t need to reload to keep serving. If you aggressively cache routes and config, reloading PHP‑FPM is handy to clear OPcache, but because $realpath_root resolves each request to the real release path, even that is often optional when OPcache is sized generously.

One small habit that pays off: run migrations with --force, but also design migrations to be safe for zero-downtime. That means adding columns before backfilling, using nullable defaults where needed, and avoiding destructive changes during traffic. If I must do something risky, I schedule it for a low-traffic window or wrap it in a feature flag.
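
In practice that becomes a two-step rhythm rather than one big bang. A rough sketch of the sequence; the backfill queue name here is just an example:

# Deploy 1: purely additive migration (new column is nullable, old code ignores it)
php artisan migrate --force

# Backfill in the background once the new code is live, ideally as a chunked, queued job
php artisan queue:work --queue=backfill --stop-when-empty

# Deploy 2, later: tighten constraints or drop old columns once nothing reads them anymore
php artisan migrate --force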

What About Env Changes?

I don’t like baking env changes into the code. Treat .env as part of the server state. If a deploy depends on new env keys, I add them to shared first, deploy, and then confirm the app reads them. This avoids weird edge cases where config caching or env loading behaves differently than expected.
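
Concretely: add the key to shared/.env, deploy (the deploy script rebuilds the config cache), then verify. A small sketch with a made-up key, assuming it’s wired into config/services.php and a recent enough Laravel for config:show:

# add the new key to the shared env before deploying the code that reads it
echo 'PAYMENTS_WEBHOOK_SECRET=change-me' | sudo tee -a /var/www/myapp/shared/.env > /dev/null

# after the deploy, confirm the cached config actually picked it up
php /var/www/myapp/current/artisan config:show services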

Queues and Horizon: Keeping Heavy Lifting Off the Front Door

Laravel is fast when you keep web requests light. Anything slow goes into a queue: sending emails, generating reports, crunching PDFs, hitting slow external APIs. Horizon makes queues feel like a dashboard with brains. You can watch workers, retry failed jobs, and scale smoothly.

Systemd Services I Trust

While Supervisor works, I’ve grown to prefer systemd on modern distros. It’s built-in, simple to reason about, and works nicely with logs. I keep separate services for Horizon and for one-off queue workers when I need them.

# /etc/systemd/system/horizon.service
[Unit]
Description=Laravel Horizon
After=network.target

[Service]
User=www-data
Group=www-data
Restart=always
ExecStart=/usr/bin/php /var/www/myapp/current/artisan horizon
ExecStop=/usr/bin/php /var/www/myapp/current/artisan horizon:terminate
WorkingDirectory=/var/www/myapp/current
Environment=APP_ENV=production
KillSignal=SIGTERM
TimeoutStopSec=60

[Install]
WantedBy=multi-user.target

Start and enable it with:

sudo systemctl daemon-reload
sudo systemctl enable --now horizon

If you prefer dedicated workers for specific queues or experiments, you can define a focused service:

# /etc/systemd/system/queue-default.service
[Unit]
Description=Laravel Queue Worker (default)
After=network.target

[Service]
User=www-data
Group=www-data
Restart=always
ExecStart=/usr/bin/php /var/www/myapp/current/artisan queue:work --queue=default --sleep=1 --tries=3 --max-time=3600
WorkingDirectory=/var/www/myapp/current
Environment=APP_ENV=production
KillSignal=SIGTERM
TimeoutStopSec=60

[Install]
WantedBy=multi-user.target

Horizon makes scaling and balancing across queues ridiculously pleasant. You define your queue names, map them into Horizon’s configuration, and then let it orchestrate workers based on load. The moment I added Horizon to team workflows, troubleshooting went from guesswork to “oh, that job’s stuck because the API is slow; let’s retry with backoff.” If you’re new to it, the official Horizon docs are a great walkthrough.

Zero‑Downtime Deploys With Horizon Running

Here’s where a tiny bit of choreography matters. During deployment, you want to switch code after the new release is ready and before workers pick up jobs that depend on the new code. Horizon has a built-in stop signal that’s perfect for this. My usual rhythm looks like this:

First, pause new job consumption, then gently ask Horizon to terminate; it will finish its current jobs and exit. Then deploy. Then bring Horizon back up on the new code.

# Before the symlink swap
php /var/www/myapp/current/artisan horizon:pause || true
php /var/www/myapp/current/artisan horizon:terminate || true

# Run your deployment steps and symlink swap here

# After the swap
sudo systemctl restart horizon

If you have critical real-time queues, you can run a blue-green pattern for workers as well: start workers pointed at the new release, wait a beat, then stop the old ones. But honestly, with Horizon and short-running jobs, the simple pause-terminate-restart flow works beautifully.

Retries, Failures, and Preventable Pain

I’ve had my share of 3 a.m. alerts because a job kept retrying until the queue looked like a Jenga tower. A few small habits prevent that: time-box jobs with a timeout, use sensible tries and backoff strategies, and add idempotency to jobs that hit external APIs. Laravel’s unique jobs make it hard for duplicates to slip in, and releasing a job back onto the queue for certain exceptions can be kinder than a hard fail. Most importantly, separate your queues by purpose. Don’t let “emails” fight for space with “image-processing”. Give them names and tune their worker counts accordingly in Horizon.
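
When something does slip through, the CLI is faster than clicking around a dashboard at 3 a.m. A few commands I lean on:

# list failed jobs, retry the lot, and keep the failed-jobs table from growing forever
php artisan queue:failed
php artisan queue:retry all
php artisan queue:prune-failed --hours=168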

Config Caches, OPcache, and a Quick Word on Octane

On every deploy, I warm up configuration, routes, and views. It’s quiet performance you can feel. OPcache then holds your PHP bytecode in memory so you’re not re-parsing files on every request. Make sure OPcache memory size fits your codebase, and set opcache.validate_timestamps=0 (so PHP stops checking files for changes) if you reliably reload PHP‑FPM on deploys. It’s a neat trick: treat PHP’s lifecycle like a webserver’s, and use deploy-time reloads to tell it “new code is here.”
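
Here’s a minimal sketch of the OPcache settings I mean, written as a drop-in file. The path assumes Debian/Ubuntu with PHP 8.2, and the numbers are starting points, not gospel:

sudo tee /etc/php/8.2/fpm/conf.d/99-opcache-tuning.ini > /dev/null <<'EOF'
opcache.enable=1
opcache.memory_consumption=192
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=20000
; only safe if every deploy reloads PHP-FPM
opcache.validate_timestamps=0
EOF
sudo systemctl reload php8.2-fpm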

Octane is incredible when you’re going for raw throughput, but it changes the game: long-lived workers mean you must carefully handle in-memory state and bootstrapping. If you’re not ready for that, stick with FPM first and build muscle memory. When you feel the need for more speed, move gracefully into Octane with a dry run and monitoring. I talk about how I weave these together in my Laravel production tune-up checklist. It’s worth a skim before you flip the switch.

Rolling Back Without Breaking a Sweat

Nobody loves rolling back, but a calm plan beats heroics. With release directories, rollback is simply pointing current to the previous release. If migrations changed the schema in a breaking way, that’s where the real danger lives. I avoid destructive migrations during peak hours and, when needed, I ship reversible migrations with a tested down() path. If you must roll back both code and schema, do it in a maintenance window or in two stages: first make the schema backward-compatible, then deploy the old code. Think of schema like rails: the train follows whatever track is down.
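
The mechanical part of a rollback is short enough to keep in your shell history. A sketch, assuming the release layout from earlier:

# point current at the previous release (second-newest directory), then bounce the runtime
PREVIOUS=$(ls -1dt /var/www/myapp/releases/* | sed -n '2p')
ln -nfs "$PREVIOUS" /var/www/myapp/current
sudo systemctl reload php8.2-fpm
sudo systemctl restart horizon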

When I deploy high-risk features, I’ll layer in a feature flag. That way I can disable the feature instantly while leaving the code in place. Flags are a pressure valve for your nerves.

Logs, Metrics, and Knowing When Something’s Off

What’s the point of a smooth deployment if you don’t know when something quietly broke? At a minimum, aggregate logs and make sure you can filter by release. Laravel’s contextual logging is a lifesaver here: attach the release id during bootstrap, via shared log context or a channel tap, and you’ll thank yourself later.

On the metrics side, I love combining system-level monitoring with app-level signals. CPU, RAM, disk IO, and 5xx rates tell you when the server is unhappy. Horizon’s dashboard tells you when jobs are piling up. I shared an easy starting path in my guide to getting Prometheus, Grafana, and Uptime Kuma running. Even a simple uptime and latency alert is worth its weight in sleep.
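
Even before dashboards exist, the access log will tell you whether a deploy hurt anyone. A quick-and-dirty check, assuming Nginx’s default combined log format:

# count 5xx responses in the last 10,000 requests
tail -n 10000 /var/log/nginx/access.log | awk '$9 ~ /^5/ {c++} END {print c+0, "responses with a 5xx status"}'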

Security and TLS Without the Drama

It’s tempting to postpone security until “later”. Don’t. Make it part of day one. Keep your OS packages patched, lock down SSH with keys, and put a firewall in front of the world. On the web stack, tighten your TLS, add sane security headers, and think about rate limits at the edge for obvious brute-force points. If you’re running a WAF, tune it to be helpful instead of noisy. I’ve written about how I keep WAF rules calm and fast in a practical ModSecurity + OWASP CRS tuning guide.
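
For the firewall piece, ufw is plenty on a single server. A minimal baseline, assuming Nginx came from Ubuntu’s packages so its application profile exists:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow "Nginx Full"
sudo ufw enable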

And because it’s related to performance and trust, give your TLS config the same love as your deploy script. If you’re curious, the official Nginx documentation is a friendly reference once you have a working baseline.

Backups, Snapshots, and “Oops” Protection

If you’re not backing up, you’re debugging your future self’s bad day. I like a mix: database dumps with retention, app files in offsite storage, and occasional VPS-level snapshots for fast restores. Encryption and versioning keep me honest. Most teams don’t need enterprise tooling here — a simple, reliable flow you test once a month beats fancy dashboards you never try. If you want a friendly walkthrough, I put together a guide to Restic/Borg with S3-compatible storage that covers versioning, encryption, and sensible retention without the drama.
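
The database half of that can be a five-line cron script. A sketch for MySQL, assuming credentials live in ~/.my.cnf and a made-up database name:

#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR=/var/backups/myapp
mkdir -p "$BACKUP_DIR"
# consistent dump without locking InnoDB tables, compressed and dated
mysqldump --single-transaction --quick myapp_production | gzip > "$BACKUP_DIR/db-$(date +%F).sql.gz"
# keep two weeks of dumps
find "$BACKUP_DIR" -name 'db-*.sql.gz' -mtime +14 -delete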

Putting It All Together: A Day in the Life of a Deploy

Let’s turn this into a quiet story. Your CI finishes tests and builds assets. It ships a tarball or git ref to the server. The deploy script creates a new release directory, installs composer dependencies, links the shared storage and env, warms caches, and runs migrations that were designed to be safe. Horizon gets a polite pause and terminate, then the symlink flips. Nginx keeps serving because it never lost its footing. If you reload PHP‑FPM, it’s a blip nobody notices. Horizon comes back up and consumes new jobs with the new code. Monitoring shows a small dip, then a steady line. You get to log off at a normal hour.

That’s the whole point. You’re not searching for the perfect tool as much as you’re building a calm routine. Once you’ve done it a couple of times, you’ll wonder how you ever lived without release directories and a one-command symlink swap.

Troubleshooting: The Little Things That Bite

I’ve learned to watch for a few quiet footguns. One, the Nginx try_files line must end with /index.php?$query_string or you’ll fight 404s for routes that should work. Two, if static assets aren’t updating after deploys, your browser is probably doing its job too well. Mix/Vite versioning fixes that with cache-busting file names. Three, if Horizon seems alive but isn’t picking jobs, check the connection to your queue backend and verify the queue names match what your jobs are using. Four, if you see random 502s under load, check PHP‑FPM pool limits and your error logs for slow queries or stalled external calls.
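
My first five minutes of triage are usually the same handful of commands:

sudo nginx -t                                        # config still valid?
sudo systemctl status php8.2-fpm --no-pager          # pool up, not stuck in a restart loop?
php /var/www/myapp/current/artisan horizon:status    # is Horizon actually running?
sudo tail -n 50 /var/log/nginx/error.log             # 502s and upstream timeouts land here
sudo tail -n 50 /var/www/myapp/shared/storage/logs/laravel.log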

And finally, don’t be shy about testing your whole deploy in a staging VPS. A dry run where you practice the symlink flip and verify the app boots is worth its weight in sanity. It’s not glamorous, but neither is firefighting at midnight.

A Quick Note on Hardware Sizing

People often ask, “How big should my VPS be?” The honest answer is “just big enough, and then an extra 20% for breathing room.” CPU for PHP‑FPM workers, RAM for cache and comfortable process counts, and fast NVMe for snappy IO. If you like thinking this through with real-world context, I wrote about how I choose VPS specs for Laravel, WooCommerce, and Node.js. It covers how I read CPU, RAM, and storage signals so I don’t pay for noise.

Extras That Make Life Nicer

Once the basics are solid, you can layer in niceties. A health check endpoint that returns 200 when caches are warm and queues are flowing. Preloading classes in PHP 8.2 for faster cold starts. A release manifest in your logs so you can trace errors back to the exact build. And if you’re behind a CDN, set cache-control and edge rules that actually match how your app behaves. Even small wins add up when the foundation is strong.
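
The health check piece can be a single curl in cron or your uptime monitor, assuming you expose a route like /up that only returns 200 when caches, database, and queues look healthy:

curl -fsS -o /dev/null -w "%{http_code} in %{time_total}s\n" https://example.com/up || echo "health check failed"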

Wrap-Up: The Calm After the Deploy

I still remember the first time I watched a Laravel app switch releases mid-traffic and nobody noticed. Not the team, not the users, not the error tracker. It was quiet, which is the highest compliment in ops. If you wire up Nginx and PHP‑FPM cleanly, keep queues away from the front door with Horizon, and deploy using releases with an atomic symlink, you’re already in the top tier of calm deployments. Add monitoring, backups, and a few guardrails around migrations, and you’ve got a setup that ages well.

Start simple, make it boring, and grow from there. If anything here sparked a question or you want me to share a deeper dive on any step, I’m all ears. Hope this playbook helps your next deploy feel like a quiet victory. See you in the next post!

Further Reading and Handy Docs

For reference and deeper details, the Laravel deployment docs are a great baseline, and the Nginx documentation is surprisingly readable once you’re past your first server block. If TLS tuning is on your list, my Nginx TLS and Brotli guide is a practical companion.

Frequently Asked Questions

How do I get zero‑downtime deploys for Laravel on a VPS?

Great question! Build each release in a new timestamped folder, link shared storage and .env, warm caches, run safe migrations, then atomically switch the current symlink. If you use Horizon, pause/terminate before the swap and restart after. Nginx keeps serving because the document root points to current, which changes instantly.

Should I run Horizon and queue workers under systemd or Supervisor?

Both work, but I like systemd on modern distros. It’s built in, easy to restart/reload, logs cleanly, and plays nicely with service dependencies. Define a horizon.service and enable it so workers come back on reboot. If you prefer Supervisor, that’s fine too; just keep the configs simple and reliable.

What are the most common pitfalls with this setup?

Three top ones: a try_files rule that doesn’t end with /index.php?$query_string, leading to 404s; PHP‑FPM pools starved by long-running requests (keep heavy work on the queue and tune the pool); and migrations that break live traffic. Design migrations for zero‑downtime, tune pools thoughtfully, and keep static assets fingerprinted.