So there I was on a Friday evening, staring at a progress bar that felt like it was holding my entire weekend hostage. You know that sinking feeling when a deploy hits a snag and the site returns a gateway timeout? I’ve been there more times than I’d like to admit. The fix that finally calmed my deployments wasn’t a shiny new tool or a fancy platform. It was a humble trio working together: rsync to move files efficiently, symlinked releases to swap versions atomically, and systemd to keep my app neatly supervised. Tie that into GitHub Actions or GitLab CI, and you’ve got a zero‑downtime pipeline that just… works.
Ever had that moment when you hit “deploy” and immediately watch your uptime graph dive? I certainly have, and it’s what nudged me into building a process that’s both simple and forgiving. In this guide, we’re going to set up a zero‑downtime CI/CD flow to a VPS using GitHub Actions or GitLab CI, rsync for file sync, symlinked releases for atomic switches, and systemd services for process management. I’ll share the exact structure I use, the scripts that make it tick, and the little safety checks that keep things calm. We’ll also talk about rollbacks (because we’re not superheroes) and the gotchas that inevitably show up.
Table of Contents
- The Mental Model: Why Zero‑Downtime Works Like a Costume Change
- Preparing the VPS: One Calm Home for All Your Releases
- The Release Script: rsync, Build, Health Check, Atomic Switch
- rsync From CI: Getting Files to the Server Fast (and Safely)
- GitHub Actions: A Friendly Workflow That Just Works
- GitLab CI: A Parallel Path With Familiar Moves
- Health Checks, Migrations, and the “If Something Feels Risky” Button
- Rollbacks: The Five‑Minute Seatbelt
- Permissions, Ownership, and Other Gotchas
- Security and Secrets: Keep the Keys Where They Belong
- Graceful Reloads: Making systemd and Your App Shake Hands
- Observability: The Canary That Sings Before Users Do
- HTTP/2, HTTP/3, and Other Post‑Deploy Polishing
- Backups and Safety Nets: Sleep Better, Deploy Happier
- Extra Notes and Little Tricks from Real Projects
- The Minimal, Repeatable Checklist
- Wrap‑Up: A Calm Pipeline You’ll Actually Trust
The Mental Model: Why Zero‑Downtime Works Like a Costume Change
Here’s the thing: zero‑downtime releases aren’t about making your code perfect; they’re about changing versions in a way users don’t notice. Think of a theater performance: the actors don’t pause the play to change outfits. They step offstage, swap costumes, and walk back in like nothing happened. That’s what symlinked releases give you.
On your VPS, you’ll keep multiple releases in a releases directory. Each release is a timestamped folder with the built code. There’s also a shared directory for things that persist across releases—uploads, environment files, caches. And then there’s a current symlink, which points to whichever release is live. The magic move is swapping that symlink atomically once the new release is ready. It’s a blink, not a pause.
Why rsync? Because it’s efficient, battle-tested, and humble. Rather than rebuilding everything on the server, we ship only the Changes That Matter™ over SSH, and keep ownership and permissions tidy. And systemd? It’s our stage manager. It supervises your app process, restarts it on failure, and lets us do graceful reloads when your app supports it. Together, these pieces let you deploy without users ever seeing a white screen or a 502.
Preparing the VPS: One Calm Home for All Your Releases
Create the structure and a deploy user
I like to keep things predictable. On the server, I’ll create a dedicated user (say, deploy) with SSH access and a tidy folder structure:
sudo adduser --disabled-password --gecos "" deploy
sudo mkdir -p /var/www/myapp/{releases,shared}
sudo chown -R deploy:deploy /var/www/myapp
Inside shared, you’ll keep persistent things. For a PHP/Laravel app, that might be storage and .env. For a Node app, maybe a .env and an uploads directory. The point is: your app can be swapped in and out while the stuff that should survive deploys stays put.
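As a sketch of that one-time setup (the `.env` name and `uploads` directory match what the deploy script links later; `APP_DIR` defaults to a scratch directory here so the commands are safe to dry-run anywhere, but on the server you’d run them as the deploy user with `APP_DIR=/var/www/myapp`):

```shell
# One-time layout for the shared area. On the real server, set
# APP_DIR=/var/www/myapp; here it falls back to a throwaway directory.
APP_DIR="${APP_DIR:-$(mktemp -d)}"
mkdir -p "$APP_DIR/releases" "$APP_DIR/shared/uploads"
touch "$APP_DIR/shared/.env"
chmod 600 "$APP_DIR/shared/.env"   # runtime secrets stay owner-readable only
ls "$APP_DIR"                      # two entries: releases and shared
```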
Systemd service: the quiet supervisor
Even if you’re using Nginx or Caddy out front, let systemd manage your app layer. Here’s a clean, generic service unit. Customize the paths and Exec commands for your stack:
[Unit]
Description=MyApp Service
After=network.target
[Service]
Type=simple
User=deploy
Group=deploy
WorkingDirectory=/var/www/myapp/current
# Example for Node (adjust for your stack):
ExecStart=/usr/bin/node server.js
# Define ExecReload only if your app handles SIGHUP gracefully; without it,
# "systemctl reload" fails fast and the deploy script falls back to restart.
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=2
EnvironmentFile=-/var/www/myapp/shared/.env
# Optional: Keep logs in journal or redirect via stdout/stderr
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target
Reload systemd and enable the unit so it exists before your first release. Don’t worry if the initial start fails; it can’t succeed until current points at a real release after your first deploy:
sudo systemctl daemon-reload
sudo systemctl enable myapp.service
sudo systemctl start myapp.service
If you’re curious about unit options and reload behavior, the systemd service documentation is an excellent reference and worth bookmarking.
Web server routing to the current release
Point your web server root (or upstream) to /var/www/myapp/current/public for PHP/Laravel or /var/www/myapp/current for Node or other frameworks. The idea is to keep the web server oblivious to deploys. It just follows the current symlink.
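Here’s a hedged Nginx sketch of that idea for a Node app. The domain and the 3000 upstream port are placeholders, and for PHP you’d serve through PHP‑FPM instead of a proxy; the point is that Nginx resolves the current symlink per request, so a deploy needs no Nginx reload:

```nginx
server {
    listen 80;
    server_name example.com;

    # Static assets come straight from the live release via the symlink.
    root /var/www/myapp/current/public;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_pass http://127.0.0.1:3000;   # hypothetical app port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```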
The Release Script: rsync, Build, Health Check, Atomic Switch
Let’s talk about the heart of it all: the deploy script that runs on the server. CI will push files over SSH with rsync and then call this script remotely. We’ll assemble it piece by piece, but here’s the big picture:
- Create a new timestamped release directory.
- rsync files into that directory.
- Link shared resources (like .env, uploads).
- Install dependencies and build (if applicable).
- Warm caches, run migrations safely (if you must).
- Health check the new release.
- Atomically switch the current symlink to the new release.
- Reload or restart the systemd service gracefully.
- Clean up old releases.
Here’s a practical script that I’ve used—and tweaked—across many apps:
#!/usr/bin/env bash
set -euo pipefail
APP_DIR="/var/www/myapp"
RELEASES_DIR="$APP_DIR/releases"
SHARED_DIR="$APP_DIR/shared"
# CI passes TIMESTAMP so this script works on the directory rsync just filled;
# fall back to a fresh stamp for manual runs.
TIMESTAMP="${TIMESTAMP:-$(date +%Y%m%d%H%M%S)}"
RELEASE_DIR="$RELEASES_DIR/$TIMESTAMP"
CURRENT_LINK="$APP_DIR/current"
KEEP_RELEASES=5
SERVICE="myapp.service"
log() { echo "[deploy] $1"; }
log "Creating release directory: $RELEASE_DIR"
mkdir -p "$RELEASE_DIR"
log "Linking shared resources"
# Example shared items (adjust for your stack)
ln -sfn "$SHARED_DIR/.env" "$RELEASE_DIR/.env"
mkdir -p "$SHARED_DIR/uploads"
ln -sfn "$SHARED_DIR/uploads" "$RELEASE_DIR/uploads"
# If PHP/Laravel:
# mkdir -p "$SHARED_DIR/storage" && chown -R deploy:deploy "$SHARED_DIR/storage"
# rm -rf "$RELEASE_DIR/storage"   # the repo may ship a skeleton storage dir
# ln -sfn "$SHARED_DIR/storage" "$RELEASE_DIR/storage"
log "Installing dependencies (if needed)"
# Example for Node (skip the build here if CI already ships built artifacts):
if [ -f "$RELEASE_DIR/package.json" ]; then
  (cd "$RELEASE_DIR" && npm ci --omit=dev)
  # Note: many builds need devDependencies; if yours does, build in CI instead.
  (cd "$RELEASE_DIR" && npm run build --if-present)
fi
# Example for PHP/Laravel:
# if [ -f "$RELEASE_DIR/composer.json" ]; then
#   (cd "$RELEASE_DIR" && composer install --no-dev --optimize-autoloader)
#   (cd "$RELEASE_DIR" && php artisan config:cache && php artisan route:cache || true)
# fi
log "Optional migrations"
# If you must migrate, keep them backward-compatible.
# e.g., (cd "$RELEASE_DIR" && php artisan migrate --force)
log "Warming up the app (optional)"
# e.g., curl --fail -sS http://127.0.0.1:3000/health || true
log "Switching current symlink atomically"
# Create the new link under a temporary name, then rename it over "current".
# rename() is atomic on the same filesystem, so there is never a gap.
ln -sfn "$RELEASE_DIR" "$CURRENT_LINK.tmp"
mv -Tf "$CURRENT_LINK.tmp" "$CURRENT_LINK"
log "Reloading systemd service"
sudo systemctl reload "$SERVICE" || sudo systemctl restart "$SERVICE"
log "Cleaning up old releases"
ls -1dt "$RELEASES_DIR"/* | tail -n +$((KEEP_RELEASES+1)) | xargs -r rm -rf --
log "Deployment complete!"
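If the cleanup one‑liner looks dense, here’s what it does, demonstrated on throwaway directories: list releases newest‑first, skip the newest KEEP of them, and delete the rest:

```shell
# Scratch demo of: ls -1dt releases/* | tail -n +$((KEEP+1)) | xargs -r rm -rf --
R=$(mktemp -d)
for name in r1 r2 r3 r4; do
  mkdir "$R/$name"
  sleep 0.2            # distinct mtimes so the -t sort is deterministic
done                    # r4 is now the newest
KEEP=2
ls -1dt "$R"/* | tail -n +$((KEEP+1)) | xargs -r rm -rf --
ls "$R"                 # only the two newest survive: r3 and r4
```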
A few notes from the trenches:
First, always assume migrations can hurt if they’re not backward‑compatible. If you’re adding columns or tables, life is easy. If you’re dropping columns that the old code still expects, you’re asking for trouble mid‑deploy. Feature flags and two‑step migrations are your friend.
Second, the symlink swap is only as atomic as the way you perform it. Strictly speaking, ln -sfn unlinks the old symlink and creates the new one in two steps; the bulletproof idiom is to create the new symlink under a temporary name and rename it over current with mv -T, since rename() is atomic on the same filesystem. That’s also why we keep everything inside /var/www/myapp. No cross-device shenanigans.
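Whichever way you write the swap in your own script, here’s a scratch‑directory demonstration of the strictly atomic idiom (all paths are throwaway):

```shell
# Build the new symlink beside the old one, then rename it into place.
# mv -T is a single rename(2) call, so any process resolving "current"
# sees either the old target or the new one, never a missing link.
APP=$(mktemp -d)
mkdir -p "$APP/releases/old" "$APP/releases/new"
ln -s "$APP/releases/old" "$APP/current"

ln -sfn "$APP/releases/new" "$APP/current.tmp"
mv -Tf "$APP/current.tmp" "$APP/current"

readlink "$APP/current"    # now points at .../releases/new
```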
Third, reload vs restart: if your app supports a graceful reload on SIGHUP, use it. Otherwise, a quick restart right after switching the symlink is usually fine and shouldn’t cause downtime when your web server keeps connections steady for a heartbeat.
rsync From CI: Getting Files to the Server Fast (and Safely)
Now to the rsync part. On your CI runner, you’ll checkout your code, build artifacts if needed, and then rsync to the new release path on the server. I usually rsync to a temporary directory first, then let the remote deploy script do the linking and switching. Here’s a neat, minimal rsync step you can adapt:
rsync -az --delete \
  -e "ssh -o StrictHostKeyChecking=yes -p 22" \
  --exclude ".git" \
  ./ deploy@your-server:/var/www/myapp/releases/$TIMESTAMP/
Two notes I never skip: keep --delete so stale files don’t pile up in the release directory, and set StrictHostKeyChecking=yes to prevent man-in-the-middle surprises. You can pre‑seed known_hosts in CI using ssh-keyscan. It’s an extra step, but one of those worth‑it steps.
GitHub Actions: A Friendly Workflow That Just Works
GitHub Actions has become my go‑to for small to midsize teams. Secrets are easy to manage, and the YAML is readable. Here’s a template that assumes you’re shipping a Node app; adapt the build steps to your stack. It rsyncs the code, then calls the remote deploy script we wrote earlier.
name: Deploy to VPS

on:
  push:
    branches: [ "main" ]

concurrency:
  group: production-deploy
  cancel-in-progress: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set release timestamp
        id: set_ts
        run: echo "ts=$(date +%Y%m%d%H%M%S)" >> "$GITHUB_OUTPUT"

      - name: Install Node (if needed)
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Build artifacts
        run: |
          # Full install here: builds usually need devDependencies.
          npm ci
          npm run build

      - name: Add server to known_hosts
        run: |
          mkdir -p ~/.ssh
          ssh-keyscan -p 22 your-server >> ~/.ssh/known_hosts

      - name: Upload files with rsync
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          eval "$(ssh-agent -s)"
          ssh-add - <<< "$SSH_PRIVATE_KEY"
          rsync -az --delete \
            -e "ssh -o StrictHostKeyChecking=yes -p 22" \
            --exclude ".git" \
            ./ deploy@your-server:/var/www/myapp/releases/${{ steps.set_ts.outputs.ts }}/

      - name: Run remote deploy script
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        run: |
          eval "$(ssh-agent -s)"
          ssh-add - <<< "$SSH_PRIVATE_KEY"
          ssh deploy@your-server "TIMESTAMP=${{ steps.set_ts.outputs.ts }} bash -s" < ./scripts/remote_deploy.sh
I keep the remote_deploy.sh script in the repo under scripts/ so I can version it alongside the app. And yes, I’ve made typos that forced an emergency edit mid‑deploy. Versioning the script saved me more than once.
If you’re new to Actions, the official GitHub Actions documentation is great for sanity checks when something looks odd in your logs.
GitLab CI: A Parallel Path With Familiar Moves
GitLab CI has a similar rhythm. You define a deploy job that runs on pushes or tags, inject an SSH key, rsync the release, then call the remote script. Here’s a simple .gitlab-ci.yml to get you moving:
stages:
  - build
  - deploy

variables:
  GIT_STRATEGY: fetch

build:
  stage: build
  image: node:20-alpine
  script:
    - npm ci   # full install; the build usually needs devDependencies
    - npm run build
  artifacts:
    paths:
      - dist/
      - package.json
      - package-lock.json

production-deploy:
  stage: deploy
  image: alpine:latest
  only:
    - main
  before_script:
    - apk add --no-cache openssh-client rsync bash
    - mkdir -p ~/.ssh
    - eval "$(ssh-agent -s)"
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - ssh-keyscan -p 22 your-server >> ~/.ssh/known_hosts
    - export TIMESTAMP=$(date +%Y%m%d%H%M%S)
  script:
    - rsync -az --delete -e "ssh -p 22" --exclude ".git" ./ deploy@your-server:/var/www/myapp/releases/$TIMESTAMP/
    - ssh deploy@your-server "TIMESTAMP=$TIMESTAMP bash -s" < ./scripts/remote_deploy.sh
Secrets live in GitLab’s CI/CD variables. Keep your private key read‑only, scoped to the project, and preferably protected for main or tags. If you’re coming from GitHub, the GitLab CI documentation maps the pieces well.
Health Checks, Migrations, and the “If Something Feels Risky” Button
Every zero‑downtime story has a chapter about health checks. Before switching the symlink, I like to curl a local endpoint that does a quick self‑assessment—DB connection, cache access, maybe a trivial read/write test depending on the app. If the health check fails, I abort the switch. Better to leave the old version running than to promote a half‑healthy build.
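As a sketch, the check can be a small retry loop around curl. The port and the /health path below are placeholders for whatever your app actually exposes; the demonstration deliberately probes a port that refuses connections so you can see the failure path:

```shell
# Retry a local health endpoint a few times before giving a verdict.
health_check() {
  local url="$1" tries="${2:-5}" i
  for i in $(seq 1 "$tries"); do
    if curl --fail --silent --max-time 2 "$url" > /dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# In the deploy script you'd abort before the symlink switch on failure:
#   health_check "http://127.0.0.1:3000/health" || { echo "abort"; exit 1; }

# Demonstrated here against a port nothing listens on:
if health_check "http://127.0.0.1:9/health" 1; then
  echo "healthy: safe to switch"
else
  echo "unhealthy: keep the old release live"
fi
```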
For migrations, here’s my no‑drama approach: make them backward‑compatible or defer them. Add columns and tables first; deploy code that can write to the new structure while still reading the old; then follow up with a second deploy that drops old columns. Yes, it’s slower. But the number of 3 a.m. Slack pings it prevents? Worth it.
If you’re deploying a PHP/Laravel app and want the whole production tune‑up playbook—process managers, OPcache, Horizon and queues—check out the Laravel production tune‑up I do on every server. It pairs nicely with this release strategy, especially when you want queue workers to roll forward without dropping jobs.
Rollbacks: The Five‑Minute Seatbelt
One of my clients once shipped a change that looked perfect in staging but went sideways in production due to a quirky data edge case. No shame in that—it happens to all of us. The reason we didn’t break a sweat was the rollback: we simply pointed current back to the previous release and reloaded. Done.
I usually keep five releases on the server. Here’s a tiny rollback utility I like to stash as scripts/rollback.sh:
#!/usr/bin/env bash
set -euo pipefail
APP_DIR="/var/www/myapp"
RELEASES_DIR="$APP_DIR/releases"
CURRENT_LINK="$APP_DIR/current"
SERVICE="myapp.service"
releases=( $(ls -1dt "$RELEASES_DIR"/*) )
if [ ${#releases[@]} -lt 2 ]; then
  echo "Not enough releases to roll back"; exit 1
fi
current_target=$(readlink -f "$CURRENT_LINK")
for r in "${releases[@]}"; do
  if [ "$r" != "$current_target" ]; then
    echo "Rolling back to: $r"
    ln -sfn "$r" "$CURRENT_LINK"
    sudo systemctl reload "$SERVICE" || sudo systemctl restart "$SERVICE"
    exit 0
  fi
done
echo "No previous release found to roll back to"; exit 1
Rollbacks are invisible if you’ve kept your DB migrations backward‑compatible. If you didn’t, the rollback might not be happy. That’s why I try hard to make schema changes tolerant for a release or two.
Permissions, Ownership, and Other Gotchas
A classic “why is this failing only on the server?” moment often comes down to permissions. Keep ownership consistent—typically deploy:deploy—and be mindful when your web server user (often www-data) needs write access to uploaded files. I lean on the shared directory for anything that needs persistent writes and then symlink it into each release. The app itself, once built, can usually be read‑only.
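For the uploads case, the pattern I reach for is group ownership plus a setgid bit, shown here on a scratch directory. On the real server you’d also chown it to deploy:www-data, which needs root; that part is assumed, not shown:

```shell
# Group-writable uploads with setgid, so files the web server creates
# inherit the directory's group instead of the creator's primary group.
UPLOADS="$(mktemp -d)/uploads"
mkdir -p "$UPLOADS"
chmod 2775 "$UPLOADS"     # rwxrwsr-x: owner+group write, setgid on the dir
stat -c '%a' "$UPLOADS"   # prints 2775
```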
Another gotcha is file watchers and temporary files—especially for Node or SPA builds. Exclude node_modules if you’re building in CI and shipping a dist folder. If you’re building on the server, cache dependencies in the shared directory and symlink them in to avoid reinstalling every time.
Finally, be mindful of the web server config. If you point Nginx to current/public for one app and current for another, hell hath no fury like a forgotten trailing slash. I’ve learned to double‑check the server blocks before the first deploy.
Security and Secrets: Keep the Keys Where They Belong
Security doesn’t have to be complicated. A few guardrails go a long way:
First, use a dedicated deploy user with limited privileges. Give it exactly what it needs—nothing more. Second, keep the private key in CI secrets, and add the server to known_hosts in the pipeline before SSH. Third, store runtime secrets in shared/.env on the server, not in the repo or CI logs. Your deploy pipeline should ship code, not secrets.
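One concrete example of “exactly what it needs”: the release and rollback scripts call sudo systemctl, so the deploy user should get a narrow sudoers rule rather than blanket root. A sketch, assuming the unit name from this guide; install it as /etc/sudoers.d/deploy-myapp and validate with visudo -cf first:

```
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl reload myapp.service, /usr/bin/systemctl restart myapp.service
```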
If you want a friendly tour of adding a WAF layer and taming noisy bots while you’re at it, I wrote about my approach in the layered shield I trust with Cloudflare, ModSecurity, and Fail2ban. It pairs well once you’ve got a calm deploy story.
Graceful Reloads: Making systemd and Your App Shake Hands
Most apps don’t love being killed mid‑request. If yours can handle a graceful reload—reloading config and code without dropping connections—wire that up via ExecReload and a signal like SIGHUP. If not, do a quick restart right after switching the symlink and let the web server handle any lingering requests for a moment.
On PHP‑FPM, reload is your friend. On Node, you might wrap the process with a manager that knows how to swap workers. For some stacks, a fast restart is effectively the same as a reload, especially when the app initializes quickly and your upstream (Nginx) is patient.
Observability: The Canary That Sings Before Users Do
The first time I watched a deploy trigger a spike in error logs—before users noticed—I became a monitoring believer. After wiring up zero‑downtime deploys, make sure you can actually see what happens during and after. If you want a simple place to start, have a look at the playbook I use to keep a VPS calm with Prometheus and Grafana or the friendlier intro with Uptime Kuma in VPS monitoring and alerts without tears. A couple of smart alerts will tell you if error rates climb or latency creeps after a release.
HTTP/2, HTTP/3, and Other Post‑Deploy Polishing
Once your pipeline is boring (in the best way), you can afford to tune the rest of the stack without risking drama. I like enabling HTTP/2 and HTTP/3 in front of apps because it helps with perceived speed and connection efficiency—especially for asset‑heavy frontends. If you want the full walkthrough, I’ve shared my step‑by‑step in the end‑to‑end playbook for enabling HTTP/2 and HTTP/3 with Nginx and Cloudflare.
Backups and Safety Nets: Sleep Better, Deploy Happier
When you’re confident that you can roll forward or back and that your data is safe, deploys lose their sting. I always pair release automation with offsite backups. If your VPS vanished tomorrow, could you restore? If not, treat it as your next task. For a practical, low‑drama setup with versioning and encryption, take a look at my friendly guide to offsite backups using Restic or Borg to S3‑compatible storage. It’s the quiet hero of many recoveries.
Extra Notes and Little Tricks from Real Projects
Here are a few tidy habits that make life easier:
Use concurrency in CI so only one deploy for a given environment runs at a time. It’s a tiny line in YAML that prevents a surprising amount of chaos. Version your deploy scripts alongside the app to track which release used which logic. And keep your cleanup step ruthless—five releases is usually plenty unless you do deep forensic debugging.
If you’re setting this up for Laravel specifically, you might enjoy my longer story about queues, Horizon, and rolling releases in the no‑drama Laravel on VPS playbook. It shows how all these pieces line up for a smooth app lifecycle, not just code pushes.
The Minimal, Repeatable Checklist
Let’s compress this into the mental checklist I run every time:
First, the server has a deploy user and the /var/www/myapp structure with releases, shared, and a current symlink. The web server points to current. A systemd unit is ready to reload or restart. Second, CI knows how to build and rsync to a timestamped release directory. Third, the remote script links shared files, installs dependencies, runs optional migrations, health checks, switches the symlink, reloads the service, and cleans up old releases. Fourth, monitoring watches for error spikes, and backups stand by if the unthinkable happens.
Once you’ve done this once or twice, it becomes muscle memory. And that’s when deploys stop being a rollercoaster and become just another calm step in your day.
Wrap‑Up: A Calm Pipeline You’ll Actually Trust
I still remember the first time I shipped a Friday evening fix without holding my breath. The logs were quiet, the status page stayed green, and my coffee was still warm when I closed the laptop. That’s the feeling I want for you: a predictable pipeline that gets out of your way.
Zero‑downtime CI/CD to a VPS isn’t rocket science. It’s a pattern: rsync for speed, symlinked releases for atomic switches, and systemd for steady supervision. Whether you use GitHub Actions or GitLab CI, the flow barely changes. Ship the code, prepare the release offstage, health check it, then flip the symlink and reload. If something feels off, roll back in seconds and regroup.
If you remember nothing else, remember this: favor small, backward‑compatible changes; keep secrets on the server; and invest in monitoring and backups before you need them. The rest is just muscle memory. Hope this was helpful! If you try this flow and get stuck, reach out—I’ve probably stumbled over the same rock and I’m happy to help you step around it next time.
Further reading that pairs well with this guide:
- GitHub Actions documentation for workflow syntax and runners
- GitLab CI documentation for pipeline configuration and variables
- systemd service reference for graceful reloads and unit options
