Zero‑Downtime Deployments to a VPS with GitHub Actions for PHP and Node.js

Deploying directly to a live VPS over SSH and hoping for the best is how many projects start. A quick git pull, a manual composer install or npm install, maybe a service restart, and you are done – until a deployment hangs in the middle, users see errors, and rolling back becomes a guessing game. Zero‑downtime deployments solve this by treating your VPS like a mini production platform: every release is built, tested, shipped and activated in a predictable, reversible way. In this article, we will walk through how to build that pipeline using GitHub Actions and a VPS running PHP or Node.js. We will design a simple folder structure for releases, use atomic symlink switches, wire in systemd or PM2, and tie everything together with GitHub Actions workflows. The goal is clear: push to your main branch and let automation roll out safe, fast, zero‑downtime releases to your dchost VPS.

What Zero‑Downtime Deployment Really Means on a VPS

Definition in practical terms

Zero‑downtime deployment means users never see a broken or half‑updated version of your application while you deploy a new release. On a single VPS this usually comes down to three rules:

  • Never modify the currently running code in place.
  • Prepare the new release in a separate directory, fully ready to serve traffic.
  • Switch traffic to the new release with one fast, atomic action and keep a rollback path.

Instead of copying files over the top of your app, you deploy into timestamped release directories and point a current symlink to the active one. Swapping that symlink is virtually instantaneous, so the window where things can go wrong is extremely small.

Why it matters for PHP and Node.js apps

On a VPS hosting PHP (for example Laravel, Symfony, WordPress) or Node.js (REST APIs, real‑time apps, dashboards), brief outages during deploys can be surprisingly costly:

  • Abandoned carts on e‑commerce checkouts that error out mid‑request.
  • Broken API clients that cache a 500 response or treat a short outage as a hard failure.
  • Lost trust when admin panels or dashboards are frequently unavailable during work hours.

Zero‑downtime deployment patterns remove these spikes in errors. Your PHP‑FPM pool or Node.js process continues serving the old release until the instant you switch symlinks or reload processes. If you want to see a deeper dive into these techniques, we already use the same patterns in our detailed rsync + symlink + systemd CI/CD playbook for VPS servers.

Core Building Blocks of Zero‑Downtime on a VPS

Directory layout: releases, shared and current

A simple, battle‑tested directory structure under /var/www/myapp works for both PHP and Node.js:

  • releases/ – every deployment gets its own timestamped directory, e.g. 2025-12-29_120000/
  • shared/ – persistent data shared across releases (e.g. storage/, uploads/, .env)
  • current – a symlink pointing to the active release, e.g. current -> releases/2025-12-29_120000

Deployments create a new directory under releases/, sync code into it, run build steps (composer install, npm ci && npm run build), then update the symlink.
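
After a couple of deployments, the tree under /var/www/myapp therefore looks roughly like this (the timestamps are of course examples):

/var/www/myapp
├── current -> releases/2025-12-29_120000
├── releases
│   ├── 2025-12-28_093000
│   └── 2025-12-29_120000
└── shared
    ├── .env
    └── storage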

Atomic symlink switch

On Linux, changing what a symlink points to is atomic from the perspective of other processes. Nginx, PHP‑FPM and Node.js read the new path on the next request or restart, without ever seeing a half‑copied directory. The typical activation step is:

ln -sfn /var/www/myapp/releases/2025-12-29_120000 /var/www/myapp/current

The -s, -f and -n flags create a symbolic link, overwrite the existing one and treat current as the link itself rather than following it into the old release directory. This pattern is one of the reasons we like VPS‑based workflows so much: you get full control over the filesystem and can implement robust release management with a handful of commands.
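
Strictly speaking, ln -sfn removes the old link and then creates a new one, which leaves a tiny window in which current does not exist. If you want the switch to be a single atomic rename, a common variant is to build a temporary symlink next to current and rename it over the old one (the temporary name here is just a convention):

ln -sfn /var/www/myapp/releases/2025-12-29_120000 /var/www/myapp/current_tmp
mv -Tf /var/www/myapp/current_tmp /var/www/myapp/current

The mv call uses the rename() system call, which replaces the old symlink atomically.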

Process management: systemd and PM2

For PHP web frontends, Nginx or Apache talk to PHP‑FPM, which is always running and does not need restarts for every deploy. You only restart or reload PHP‑FPM when you upgrade PHP or change its configuration. Background workers (e.g. Laravel queues) should be supervised by systemd so you can reload them cleanly after a deploy.

For Node.js, you have two common options:

  • A systemd unit that runs node server.js in the current directory; at the end of the deployment you run systemctl restart myapp (or systemctl reload myapp, if the unit defines ExecReload= and your app handles it).
  • PM2 process manager, using pm2 reload for zero‑downtime restarts.

We explain these patterns step‑by‑step for real Node.js projects in how to host Node.js in production without drama.

Nginx as a stable, long‑lived entry point

On a typical dchost VPS, Nginx is the public entry point:

  • For PHP apps, Nginx serves static assets and forwards dynamic requests to PHP‑FPM.
  • For Node.js apps, Nginx acts as a reverse proxy to a Node.js backend running on localhost:3000 or similar.

Your GitHub Actions deployments never touch Nginx’s main configuration, only the code Nginx points to. As long as Nginx stays up, switching releases under /var/www/myapp/current does not interrupt active connections.
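
As a rough sketch of what that looks like in practice (domain, PHP version, socket path and port are placeholders; a real configuration will also include TLS, logging and so on):

server {
    listen 80;
    server_name example.com;

    # PHP app: the document root follows the current symlink
    root /var/www/myapp/current/public;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        # $realpath_root resolves the symlink, so opcache sees the new release after a switch
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }

    # For a Node.js app, replace the PHP blocks with a reverse proxy:
    # location / {
    #     proxy_pass http://127.0.0.1:3000;
    #     proxy_set_header Host $host;
    #     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # }
}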

Preparing Your VPS for GitHub Actions Deployments

Create a dedicated deploy user and SSH key

Start by creating a non‑root user on your VPS to handle deployments:

  1. Create a user, e.g. deploy, and give it ownership of /var/www/myapp.
  2. Add the user to a limited sudo group if it needs to run systemctl (only for specific commands).
  3. Generate an SSH keypair and add the public key to ~deploy/.ssh/authorized_keys.

If you need a refresher on safe SSH setups, we have a full checklist in our article on VPS security hardening with SSH configuration, Fail2ban and disabling direct root access.
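
On a Debian/Ubuntu based VPS, steps 1–3 might look like the following sketch; the user name, paths and allowed systemctl commands come from the examples in this article and should be adapted to your setup:

# 1. Create the deploy user and hand it the application directory
sudo adduser --disabled-password --gecos "" deploy
sudo mkdir -p /var/www/myapp
sudo chown -R deploy:deploy /var/www/myapp

# 2. Allow only the specific service restarts the activation scripts need
echo 'deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart laravel-queue.service, /usr/bin/systemctl restart nodeapp.service' | sudo tee /etc/sudoers.d/deploy
sudo chmod 440 /etc/sudoers.d/deploy

# 3. Install the public half of the deployment keypair
#    (generate it on your workstation, e.g. ssh-keygen -t ed25519 -f deploy_key)
sudo -u deploy mkdir -p /home/deploy/.ssh
cat deploy_key.pub | sudo -u deploy tee -a /home/deploy/.ssh/authorized_keys
sudo -u deploy chmod 700 /home/deploy/.ssh
sudo -u deploy chmod 600 /home/deploy/.ssh/authorized_keys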

Set up the application folders

On the VPS, prepare the layout once:

sudo mkdir -p /var/www/myapp/{releases,shared}
sudo chown -R deploy:deploy /var/www/myapp

Then create shared resources, for example for a Laravel app:

  • /var/www/myapp/shared/storage
  • /var/www/myapp/shared/.env

Each new release will symlink these shared paths so logs, file uploads and configuration persist across deployments.
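
For the Laravel example, that one‑time preparation could look like this (fill shared/.env with your production settings before the first deployment):

sudo -u deploy mkdir -p /var/www/myapp/shared/storage
sudo -u deploy touch /var/www/myapp/shared/.env
sudo -u deploy chmod 640 /var/www/myapp/shared/.env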

systemd service for PHP workers or Node.js

For PHP queue workers (Laravel, Symfony Messenger, etc.), a typical systemd unit might look like:

[Unit]
Description=Laravel Queue Worker
After=network.target

[Service]
User=deploy
WorkingDirectory=/var/www/myapp/current
ExecStart=/usr/bin/php artisan queue:work --sleep=3 --tries=3
Restart=always

[Install]
WantedBy=multi-user.target

For a Node.js API:

[Unit]
Description=My Node.js API
After=network.target

[Service]
User=deploy
WorkingDirectory=/var/www/myapp/current
ExecStart=/usr/bin/node server.js
Restart=always
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target

Notice that both use WorkingDirectory=/var/www/myapp/current. When we switch the symlink, a restart or reload makes them run the new codebase without touching the unit file.
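
Installing and enabling such a unit is a one‑time task. Assuming you saved it as /etc/systemd/system/laravel-queue.service (or nodeapp.service for the API), the usual commands are:

sudo systemctl daemon-reload
sudo systemctl enable --now laravel-queue.service

# after each deployment, the activation script only needs a restart:
sudo systemctl restart laravel-queue.service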

GitHub Actions Basics for VPS Deployments

What GitHub Actions does in this setup

GitHub Actions is GitHub’s built‑in CI/CD service. For zero‑downtime VPS deployments, we use it to:

  • Trigger on push to specific branches (e.g. main for production, develop for staging).
  • Check out the repository code.
  • Install dependencies and run tests or linters.
  • Build assets (for SPAs or Tailwind, for example).
  • Sync the prepared release to the VPS via rsync over SSH.
  • Run a remote script to update the symlink and restart services.

This keeps your VPS clean and predictable: all heavy build work happens on GitHub’s runners, and your dchost VPS only receives ready‑to‑run artifacts.

Storing VPS credentials as GitHub Secrets

Never hard‑code IP addresses, usernames or private keys in your repository. Instead, define them as Actions secrets in your GitHub project:

  • VPS_HOST – the IP or hostname of your dchost VPS
  • VPS_USER – typically deploy
  • VPS_SSH_KEY – the private SSH key matching the public key on the server
  • VPS_APP_PATH – e.g. /var/www/myapp

GitHub Actions runners can then load these values at runtime without exposing them in logs or code.
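
You can add these under Settings → Secrets and variables → Actions in the repository, or from a terminal with the GitHub CLI if you have it installed and authenticated; the values below are placeholders:

gh secret set VPS_HOST --body "203.0.113.10"
gh secret set VPS_USER --body "deploy"
gh secret set VPS_APP_PATH --body "/var/www/myapp"
gh secret set VPS_SSH_KEY < ~/.ssh/myapp_deploy_key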

A generic deploy job structure

Here is a simplified, language‑agnostic GitHub Actions job that prepares a release and syncs it to the VPS:

name: Deploy

on:
  push:
    branches: [ "main" ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up SSH key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.VPS_SSH_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H ${{ secrets.VPS_HOST }} >> ~/.ssh/known_hosts

      - name: Install dependencies and build
        run: |
          # this section will differ for PHP vs Node.js
          echo "Run composer/npm here"

      - name: Create release archive
        run: |
          RELEASE=$(date +"%Y-%m-%d_%H%M%S")
          echo "RELEASE=$RELEASE" >> $GITHUB_ENV
          tar czf myapp-$RELEASE.tgz .

      - name: Upload release to VPS
        run: |
          RELEASE=${{ env.RELEASE }}
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "mkdir -p ${{ secrets.VPS_APP_PATH }}/releases/$RELEASE"
          scp myapp-$RELEASE.tgz \
            ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }}:${{ secrets.VPS_APP_PATH }}/releases/$RELEASE/
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "cd ${{ secrets.VPS_APP_PATH }}/releases/$RELEASE && tar xzf myapp-$RELEASE.tgz && rm myapp-$RELEASE.tgz"

      - name: Activate release on VPS
        run: |
          RELEASE=${{ env.RELEASE }}
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "cd ${{ secrets.VPS_APP_PATH }} && ./bin/activate_release.sh $RELEASE"

The last step calls a script on the VPS (bin/activate_release.sh) that we will write once and reuse for both PHP and Node.js apps.

Zero‑Downtime Workflow for PHP (Laravel / Generic PHP)

Build and deploy flow

For a modern PHP app (e.g. Laravel), a typical zero‑downtime deployment pipeline looks like this:

  1. Developer pushes to main branch.
  2. GitHub Actions checks out code and installs Composer dependencies (without dev packages).
  3. Front‑end assets are built with npm ci && npm run build if applicable.
  4. The build output is archived (or rsynced) to a new release directory on the VPS.
  5. On the VPS, shared directories are symlinked into the new release (e.g. storage, .env).
  6. Database migrations run in a backward‑compatible way.
  7. The current symlink is switched to the new release.
  8. PHP queue workers are gracefully restarted via systemd.

If you are interested in a full Laravel‑specific runbook, we already describe it in detail in deploying Laravel on a VPS with truly zero‑downtime releases.

GitHub Actions example for a Laravel app

Here is a more concrete PHP‑focused deploy.yml snippet:

name: Deploy Laravel to VPS

on:
  push:
    branches: [ "main" ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up PHP
        uses: shivammathur/setup-php@v2
        with:
          php-version: '8.2'
          extensions: mbstring, intl, pdo_mysql

      - name: Install PHP dependencies
        run: |
          composer install --no-dev --prefer-dist --no-interaction --optimize-autoloader

      - name: Build frontend assets
        run: |
          npm ci
          npm run build

      - name: Prepare release package
        run: |
          RELEASE=$(date +"%Y-%m-%d_%H%M%S")
          echo "RELEASE=$RELEASE" >> $GITHUB_ENV
          tar czf laravel-$RELEASE.tgz . --exclude="storage" --exclude="node_modules" --exclude="tests"

      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.VPS_SSH_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H ${{ secrets.VPS_HOST }} >> ~/.ssh/known_hosts

      - name: Upload release to VPS
        run: |
          RELEASE=${{ env.RELEASE }}
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "mkdir -p ${{ secrets.VPS_APP_PATH }}/releases/$RELEASE"
          scp laravel-$RELEASE.tgz \
            ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }}:${{ secrets.VPS_APP_PATH }}/releases/$RELEASE/
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "cd ${{ secrets.VPS_APP_PATH }}/releases/$RELEASE && tar xzf laravel-$RELEASE.tgz && rm laravel-$RELEASE.tgz"

      - name: Activate release
        run: |
          RELEASE=${{ env.RELEASE }}
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "cd ${{ secrets.VPS_APP_PATH }} && ./bin/activate_laravel_release.sh $RELEASE"

The activation script on the VPS

An example /var/www/myapp/bin/activate_laravel_release.sh:

#!/usr/bin/env bash
set -euo pipefail

APP_PATH=/var/www/myapp
RELEASE=$1

cd "$APP_PATH"

# Link shared resources
ln -sfn "$APP_PATH/shared/.env" "releases/$RELEASE/.env"
rm -rf "releases/$RELEASE/storage"
ln -sfn "$APP_PATH/shared/storage" "releases/$RELEASE/storage"

# Run migrations (ensure they are backwards compatible)
cd "releases/$RELEASE"
php artisan migrate --force
php artisan config:cache
php artisan route:cache
php artisan view:cache

# Atomic switch
ln -sfn "$APP_PATH/releases/$RELEASE" "$APP_PATH/current"

# Reload queue workers
sudo systemctl restart laravel-queue.service

This script assumes your migrations are safe to run while the old release is still serving traffic. For complex schema changes (dropping columns, renaming fields), you should follow our guide to Zero‑Downtime MySQL schema migrations to avoid locking tables during deploys.

Zero‑Downtime Workflow for Node.js Apps

Deployment flow for Node.js APIs and SPAs

For Node.js, the structure is very similar but the runtime behaviour is different:

  1. GitHub Actions installs Node.js and dependencies with npm ci or yarn install --frozen-lockfile.
  2. It builds production bundles (npm run build).
  3. The built app is packaged and uploaded to a timestamped release directory on the VPS.
  4. On the VPS, environment configuration and uploads are symlinked from shared.
  5. The Node.js process is reloaded via systemd or PM2 in a way that keeps active connections alive where possible.

GitHub Actions example for a Node.js app

Here is a Node‑focused workflow that still uses the same general pattern:

name: Deploy Node.js App to VPS

on:
  push:
    branches: [ "main" ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Use Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Build app
        run: npm run build

      - name: Prepare release package
        run: |
          RELEASE=$(date +"%Y-%m-%d_%H%M%S")
          echo "RELEASE=$RELEASE" >> $GITHUB_ENV
          tar czf nodeapp-$RELEASE.tgz . --exclude="node_modules" --exclude="tests"

      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.VPS_SSH_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H ${{ secrets.VPS_HOST }} >> ~/.ssh/known_hosts

      - name: Upload release to VPS
        run: |
          RELEASE=${{ env.RELEASE }}
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "mkdir -p ${{ secrets.VPS_APP_PATH }}/releases/$RELEASE"
          scp nodeapp-$RELEASE.tgz \
            ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }}:${{ secrets.VPS_APP_PATH }}/releases/$RELEASE/
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "cd ${{ secrets.VPS_APP_PATH }}/releases/$RELEASE && tar xzf nodeapp-$RELEASE.tgz && rm nodeapp-$RELEASE.tgz && npm ci --omit=dev"

      - name: Activate release
        run: |
          RELEASE=${{ env.RELEASE }}
          ssh ${{ secrets.VPS_USER }}@${{ secrets.VPS_HOST }} \
            "cd ${{ secrets.VPS_APP_PATH }} && ./bin/activate_node_release.sh $RELEASE"

Node.js activation script with systemd

A simple activate_node_release.sh could look like this:

#!/usr/bin/env bash
set -euo pipefail

APP_PATH=/var/www/myapp
RELEASE=$1

cd "$APP_PATH"

# Link shared env and uploads
ln -sfn "$APP_PATH/shared/.env" "releases/$RELEASE/.env"
rm -rf "releases/$RELEASE/uploads"
ln -sfn "$APP_PATH/shared/uploads" "releases/$RELEASE/uploads"

# Optional: run database migrations here (with the same care as for PHP)

# Atomic switch
ln -sfn "$APP_PATH/releases/$RELEASE" "$APP_PATH/current"

# Restart Node.js service (short blip, acceptable for many APIs)
sudo systemctl restart nodeapp.service

If you need true zero‑blip reloads for long‑lived WebSocket connections or real‑time dashboards, PM2’s reload mode or a rolling restart pattern can help. We cover these options in more depth in our Node.js production deployment playbook.
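
As a sketch of the PM2 variant, assuming the app was originally started with PM2 from the current symlink and registered under the name nodeapp (both assumptions), the last step of activate_node_release.sh becomes a reload instead of a systemd restart:

# one-time initial start, from /var/www/myapp:
#   pm2 start current/server.js --name nodeapp
#   pm2 save

# in activate_node_release.sh, replacing systemctl restart:
pm2 reload nodeapp --update-env

Keep in mind that pm2 reload only overlaps old and new workers when the app runs in cluster mode; in the default fork mode it behaves like a quick restart.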

Handling Database and Breaking Changes Safely

Backward‑compatible database migrations

Zero‑downtime deployments are only truly zero‑downtime if your database schema changes do not break the currently running code. Some practical rules:

  • Add, do not remove in the first step: add new columns or tables; do not drop old ones yet.
  • Deploy in two phases: first deploy code that works with both old and new schema, then clean up in a later deploy.
  • Avoid long‑running locks: for huge tables, use online migration tools and carefully planned indexes.

Our article on Zero‑Downtime MySQL schema migrations shows how to use tools like gh‑ost or pt-online-schema-change to avoid blocking production traffic while tables are altered.
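
Purely as an illustration (credentials, table name and the ALTER statement are placeholders, and the full set of flags you need depends on your replication topology, so check the gh-ost documentation), such a run could look roughly like this:

gh-ost \
  --user="app" --password="secret" --host=127.0.0.1 \
  --database="myapp" --table="orders" \
  --alter="ADD COLUMN shipped_at DATETIME NULL" \
  --allow-on-master \
  --execute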

Feature flags and configuration changes

For larger teams or high‑risk changes, you can decouple deploys from feature releases:

  • Introduce feature flags (in config or database) so new code paths can be turned on gradually after a stable deploy.
  • Keep configuration backward‑compatible: avoid renaming keys in .env files during the same deploy; instead, read both old and new names temporarily.
  • Version your APIs: if you must break clients, serve /v1 and /v2 concurrently for a while.

This approach reduces pressure on each individual deployment. GitHub Actions only has to ship code; product decisions about when to enable changes can happen later.

Observability, Rollbacks and Hardening the Pipeline

Monitoring and alerts around deployments

Once you automate deployments, visibility becomes even more important. At minimum, you should monitor:

  • Uptime and HTTP status codes (spikes in 5xx after a deploy are a signal to roll back).
  • CPU, RAM and disk usage on your VPS.
  • Application logs for uncaught exceptions or connection errors.

If you want a structured approach, see our guide on VPS monitoring and alerts with Prometheus, Grafana and Uptime Kuma, which fits nicely next to a GitHub Actions deployment pipeline.
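
A cheap complement to dashboards is a smoke test that runs right after the symlink switch, either as a final workflow step or at the end of the activation script; the URL and retry counts below are placeholders:

# fail the deployment if the app does not answer with HTTP 200 shortly after activation
for attempt in 1 2 3 4 5; do
  status=$(curl -s -o /dev/null -w "%{http_code}" https://www.example.com/health)
  if [ "$status" = "200" ]; then
    echo "Smoke test passed"
    exit 0
  fi
  sleep 3
done
echo "Smoke test failed, last status: $status" >&2
exit 1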

Designing a fast rollback

The same symlink pattern that enables zero‑downtime deploys also gives you instant rollbacks. Because you keep several older releases in releases/, a rollback script can simply point current back to the previous one:

#!/usr/bin/env bash
set -euo pipefail

APP_PATH=/var/www/myapp

cd "$APP_PATH/releases"

# list releases sorted by name (timestamp) and get the two newest
LATEST=$(ls -1 | sort | tail -n 1)
PREVIOUS=$(ls -1 | sort | tail -n 2 | head -n 1)

ln -sfn "$APP_PATH/releases/$PREVIOUS" "$APP_PATH/current"

# restart services if needed
sudo systemctl restart laravel-queue.service || true
sudo systemctl restart nodeapp.service || true

echo "Rolled back from $LATEST to $PREVIOUS"

Because you are not deleting anything during deployment, rollbacks are just another symlink switch. This is a huge operational advantage over in‑place updates.
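
The only housekeeping left is pruning old releases so the disk does not slowly fill up. A minimal sketch, assuming GNU coreutils (standard on most Linux distributions) and that you want to keep the five newest releases:

cd /var/www/myapp/releases
ls -1 | sort | head -n -5 | xargs -r rm -rf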

Security and reliability considerations

To keep your GitHub Actions to VPS pipeline safe and reliable:

  • Use deploy‑only SSH keys with no interactive shell and limited sudo permissions.
  • Rotate keys periodically and update GitHub Secrets.
  • Run tests (unit, integration, even smoke tests) before the deploy step; fail fast if something is wrong.
  • Start with deploying to a staging VPS before production, using the same workflow but different secrets and host.
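
For the first point, authorized_keys supports per‑key options; one possible set of restrictions for the GitHub Actions key looks like this (the key material is a placeholder, and the sudoers drop‑in from the VPS preparation section already limits which systemctl commands the deploy user may run):

no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3NzaC1... github-actions-deploy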

If you are interested in alternative deployment mechanisms (cPanel Git integration, Plesk, bare VPS), we cover them more broadly in our guide to Git deployment workflows on cPanel, Plesk and VPS.

Bringing It All Together on Your dchost VPS

Zero‑downtime deployments to a VPS with GitHub Actions are not reserved for huge teams or complex container setups. With a straightforward folder layout (releases, shared, current), a couple of small shell scripts and a GitHub Actions workflow, you can give your PHP and Node.js apps the same predictable, reversible deployment experience as much larger platforms. Your dchost VPS becomes a stable, scriptable target: Nginx stays up, PHP‑FPM or Node.js run under systemd, and every push to your main branch can safely roll out a new release without disturbing users.

From here, you can extend the pipeline with staging environments, canary rollouts, database migration automation and richer monitoring dashboards. We already use these patterns daily across many customer projects, and they scale well from a single small VPS up to more complex multi‑server setups. If you are running your applications on dchost.com VPS, dedicated server or colocation infrastructure, you have all the control you need to implement this workflow today. Start by setting up the directory structure and activation scripts on your server, then wire in a simple GitHub Actions workflow. Once your first zero‑downtime deployment lands smoothly, you will not want to go back to manual uploads again.

Frequently Asked Questions

Do I need Docker or Kubernetes to achieve zero‑downtime deployments on a VPS?

No, you do not need Docker or Kubernetes to achieve zero‑downtime deployments on a VPS. The core idea is to avoid modifying live code in place and instead deploy to versioned release directories, then switch a symlink like current to the new release atomically. Combined with Nginx, PHP‑FPM and systemd (or PM2 for Node.js), this gives you a robust deployment story without introducing containers. Docker and Kubernetes can add more isolation and scaling options, but for many small to medium PHP and Node.js projects on a single VPS, a GitHub Actions + rsync/symlink approach is simpler, cheaper and easier to maintain.

How should I store my VPS credentials and SSH keys for GitHub Actions?

Store all sensitive data as encrypted GitHub Secrets instead of committing them to your repository. At minimum, define secrets for your VPS host, deploy user, SSH private key and application path. In your workflow, write the private key to ~/.ssh/id_rsa with strict 600 permissions and add the server to known_hosts using ssh-keyscan. Avoid echoing secrets in logs, and never print the contents of your private key or .env files. For higher security, you can also create a deploy-only SSH key restricted to specific commands via authorized_keys options and rotate this key regularly, updating the corresponding GitHub Secret each time.

Can I use the same GitHub Actions workflow for both staging and production?

Yes, you can reuse the same workflow file for staging and production by parameterising it with different secrets and branch filters. A common pattern is to trigger staging deploys on pushes to a develop or staging branch, using VPS_HOST_STAGING and VPS_APP_PATH_STAGING secrets, while production deploys trigger on main and use VPS_HOST_PROD and VPS_APP_PATH_PROD. Inside the workflow, you select which host and path to use based on the branch or use separate jobs for staging and production. This keeps your deployment logic consistent across environments while still enforcing clear separation of credentials and infrastructure.

How should I handle database migrations during zero‑downtime deployments?

Treat database migrations as part of your deployment design, not an afterthought. Aim for backward‑compatible changes: first add new columns or tables while leaving old ones in place, then update application code to use the new schema, and only in a later release remove deprecated fields. For large tables, avoid long‑running locking ALTERs that block traffic; instead, use online schema change tools and pre‑built indexes. Run migrations from your activation script on the VPS, after the new release is synced but before you switch the symlink. For more advanced MySQL strategies, see our dedicated guide on zero‑downtime schema migrations with gh-ost and pt-online-schema-change.

What happens to my site if a deployment fails halfway through?

If you follow a releases/current/shared pattern, a failed deployment usually does not affect users at all. The workflow creates a new directory under releases, copies files there and only at the very end updates the current symlink. If a step fails before that final switch, the old release remains active and users continue to see the previous version. You can fix the problem, push a new commit and let a fresh deployment run. It is also good practice to keep a rollback script on the VPS that points current back to the previous release and restarts services, so even if a bad release is activated you can revert quickly with a single command.