{"id":1483,"date":"2025-11-07T14:17:34","date_gmt":"2025-11-07T11:17:34","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/zero%e2%80%91downtime-ci-cd-to-a-vps-the-friendly-rsync-symlink-systemd-playbook-i-keep-reusing\/"},"modified":"2025-11-07T14:17:34","modified_gmt":"2025-11-07T11:17:34","slug":"zero%e2%80%91downtime-ci-cd-to-a-vps-the-friendly-rsync-symlink-systemd-playbook-i-keep-reusing","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/zero%e2%80%91downtime-ci-cd-to-a-vps-the-friendly-rsync-symlink-systemd-playbook-i-keep-reusing\/","title":{"rendered":"Zero\u2011Downtime CI\/CD to a VPS: The Friendly rsync + Symlink + systemd Playbook I Keep Reusing"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>So there I was on a Friday evening, staring at a progress bar that felt like it was holding my entire weekend hostage. You know that sinking feeling when a deploy hits a snag and the site returns a gateway timeout? I\u2019ve been there more times than I\u2019d like to admit. The fix that finally calmed my deployments wasn\u2019t a shiny new tool or a fancy platform. It was a humble trio working together: rsync to move files efficiently, symlinked releases to swap versions atomically, and systemd to keep my app neatly supervised. Tie that into GitHub Actions or GitLab CI, and you\u2019ve got a zero\u2011downtime pipeline that just\u2026 works.<\/p>\n<p>Ever had that moment when you hit \u201cdeploy\u201d and immediately watch your uptime graph dive? I certainly have, and it\u2019s what nudged me into building a process that\u2019s both simple and forgiving. In this guide, we\u2019re going to set up a zero\u2011downtime CI\/CD flow to a <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a> using GitHub Actions or GitLab CI, rsync for file sync, symlinked releases for atomic switches, and systemd services for process management. I\u2019ll share the exact structure I use, the scripts that make it tick, and the little safety checks that keep things calm. 
We\u2019ll also talk about rollbacks (because we\u2019re not superheroes) and the gotchas that inevitably show up.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#The_Mental_Model_Why_ZeroDowntime_Works_Like_a_Costume_Change\"><span class=\"toc_number toc_depth_1\">1<\/span> The Mental Model: Why Zero\u2011Downtime Works Like a Costume Change<\/a><\/li><li><a href=\"#Preparing_the_VPS_One_Calm_Home_for_All_Your_Releases\"><span class=\"toc_number toc_depth_1\">2<\/span> Preparing the VPS: One Calm Home for All Your Releases<\/a><ul><li><a href=\"#Create_the_structure_and_a_deploy_user\"><span class=\"toc_number toc_depth_2\">2.1<\/span> Create the structure and a deploy user<\/a><\/li><li><a href=\"#Systemd_service_the_quiet_supervisor\"><span class=\"toc_number toc_depth_2\">2.2<\/span> Systemd service: the quiet supervisor<\/a><\/li><li><a href=\"#Web_server_routing_to_the_current_release\"><span class=\"toc_number toc_depth_2\">2.3<\/span> Web server routing to the current release<\/a><\/li><\/ul><\/li><li><a href=\"#The_Release_Script_rsync_Build_Health_Check_Atomic_Switch\"><span class=\"toc_number toc_depth_1\">3<\/span> The Release Script: rsync, Build, Health Check, Atomic Switch<\/a><\/li><li><a href=\"#rsync_From_CI_Getting_Files_to_the_Server_Fast_and_Safely\"><span class=\"toc_number toc_depth_1\">4<\/span> rsync From CI: Getting Files to the Server Fast (and Safely)<\/a><\/li><li><a href=\"#GitHub_Actions_A_Friendly_Workflow_That_Just_Works\"><span class=\"toc_number toc_depth_1\">5<\/span> GitHub Actions: A Friendly Workflow That Just Works<\/a><\/li><li><a href=\"#GitLab_CI_A_Parallel_Path_With_Familiar_Moves\"><span class=\"toc_number toc_depth_1\">6<\/span> GitLab CI: A Parallel Path With Familiar Moves<\/a><\/li><li><a href=\"#Health_Checks_Migrations_and_the_If_Something_Feels_Risky_Button\"><span class=\"toc_number toc_depth_1\">7<\/span> Health Checks, Migrations, and the \u201cIf Something Feels Risky\u201d Button<\/a><\/li><li><a href=\"#Rollbacks_The_FiveMinute_Seatbelt\"><span class=\"toc_number toc_depth_1\">8<\/span> Rollbacks: The Five\u2011Minute Seatbelt<\/a><\/li><li><a href=\"#Permissions_Ownership_and_Other_Gotchas\"><span class=\"toc_number toc_depth_1\">9<\/span> Permissions, Ownership, and Other Gotchas<\/a><\/li><li><a href=\"#Security_and_Secrets_Keep_the_Keys_Where_They_Belong\"><span class=\"toc_number toc_depth_1\">10<\/span> Security and Secrets: Keep the Keys Where They Belong<\/a><\/li><li><a href=\"#Graceful_Reloads_Making_systemd_and_Your_App_Shake_Hands\"><span class=\"toc_number toc_depth_1\">11<\/span> Graceful Reloads: Making systemd and Your App Shake Hands<\/a><\/li><li><a href=\"#Observability_The_Canary_That_Sings_Before_Users_Do\"><span class=\"toc_number toc_depth_1\">12<\/span> Observability: The Canary That Sings Before Users Do<\/a><\/li><li><a href=\"#HTTP2_HTTP3_and_Other_PostDeploy_Polishing\"><span class=\"toc_number toc_depth_1\">13<\/span> HTTP\/2, HTTP\/3, and Other Post\u2011Deploy Polishing<\/a><\/li><li><a href=\"#Backups_and_Safety_Nets_Sleep_Better_Deploy_Happier\"><span class=\"toc_number toc_depth_1\">14<\/span> Backups and Safety Nets: Sleep Better, Deploy Happier<\/a><\/li><li><a href=\"#Extra_Notes_and_Little_Tricks_from_Real_Projects\"><span class=\"toc_number toc_depth_1\">15<\/span> Extra Notes and Little Tricks from Real Projects<\/a><\/li><li><a href=\"#The_Minimal_Repeatable_Checklist\"><span 
class=\"toc_number toc_depth_1\">16<\/span> The Minimal, Repeatable Checklist<\/a><\/li><li><a href=\"#WrapUp_A_Calm_Pipeline_Youll_Actually_Trust\"><span class=\"toc_number toc_depth_1\">17<\/span> Wrap\u2011Up: A Calm Pipeline You\u2019ll Actually Trust<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"The_Mental_Model_Why_ZeroDowntime_Works_Like_a_Costume_Change\">The Mental Model: Why Zero\u2011Downtime Works Like a Costume Change<\/span><\/h2>\n<p>Here\u2019s the thing: zero\u2011downtime releases aren\u2019t about making your code perfect; they\u2019re about changing versions in a way users don\u2019t notice. Think of a theater performance: the actors don\u2019t pause the play to change outfits. They step offstage, swap costumes, and walk back in like nothing happened. That\u2019s what symlinked releases give you.<\/p>\n<p>On your VPS, you\u2019ll keep multiple releases in a <strong>releases<\/strong> directory. Each release is a timestamped folder with the built code. There\u2019s also a <strong>shared<\/strong> directory for things that persist across releases\u2014uploads, environment files, caches. And then there\u2019s a <strong>current<\/strong> symlink, which points to whichever release is live. The magic move is swapping that symlink atomically once the new release is ready. It\u2019s a blink, not a pause.<\/p>\n<p>Why rsync? Because it\u2019s efficient, battle-tested, and humble. Rather than rebuilding everything on the server, we ship only the Changes That Matter\u2122 over SSH, and keep ownership and permissions tidy. And systemd? It\u2019s our stage manager. It supervises your app process, restarts it on failure, and lets us do graceful reloads when your app supports it. Together, these pieces let you deploy without users ever seeing a white screen or a 502.<\/p>\n<h2 id=\"section-2\"><span id=\"Preparing_the_VPS_One_Calm_Home_for_All_Your_Releases\">Preparing the VPS: One Calm Home for All Your Releases<\/span><\/h2>\n<h3><span id=\"Create_the_structure_and_a_deploy_user\">Create the structure and a deploy user<\/span><\/h3>\n<p>I like to keep things predictable. On the server, I\u2019ll create a dedicated user (say, <strong>deploy<\/strong>) with SSH access and a tidy folder structure:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo adduser --disabled-password --gecos &quot;&quot; deploy\nsudo mkdir -p \/var\/www\/myapp\/{releases,shared}\nsudo chown -R deploy:deploy \/var\/www\/myapp\n<\/code><\/pre>\n<p>Inside <strong>shared<\/strong>, you\u2019ll keep persistent things. For a PHP\/Laravel app, that might be <em>storage<\/em> and <em>.env<\/em>. For a Node app, maybe a <em>.env<\/em> and an <em>uploads<\/em> directory. The point is: your app can be swapped in and out while the stuff that should survive deploys stays put.<\/p>\n<h3><span id=\"Systemd_service_the_quiet_supervisor\">Systemd service: the quiet supervisor<\/span><\/h3>\n<p>Even if you\u2019re using Nginx or Caddy out front, let systemd manage your app layer. Here\u2019s a clean, generic service unit. 
Drop it in <strong>\/etc\/systemd\/system\/myapp.service<\/strong>, then customize the paths and Exec commands for your stack:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">[Unit]\nDescription=MyApp Service\nAfter=network.target\n\n[Service]\nType=simple\nUser=deploy\nGroup=deploy\nWorkingDirectory=\/var\/www\/myapp\/current\n# Example for Node (adjust for your stack):\nExecStart=\/usr\/bin\/node server.js\n# If your app supports graceful reload on SIGHUP, define ExecReload\nExecReload=\/bin\/kill -HUP $MAINPID\nRestart=always\nRestartSec=2\nEnvironmentFile=-\/var\/www\/myapp\/shared\/.env\n# Optional: Keep logs in journal or redirect via stdout\/stderr\nStandardOutput=journal\nStandardError=journal\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n<p>Enable it now so it comes up on boot; save the first <em>start<\/em> for after your first release lands, since the unit can\u2019t start while the <em>current<\/em> symlink doesn\u2019t exist yet:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo systemctl daemon-reload\nsudo systemctl enable myapp.service\n# once the first release is in place:\nsudo systemctl start myapp.service\n<\/code><\/pre>\n<p>If you\u2019re curious about unit options and reload behavior, the <a href=\"https:\/\/www.freedesktop.org\/software\/systemd\/man\/systemd.service.html\" rel=\"nofollow noopener\" target=\"_blank\">systemd service documentation<\/a> is an excellent reference and worth bookmarking.<\/p>\n<h3><span id=\"Web_server_routing_to_the_current_release\">Web server routing to the current release<\/span><\/h3>\n<p>Point your web server root (or upstream) to <strong>\/var\/www\/myapp\/current\/public<\/strong> for PHP\/Laravel or <strong>\/var\/www\/myapp\/current<\/strong> for Node or other frameworks. The idea is to keep the web server oblivious to deploys. It just follows the <em>current<\/em> symlink.<\/p>\n<h2 id=\"section-3\"><span id=\"The_Release_Script_rsync_Build_Health_Check_Atomic_Switch\">The Release Script: rsync, Build, Health Check, Atomic Switch<\/span><\/h2>\n<p>Let\u2019s talk about the heart of it all: the deploy script that runs on the server. CI will push files over SSH with rsync and then call this script remotely. 
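<\/p>\n<p>Concretely, \u201ccalling the script remotely\u201d just means piping it over SSH with the release timestamp in the environment. A minimal sketch (same placeholder host and paths as the rest of this post; both CI examples below do exactly this):<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># TIMESTAMP must match the release directory your CI job rsynced into\nssh deploy@your-server &quot;TIMESTAMP=$TIMESTAMP bash -s&quot; &lt; .\/scripts\/remote_deploy.sh\n<\/code><\/pre>\n<p>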
We\u2019ll assemble it piece by piece, but here\u2019s the big picture:<\/p>\n<ol>\n<li>Create a new timestamped release directory.<\/li>\n<li>rsync files into that directory (in this setup, CI rsyncs straight into the new release path before it calls the script).<\/li>\n<li>Link shared resources (like .env, uploads).<\/li>\n<li>Install dependencies and build (if applicable).<\/li>\n<li>Warm caches, run migrations safely (if you must).<\/li>\n<li>Health check the new release.<\/li>\n<li>Atomically switch the <em>current<\/em> symlink to the new release.<\/li>\n<li>Reload or restart the systemd service gracefully.<\/li>\n<li>Clean up old releases.<\/li>\n<\/ol>\n<p>Here\u2019s a practical script that I\u2019ve used\u2014and tweaked\u2014across many apps:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">#!\/usr\/bin\/env bash\nset -euo pipefail\n\nAPP_DIR=&quot;\/var\/www\/myapp&quot;\nRELEASES_DIR=&quot;$APP_DIR\/releases&quot;\nSHARED_DIR=&quot;$APP_DIR\/shared&quot;\n# Use the timestamp passed in from CI; fall back to a fresh one for manual runs\nTIMESTAMP=&quot;${TIMESTAMP:-$(date +%Y%m%d%H%M%S)}&quot;\nRELEASE_DIR=&quot;$RELEASES_DIR\/$TIMESTAMP&quot;\nCURRENT_LINK=&quot;$APP_DIR\/current&quot;\nKEEP_RELEASES=5\nSERVICE=&quot;myapp.service&quot;\n\nlog() { echo &quot;[deploy] $1&quot;; }\n\nlog &quot;Creating release directory: $RELEASE_DIR&quot;\nmkdir -p &quot;$RELEASE_DIR&quot;\n\nlog &quot;Linking shared resources&quot;\n# Example shared items (adjust for your stack)\nln -sfn &quot;$SHARED_DIR\/.env&quot; &quot;$RELEASE_DIR\/.env&quot;\nmkdir -p &quot;$SHARED_DIR\/uploads&quot;\nln -sfn &quot;$SHARED_DIR\/uploads&quot; &quot;$RELEASE_DIR\/uploads&quot;\n\n# If PHP\/Laravel:\n# mkdir -p &quot;$SHARED_DIR\/storage&quot; &amp;&amp; chown -R deploy:deploy &quot;$SHARED_DIR\/storage&quot;\n# ln -sfn &quot;$SHARED_DIR\/storage&quot; &quot;$RELEASE_DIR\/storage&quot;\n\nlog &quot;Installing dependencies (if needed)&quot;\n# Example for Node:\nif [ -f &quot;$RELEASE_DIR\/package.json&quot; ]; then\n  (cd &quot;$RELEASE_DIR&quot; &amp;&amp; npm ci --omit=dev)\n  (cd &quot;$RELEASE_DIR&quot; &amp;&amp; npm run build || true)\nfi\n\n# Example for PHP\/Laravel:\n# if [ -f &quot;$RELEASE_DIR\/composer.json&quot; ]; then\n#   (cd &quot;$RELEASE_DIR&quot; &amp;&amp; composer install --no-dev --optimize-autoloader)\n#   (cd &quot;$RELEASE_DIR&quot; &amp;&amp; php artisan config:cache &amp;&amp; php artisan route:cache || true)\n# fi\n\nlog &quot;Optional migrations&quot;\n# If you must migrate, keep them backward-compatible.\n# e.g., (cd &quot;$RELEASE_DIR&quot; &amp;&amp; php artisan migrate --force)\n\nlog &quot;Warming up the app (optional)&quot;\n# e.g., curl --fail -sS http:\/\/127.0.0.1:3000\/health || true\n\nlog &quot;Switching current symlink atomically&quot;\nln -sfn &quot;$RELEASE_DIR&quot; &quot;$CURRENT_LINK&quot;\n\nlog &quot;Reloading systemd service&quot;\nsudo systemctl reload &quot;$SERVICE&quot; || sudo systemctl restart &quot;$SERVICE&quot;\n\nlog &quot;Cleaning up old releases&quot;\nls -1dt &quot;$RELEASES_DIR&quot;\/* | tail -n +$((KEEP_RELEASES+1)) | xargs -r rm -rf --\n\nlog &quot;Deployment complete!&quot;\n<\/code><\/pre>\n<p>A few notes from the trenches:<\/p>\n<p>First, always assume migrations can hurt if they\u2019re not backward\u2011compatible. If you\u2019re adding columns or tables, life is easy. If you\u2019re dropping columns that the old code still expects, you\u2019re asking for trouble mid\u2011deploy. Feature flags and two\u2011step migrations are your friend.<\/p>\n<p>Second, the <strong>ln -sfn<\/strong> move is, for all practical purposes, atomic when the symlink stays on the same filesystem. 
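<\/p>\n<p>If you ever want to close even the tiny unlink\u2011and\u2011recreate window that <strong>ln -sfn<\/strong> leaves behind, the trick many deploy tools use is to build the new symlink under a temporary name and rename it into place, which is a single atomic operation. A small sketch using the same variables as the script above:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Create the link under a temporary name, then rename it over &quot;current&quot;.\n# rename() is atomic on the same filesystem, so readers never catch a missing link.\nln -sfn &quot;$RELEASE_DIR&quot; &quot;$APP_DIR\/current.tmp&quot;\nmv -T &quot;$APP_DIR\/current.tmp&quot; &quot;$CURRENT_LINK&quot;\n<\/code><\/pre>\n<p>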
That\u2019s why we keep everything inside <strong>\/var\/www\/myapp<\/strong>. No cross-device shenanigans.<\/p>\n<p>Third, <strong>reload vs restart<\/strong>: if your app supports a graceful reload on SIGHUP, use it. Otherwise, a quick restart right after switching the symlink is usually fine and shouldn\u2019t cause downtime when your web server keeps connections steady for a heartbeat.<\/p>\n<h2 id=\"section-4\"><span id=\"rsync_From_CI_Getting_Files_to_the_Server_Fast_and_Safely\">rsync From CI: Getting Files to the Server Fast (and Safely)<\/span><\/h2>\n<p>Now to the rsync part. On your CI runner, you\u2019ll checkout your code, build artifacts if needed, and then rsync to the <strong>new release path<\/strong> on the server. That directory isn\u2019t live yet (it\u2019s effectively a staging area until the symlink flips), so the remote deploy script can do the linking and switching afterwards. Here\u2019s a neat, minimal rsync step you can adapt:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">rsync -az --delete \\\n  -e &quot;ssh -o StrictHostKeyChecking=yes -p 22&quot; \\\n  --exclude &quot;.git&quot; \\\n  .\/ deploy@your-server:\/var\/www\/myapp\/releases\/$TIMESTAMP\/\n<\/code><\/pre>\n<p>Two notes I never skip: include <strong>--delete<\/strong> to avoid cruft building up, and set <strong>StrictHostKeyChecking<\/strong> to prevent man-in-the-middle surprises. You can pre\u2011seed known_hosts in CI using ssh-keyscan. It\u2019s an extra step, but one of those worth\u2011it steps.<\/p>\n<h2 id=\"section-5\"><span id=\"GitHub_Actions_A_Friendly_Workflow_That_Just_Works\">GitHub Actions: A Friendly Workflow That Just Works<\/span><\/h2>\n<p>GitHub Actions has become my go\u2011to for small to midsize teams. Secrets are easy to manage, and the YAML is readable. Here\u2019s a template that assumes you\u2019re shipping a Node app; adapt the build steps to your stack. 
It rsyncs the code, then calls the remote deploy script we wrote earlier.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">name: Deploy to VPS\n\non:\n  push:\n    branches: [ &quot;main&quot; ]\n\nconcurrency:\n  group: production-deploy\n  cancel-in-progress: true\n\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout\n        uses: actions\/checkout@v4\n\n      - name: Set release timestamp\n        id: set_ts\n        run: echo &quot;ts=$(date +%Y%m%d%H%M%S)&quot; &gt;&gt; &quot;$GITHUB_OUTPUT&quot;\n\n      - name: Install Node (if needed)\n        uses: actions\/setup-node@v4\n        with:\n          node-version: '20'\n\n      - name: Build artifacts\n        run: |\n          npm ci --omit=dev\n          npm run build\n\n      - name: Add server to known_hosts\n        run: |\n          mkdir -p ~\/.ssh\n          ssh-keyscan -p 22 your-server &gt;&gt; ~\/.ssh\/known_hosts\n\n      - name: Upload files with rsync\n        env:\n          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}\n        run: |\n          eval &quot;$(ssh-agent -s)&quot;\n          ssh-add - &lt;&lt;&lt; &quot;$SSH_PRIVATE_KEY&quot;\n          rsync -az --delete \\\n            -e &quot;ssh -o StrictHostKeyChecking=yes -p 22&quot; \\\n            --exclude &quot;.git&quot; \\\n            .\/ deploy@your-server:\/var\/www\/myapp\/releases\/${{ steps.set_ts.outputs.ts }}\/\n\n      - name: Run remote deploy script\n        env:\n          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}\n        run: |\n          eval &quot;$(ssh-agent -s)&quot;\n          ssh-add - &lt;&lt;&lt; &quot;$SSH_PRIVATE_KEY&quot;\n          ssh deploy@your-server &quot;TIMESTAMP=${{ steps.set_ts.outputs.ts }} bash -s&quot; &lt; .\/scripts\/remote_deploy.sh\n<\/code><\/pre>\n<p>I keep the <em>remote_deploy.sh<\/em> script in the repo under <strong>scripts\/<\/strong> so I can version it alongside the app. And yes, I\u2019ve made typos that forced an emergency edit mid\u2011deploy. Versioning the script saved me more than once.<\/p>\n<p>If you\u2019re new to Actions, the official <a href=\"https:\/\/docs.github.com\/actions\" rel=\"nofollow noopener\" target=\"_blank\">GitHub Actions documentation<\/a> is great for sanity checks when something looks odd in your logs.<\/p>\n<h2 id=\"section-6\"><span id=\"GitLab_CI_A_Parallel_Path_With_Familiar_Moves\">GitLab CI: A Parallel Path With Familiar Moves<\/span><\/h2>\n<p>GitLab CI has a similar rhythm. You define a deploy job that runs on pushes or tags, inject an SSH key, rsync the release, then call the remote script. 
Here\u2019s a simple .gitlab-ci.yml to get you moving:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">stages:\n  - build\n  - deploy\n\nvariables:\n  GIT_STRATEGY: fetch\n\nbuild:\n  stage: build\n  image: node:20-alpine\n  script:\n    - npm ci --omit=dev\n    - npm run build\n  artifacts:\n    paths:\n      - dist\/\n      - package.json\n      - package-lock.json\n\nproduction-deploy:\n  stage: deploy\n  image: alpine:latest\n  only:\n    - main\n  before_script:\n    - apk add --no-cache openssh-client rsync bash\n    - mkdir -p ~\/.ssh\n    - eval &quot;$(ssh-agent -s)&quot;\n    - echo &quot;$SSH_PRIVATE_KEY&quot; | tr -d '\\r' | ssh-add -\n    - ssh-keyscan -p 22 your-server &gt;&gt; ~\/.ssh\/known_hosts\n    - export TIMESTAMP=$(date +%Y%m%d%H%M%S)\n  script:\n    - rsync -az --delete -e &quot;ssh -p 22&quot; --exclude &quot;.git&quot; .\/ deploy@your-server:\/var\/www\/myapp\/releases\/$TIMESTAMP\/\n    - ssh deploy@your-server &quot;TIMESTAMP=$TIMESTAMP bash -s&quot; &lt; .\/scripts\/remote_deploy.sh\n<\/code><\/pre>\n<p>Secrets live in GitLab\u2019s CI\/CD variables. Keep your private key read\u2011only, scoped to the project, and preferably protected for main or tags. If you\u2019re coming from GitHub, the <a href=\"https:\/\/docs.gitlab.com\/ee\/ci\/\" rel=\"nofollow noopener\" target=\"_blank\">GitLab CI documentation<\/a> maps the pieces well.<\/p>\n<h2 id=\"section-7\"><span id=\"Health_Checks_Migrations_and_the_If_Something_Feels_Risky_Button\">Health Checks, Migrations, and the \u201cIf Something Feels Risky\u201d Button<\/span><\/h2>\n<p>Every zero\u2011downtime story has a chapter about health checks. Before switching the symlink, I like to curl a local endpoint that does a quick self\u2011assessment\u2014DB connection, cache access, maybe a trivial read\/write test depending on the app. If the health check fails, I abort the switch. Better to leave the old version running than to promote a half\u2011healthy build.<\/p>\n<p>For migrations, here\u2019s my no\u2011drama approach: make them backward\u2011compatible or defer them. Add columns and tables first; deploy code that can write to the new structure while still reading the old; then follow up with a second deploy that drops old columns. Yes, it\u2019s slower. But the number of 3 a.m. Slack pings it prevents? Worth it.<\/p>\n<p>If you\u2019re deploying a PHP\/Laravel app and want the whole production tune\u2011up playbook\u2014process managers, OPcache, Horizon and queues\u2014check out <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-prod-ortam-optimizasyonu-nasil-yapilir-php%E2%80%91fpm-opcache-octane-queue-horizon-ve-redisi-el-ele-calistirmak\/\">the Laravel production tune\u2011up I do on every server<\/a>. It pairs nicely with this release strategy, especially when you want queue workers to roll forward without dropping jobs.<\/p>\n<h2 id=\"section-8\"><span id=\"Rollbacks_The_FiveMinute_Seatbelt\">Rollbacks: The Five\u2011Minute Seatbelt<\/span><\/h2>\n<p>One of my clients once shipped a change that looked perfect in staging but went sideways in production due to a quirky data edge case. No shame in that\u2014it happens to all of us. The reason we didn\u2019t break a sweat was the rollback: we simply pointed <em>current<\/em> back to the previous release and reloaded. Done.<\/p>\n<p>I usually keep five releases on the server. 
Here\u2019s a tiny rollback utility I like to stash as <em>scripts\/rollback.sh<\/em>:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">#!\/usr\/bin\/env bash\nset -euo pipefail\nAPP_DIR=&quot;\/var\/www\/myapp&quot;\nRELEASES_DIR=&quot;$APP_DIR\/releases&quot;\nCURRENT_LINK=&quot;$APP_DIR\/current&quot;\nSERVICE=&quot;myapp.service&quot;\n\nreleases=( $(ls -1dt &quot;$RELEASES_DIR&quot;\/*) )\nif [ ${#releases[@]} -lt 2 ]; then\n  echo &quot;Not enough releases to roll back&quot;; exit 1\nfi\n\ncurrent_target=$(readlink -f &quot;$CURRENT_LINK&quot;)\nfor r in &quot;${releases[@]}&quot;; do\n  if [ &quot;$r&quot; != &quot;$current_target&quot; ]; then\n    echo &quot;Rolling back to: $r&quot;\n    ln -sfn &quot;$r&quot; &quot;$CURRENT_LINK&quot;\n    sudo systemctl reload &quot;$SERVICE&quot; || sudo systemctl restart &quot;$SERVICE&quot;\n    exit 0\n  fi\ndone\n\necho &quot;No previous release found to roll back to&quot;; exit 1\n<\/code><\/pre>\n<p>Rollbacks are invisible if you\u2019ve kept your DB migrations backward\u2011compatible. If you didn\u2019t, the rollback might not be happy. That\u2019s why I try hard to make schema changes tolerant for a release or two.<\/p>\n<h2 id=\"section-9\"><span id=\"Permissions_Ownership_and_Other_Gotchas\">Permissions, Ownership, and Other Gotchas<\/span><\/h2>\n<p>A classic \u201cwhy is this failing only on the server?\u201d moment often comes down to permissions. Keep ownership consistent\u2014typically <strong>deploy:deploy<\/strong>\u2014and be mindful when your web server user (often <em>www-data<\/em>) needs write access to uploaded files. I lean on the <strong>shared<\/strong> directory for anything that needs persistent writes and then symlink it into each release. The app itself, once built, can usually be read\u2011only.<\/p>\n<p>Another gotcha is file watchers and temporary files\u2014especially for Node or SPA builds. Exclude node_modules if you\u2019re building in CI and shipping a dist folder. If you\u2019re building on the server, cache dependencies in the shared directory and symlink them in to avoid reinstalling every time.<\/p>\n<p>Finally, be mindful of the web server config. If you point Nginx to <em>current\/public<\/em> for one app and <em>current<\/em> for another, hell hath no fury like a forgotten trailing slash. I\u2019ve learned to double\u2011check the server blocks before the first deploy.<\/p>\n<h2 id=\"section-10\"><span id=\"Security_and_Secrets_Keep_the_Keys_Where_They_Belong\">Security and Secrets: Keep the Keys Where They Belong<\/span><\/h2>\n<p>Security doesn\u2019t have to be complicated. A few guardrails go a long way:<\/p>\n<p>First, use a dedicated <strong>deploy<\/strong> user with limited privileges. Give it exactly what it needs\u2014nothing more. Second, keep the private key in CI secrets, and add the server to <em>known_hosts<\/em> in the pipeline before SSH. Third, store runtime secrets in <strong>shared\/.env<\/strong> on the server, not in the repo or CI logs. Your deploy pipeline should ship code, not secrets.<\/p>\n<p>If you want a friendly tour of adding a WAF layer and taming noisy bots while you\u2019re at it, I wrote about my approach in <a href=\"https:\/\/www.dchost.com\/blog\/en\/waf-ve-bot-korumasi-cloudflare-modsecurity-ve-fail2bani-ayni-masada-baristirmanin-sicacik-hikayesi\/\">the layered shield I trust with Cloudflare, ModSecurity, and Fail2ban<\/a>. 
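<\/p>\n<p>One concrete example of \u201cexactly what it needs\u201d: the release and rollback scripts above call <strong>sudo systemctl<\/strong>, so instead of handing <strong>deploy<\/strong> blanket sudo, I scope it to just those two commands. A minimal sketch for a sudoers drop\u2011in (the file name is arbitrary; adjust the unit name and the systemctl path to your distro):<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Save as \/etc\/sudoers.d\/deploy-myapp (edit with: sudo visudo -f \/etc\/sudoers.d\/deploy-myapp)\n# Let the deploy user reload\/restart only this one unit, with no password prompt.\ndeploy ALL=(root) NOPASSWD: \/usr\/bin\/systemctl reload myapp.service, \/usr\/bin\/systemctl restart myapp.service\n<\/code><\/pre>\n<p>And that WAF layer I mentioned?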
It pairs well once you\u2019ve got a calm deploy story.<\/p>\n<h2 id=\"section-11\"><span id=\"Graceful_Reloads_Making_systemd_and_Your_App_Shake_Hands\">Graceful Reloads: Making systemd and Your App Shake Hands<\/span><\/h2>\n<p>Most apps don\u2019t love being killed mid\u2011request. If yours can handle a graceful reload\u2014reloading config and code without dropping connections\u2014wire that up via <strong>ExecReload<\/strong> and a signal like SIGHUP. If not, do a quick restart right after switching the symlink and let the web server handle any lingering requests for a moment.<\/p>\n<p>On PHP\u2011FPM, <em>reload<\/em> is your friend. On Node, you might wrap the process with a manager that knows how to swap workers. For some stacks, a fast restart is effectively the same as a reload, especially when the app initializes quickly and your upstream (Nginx) is patient.<\/p>\n<h2 id=\"section-12\"><span id=\"Observability_The_Canary_That_Sings_Before_Users_Do\">Observability: The Canary That Sings Before Users Do<\/span><\/h2>\n<p>The first time I watched a deploy trigger a spike in error logs\u2014before users noticed\u2014I became a monitoring believer. After wiring up zero\u2011downtime deploys, make sure you can actually see what happens during and after. If you want a simple place to start, have a look at <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-uyari-nasil-kurulur-prometheus-grafana-ve-node-exporter-ile-sessiz-alarmlari-konusturmak\/\">the playbook I use to keep a VPS calm with Prometheus and Grafana<\/a> or the friendlier intro with Uptime Kuma in <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">VPS monitoring and alerts without tears<\/a>. A couple of smart alerts will tell you if error rates climb or latency creeps after a release.<\/p>\n<h2 id=\"section-13\"><span id=\"HTTP2_HTTP3_and_Other_PostDeploy_Polishing\">HTTP\/2, HTTP\/3, and Other Post\u2011Deploy Polishing<\/span><\/h2>\n<p>Once your pipeline is boring (in the best way), you can afford to tune the rest of the stack without risking drama. I like enabling HTTP\/2 and HTTP\/3 in front of apps because it helps with perceived speed and connection efficiency\u2014especially for asset\u2011heavy frontends. If you want the full walkthrough, I\u2019ve shared my step\u2011by\u2011step in <a href=\"https:\/\/www.dchost.com\/blog\/en\/nginx-ve-cloudflareda-http-2-ve-http-3-quic-nasil-etkinlestirilir-wordpress-icin-uctan-uca-kurulum-ve-test-rehberi\/\">the end\u2011to\u2011end playbook for enabling HTTP\/2 and HTTP\/3 with Nginx and Cloudflare<\/a>.<\/p>\n<h2 id=\"section-14\"><span id=\"Backups_and_Safety_Nets_Sleep_Better_Deploy_Happier\">Backups and Safety Nets: Sleep Better, Deploy Happier<\/span><\/h2>\n<p>When you\u2019re confident that you can roll forward or back and that your data is safe, deploys lose their sting. I always pair release automation with offsite backups. If your VPS vanished tomorrow, could you restore? If not, treat it as your next task. For a practical, low\u2011drama setup with versioning and encryption, take a look at <a href=\"https:\/\/www.dchost.com\/blog\/en\/restic-ve-borg-ile-s3-uyumlu-uzak-yedekleme-surumleme-sifreleme-ve-saklama-ne-zaman-nasil\/\">my friendly guide to offsite backups using Restic or Borg to S3\u2011compatible storage<\/a>. 
It\u2019s the quiet hero of many recoveries.<\/p>\n<h2 id=\"section-15\"><span id=\"Extra_Notes_and_Little_Tricks_from_Real_Projects\">Extra Notes and Little Tricks from Real Projects<\/span><\/h2>\n<p>Here are a few tidy habits that make life easier:<\/p>\n<p>Use <strong>concurrency<\/strong> in CI so only one deploy for a given environment runs at a time. It\u2019s a tiny line in YAML that prevents a surprising amount of chaos. Version your deploy scripts alongside the app to track which release used which logic. And keep your <strong>cleanup<\/strong> step ruthless\u2014five releases is usually plenty unless you do deep forensic debugging.<\/p>\n<p>If you\u2019re setting this up for Laravel specifically, you might enjoy my longer story about queues, Horizon, and rolling releases in <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-uygulamalarini-vpste-nasil-yayinlarim-nginx-php%E2%80%91fpm-horizon-ve-sifir-kesinti-dagitimin-sicacik-yol-haritasi\/\">the no\u2011drama Laravel on VPS playbook<\/a>. It shows how all these pieces line up for a smooth app lifecycle, not just code pushes.<\/p>\n<h2 id=\"section-16\"><span id=\"The_Minimal_Repeatable_Checklist\">The Minimal, Repeatable Checklist<\/span><\/h2>\n<p>Let\u2019s compress this into the mental checklist I run every time:<\/p>\n<p>First, the server has a <strong>deploy<\/strong> user and the <strong>\/var\/www\/myapp<\/strong> structure with <em>releases<\/em>, <em>shared<\/em>, and a <em>current<\/em> symlink. The web server points to <em>current<\/em>. A systemd unit is ready to reload or restart. Second, CI knows how to build and rsync to a timestamped release directory. Third, the remote script links shared files, installs dependencies, runs optional migrations, health checks, switches the symlink, reloads the service, and cleans up old releases. Fourth, monitoring watches for error spikes, and backups stand by if the unthinkable happens.<\/p>\n<p>Once you\u2019ve done this once or twice, it becomes muscle memory. And that\u2019s when deploys stop being a rollercoaster and become just another calm step in your day.<\/p>\n<h2 id=\"section-17\"><span id=\"WrapUp_A_Calm_Pipeline_Youll_Actually_Trust\">Wrap\u2011Up: A Calm Pipeline You\u2019ll Actually Trust<\/span><\/h2>\n<p>I still remember the first time I shipped a Friday evening fix without holding my breath. The logs were quiet, the status page stayed green, and my coffee was still warm when I closed the laptop. That\u2019s the feeling I want for you: a predictable pipeline that gets out of your way.<\/p>\n<p>Zero\u2011downtime CI\/CD to a VPS isn\u2019t rocket science. It\u2019s a pattern: rsync for speed, symlinked releases for atomic switches, and systemd for steady supervision. Whether you use GitHub Actions or GitLab CI, the flow barely changes. Ship the code, prepare the release offstage, health check it, then flip the symlink and reload. If something feels off, roll back in seconds and regroup.<\/p>\n<p>If you remember nothing else, remember this: favor small, backward\u2011compatible changes; keep secrets on the server; and invest in monitoring and backups before you need them. The rest is just muscle memory. Hope this was helpful! 
If you try this flow and get stuck, reach out\u2014I\u2019ve probably stumbled over the same rock and I\u2019m happy to help you step around it next time.<\/p>\n<hr>\n<p>Further reading that pairs well with this guide:<\/p>\n<ul>\n<li><a href=\"https:\/\/docs.github.com\/actions\" rel=\"nofollow noopener\" target=\"_blank\">GitHub Actions documentation<\/a> for workflow syntax and runners<\/li>\n<li><a href=\"https:\/\/docs.gitlab.com\/ee\/ci\/\" rel=\"nofollow noopener\" target=\"_blank\">GitLab CI documentation<\/a> for pipeline configuration and variables<\/li>\n<li><a href=\"https:\/\/www.freedesktop.org\/software\/systemd\/man\/systemd.service.html\" rel=\"nofollow noopener\" target=\"_blank\">systemd service reference<\/a> for graceful reloads and unit options<\/li>\n<\/ul>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>So there I was on a Friday evening, staring at a progress bar that felt like it was holding my entire weekend hostage. You know that sinking feeling when a deploy hits a snag and the site returns a gateway timeout? I\u2019ve been there more times than I\u2019d like to admit. The fix that finally [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1484,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1483","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1483","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1483"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1483\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1484"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1483"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1483"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1483"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}