{"id":1459,"date":"2025-11-06T23:13:13","date_gmt":"2025-11-06T20:13:13","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/my-no%e2%80%91drama-playbook-for-deploying-laravel-on-a-vps-nginx-php%e2%80%91fpm-queues-horizon-and-truly-zero%e2%80%91downtime-releases\/"},"modified":"2025-11-06T23:13:13","modified_gmt":"2025-11-06T20:13:13","slug":"my-no%e2%80%91drama-playbook-for-deploying-laravel-on-a-vps-nginx-php%e2%80%91fpm-queues-horizon-and-truly-zero%e2%80%91downtime-releases","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/my-no%e2%80%91drama-playbook-for-deploying-laravel-on-a-vps-nginx-php%e2%80%91fpm-queues-horizon-and-truly-zero%e2%80%91downtime-releases\/","title":{"rendered":"My No\u2011Drama Playbook for Deploying Laravel on a VPS: Nginx, PHP\u2011FPM, Queues\/Horizon and Truly Zero\u2011Downtime Releases"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>So there I was, sipping a late coffee while a client nervously asked, &#8220;Can we deploy without taking the site down?&#8221; If you\u2019ve ever watched a spinning loader during a deploy and prayed your users wouldn\u2019t notice, you\u2019re not alone. I\u2019ve been there. I\u2019ve pushed code and then stared at the terminal like it owed me money. Over time, I pieced together a reliable flow that keeps Laravel deployments smooth on a <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a>, even when traffic spikes at the worst possible moment. In this article, I want to walk you through that playbook: Nginx as the steady front door, PHP\u2011FPM doing the heavy lifting, Horizon keeping queues in line, and a zero\u2011downtime release strategy that feels like flipping a light switch.<\/p>\n<p>We\u2019ll go step by step, but not like a dry checklist. Think of it as a friend showing you what worked, what didn\u2019t, and the tiny details that quietly make everything robust. 
We\u2019ll set expectations, align the moving parts, and get to a place where your deploys feel boring \u2014 the best compliment a deployment can get.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#The_Mental_Model_How_a_Laravel_App_Lives_on_a_VPS\"><span class=\"toc_number toc_depth_1\">1<\/span> The Mental Model: How a Laravel App Lives on a VPS<\/a><\/li><li><a href=\"#Nginx_PHPFPM_The_Calm_Fast_Front_Door\"><span class=\"toc_number toc_depth_1\">2<\/span> Nginx + PHP\u2011FPM: The Calm, Fast Front Door<\/a><ul><li><a href=\"#Server_Block_I_Keep_Coming_Back_To\"><span class=\"toc_number toc_depth_2\">2.1<\/span> Server Block I Keep Coming Back To<\/a><\/li><li><a href=\"#PHPFPM_The_Kitchen_That_Never_Panics\"><span class=\"toc_number toc_depth_2\">2.2<\/span> PHP\u2011FPM: The Kitchen That Never Panics<\/a><\/li><\/ul><\/li><li><a href=\"#The_Shared_Folders_and_Permissions_That_Save_You_Headaches\"><span class=\"toc_number toc_depth_1\">3<\/span> The Shared Folders and Permissions That Save You Headaches<\/a><\/li><li><a href=\"#ZeroDowntime_Releases_The_Quiet_Switch\"><span class=\"toc_number toc_depth_1\">4<\/span> Zero\u2011Downtime Releases: The Quiet Switch<\/a><ul><li><a href=\"#The_Flow_I_Use_And_Why_Its_Calm\"><span class=\"toc_number toc_depth_2\">4.1<\/span> The Flow I Use (And Why It\u2019s Calm)<\/a><\/li><li><a href=\"#What_About_Env_Changes\"><span class=\"toc_number toc_depth_2\">4.2<\/span> What About Env Changes?<\/a><\/li><\/ul><\/li><li><a href=\"#Queues_and_Horizon_Keeping_Heavy_Lifting_Off_the_Front_Door\"><span class=\"toc_number toc_depth_1\">5<\/span> Queues and Horizon: Keeping Heavy Lifting Off the Front Door<\/a><ul><li><a href=\"#Systemd_Services_I_Trust\"><span class=\"toc_number toc_depth_2\">5.1<\/span> Systemd Services I Trust<\/a><\/li><li><a href=\"#ZeroDowntime_Deploys_With_Horizon_Running\"><span class=\"toc_number 
toc_depth_2\">5.2<\/span> Zero\u2011Downtime Deploys With Horizon Running<\/a><\/li><li><a href=\"#Retries_Failures_and_Preventable_Pain\"><span class=\"toc_number toc_depth_2\">5.3<\/span> Retries, Failures, and Preventable Pain<\/a><\/li><\/ul><\/li><li><a href=\"#Config_Caches_OPcache_and_a_Quick_Word_on_Octane\"><span class=\"toc_number toc_depth_1\">6<\/span> Config Caches, OPcache, and a Quick Word on Octane<\/a><\/li><li><a href=\"#Rolling_Back_Without_Breaking_a_Sweat\"><span class=\"toc_number toc_depth_1\">7<\/span> Rolling Back Without Breaking a Sweat<\/a><\/li><li><a href=\"#Logs_Metrics_and_Knowing_When_Somethings_Off\"><span class=\"toc_number toc_depth_1\">8<\/span> Logs, Metrics, and Knowing When Something\u2019s Off<\/a><\/li><li><a href=\"#Security_and_TLS_Without_the_Drama\"><span class=\"toc_number toc_depth_1\">9<\/span> Security and TLS Without the Drama<\/a><\/li><li><a href=\"#Backups_Snapshots_and_8220Oops8221_Protection\"><span class=\"toc_number toc_depth_1\">10<\/span> Backups, Snapshots, and &#8220;Oops&#8221; Protection<\/a><\/li><li><a href=\"#Putting_It_All_Together_A_Day_in_the_Life_of_a_Deploy\"><span class=\"toc_number toc_depth_1\">11<\/span> Putting It All Together: A Day in the Life of a Deploy<\/a><\/li><li><a href=\"#Troubleshooting_The_Little_Things_That_Bite\"><span class=\"toc_number toc_depth_1\">12<\/span> Troubleshooting: The Little Things That Bite<\/a><\/li><li><a href=\"#A_Quick_Note_on_Hardware_Sizing\"><span class=\"toc_number toc_depth_1\">13<\/span> A Quick Note on Hardware Sizing<\/a><\/li><li><a href=\"#Extras_That_Make_Life_Nicer\"><span class=\"toc_number toc_depth_1\">14<\/span> Extras That Make Life Nicer<\/a><\/li><li><a href=\"#Wrap-Up_The_Calm_After_the_Deploy\"><span class=\"toc_number toc_depth_1\">15<\/span> Wrap-Up: The Calm After the Deploy<\/a><ul><li><a href=\"#Further_Reading_and_Handy_Docs\"><span class=\"toc_number toc_depth_2\">15.1<\/span> Further Reading and Handy 
Docs<\/a><\/li><\/ul><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"The_Mental_Model_How_a_Laravel_App_Lives_on_a_VPS\">The Mental Model: How a Laravel App Lives on a VPS<\/span><\/h2>\n<p>Before touching configs, let\u2019s agree on a mental model. Your VPS is a small apartment building. Nginx sits at the lobby door, greeting every visitor and directing them to the right room. PHP\u2011FPM is upstairs in the kitchen, cooking every dynamic request to order. Horizon is the team in the back room handling background jobs so the people in the lobby aren\u2019t stuck waiting for their latte. And your deploy process is the quiet hallway swap that happens so fast the tenants don\u2019t notice the floors got polished.<\/p>\n<p>Here\u2019s the thing: once you think of it like that, you stop stuffing everything into one big process. Nginx serves static files quickly and hands dynamic requests to PHP\u2011FPM. PHP\u2011FPM pools let you tune capacity without touching the front door. Queues keep slow tasks out of the request lifecycle. And deployments become replacing a symlink, not overwriting files in place while people are walking through the door. We\u2019ll wire it all together gently, then tune it with a few production lessons I learned the hard way.<\/p>\n<h2 id=\"section-2\"><span id=\"Nginx_PHPFPM_The_Calm_Fast_Front_Door\">Nginx + PHP\u2011FPM: The Calm, Fast Front Door<\/span><\/h2>\n<p>When I set up Nginx for Laravel, my goal is simple: keep it predictable. I like clean server blocks, a clear document root, and a fastcgi pass that doesn\u2019t try to be clever. I also avoid spraying rewrite rules everywhere. Laravel already knows how to route \u2014 we just need to hand requests to <strong>index.php<\/strong> safely.<\/p>\n<h3><span id=\"Server_Block_I_Keep_Coming_Back_To\">Server Block I Keep Coming Back To<\/span><\/h3>\n<p>This server block does a few important things. 
It serves real files directly, hands everything else to PHP, and respects the path info without exposing it.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">server {\n    listen 80;\n    listen [::]:80;\n    server_name example.com www.example.com;\n\n    root \/var\/www\/myapp\/current\/public;\n    index index.php;\n\n    # If you terminate TLS here, upgrade to HTTPS and add HSTS in SSL block\n    # return 301 https:\/\/$host$request_uri;\n\n    location \/ {\n        try_files $uri $uri\/ \/index.php?$query_string;\n    }\n\n    location ~ \\.php$ {\n        include fastcgi_params;\n        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;\n        fastcgi_param DOCUMENT_ROOT $realpath_root;\n        fastcgi_pass unix:\/run\/php\/php8.2-fpm.sock; # match your PHP version\n        fastcgi_index index.php;\n        fastcgi_buffers 16 16k;\n        fastcgi_buffer_size 32k;\n    }\n\n    location ~* \\.(?:jpg|jpeg|gif|png|webp|svg|css|js|ico|woff2?)$ {\n        expires 30d;\n        access_log off;\n        add_header Cache-Control &quot;public, max-age=2592000&quot;;\n    }\n\n    client_max_body_size 20m; # tune for uploads\n\n    # Useful security headers; tune for your app\n    add_header X-Frame-Options &quot;SAMEORIGIN&quot; always;\n    add_header X-Content-Type-Options &quot;nosniff&quot; always;\n    add_header Referrer-Policy &quot;strict-origin-when-cross-origin&quot; always;\n}\n<\/code><\/pre>\n<p>Two subtle details: first, <strong>SCRIPT_FILENAME<\/strong> and <strong>DOCUMENT_ROOT<\/strong> use <em>$realpath_root<\/em>. That\u2019s a quiet win when you deploy with symlinks because it resolves to the actual release folder. Second, I keep the cache headers for static assets generous. Mix or Vite fingerprints your files, so long cache lives are your friend. (Note the escaped dots in the two regex locations \u2014 a bare <strong>.php$<\/strong> would match any character, not just a literal dot.)<\/p>\n<p>If you\u2019re going to terminate TLS on the server (many do), you\u2019ll want to harden that layer as well. 
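<\/p>
<p>As a rough sketch of that hardening (the certificate paths assume a certbot-style layout for example.com, so adjust them to your own setup), the TLS side of the server block might look like this:<\/p>

```nginx
# Hypothetical HTTPS counterpart to the port-80 block; paths are assumptions
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;

    # Only send HSTS once every host you serve is HTTPS-ready
    add_header Strict-Transport-Security "max-age=31536000" always;

    # ...then the same root, index, and location blocks as the port-80 server
}
```

<p>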
I\u2019ve shared a practical checklist of <a href=\"https:\/\/www.dchost.com\/blog\/en\/nginxte-tls-1-3-ocsp-stapling-ve-brotli-nasil-kurulur-hizli-ve-guvenli-httpsnin-sicacik-rehberi\/\">TLS 1.3, OCSP stapling and Brotli settings for Nginx I keep reusing<\/a>. It\u2019s the kind of thing you configure once and smile every time you run a test.<\/p>\n<h3><span id=\"PHPFPM_The_Kitchen_That_Never_Panics\">PHP\u2011FPM: The Kitchen That Never Panics<\/span><\/h3>\n<p>PHP\u2011FPM is where capacity planning quietly happens. In my experience, a single busy queue worker stuck on a heavy job can starve the pool if you share it with web requests. I like to separate pools for &#8220;web&#8221; and &#8220;workers&#8221; so the front door never gets blocked. Here\u2019s a simple web pool to get started:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">[www]\nuser = www-data\ngroup = www-data\nlisten = \/run\/php\/php8.2-fpm.sock\npm = ondemand\npm.max_children = 20\npm.process_idle_timeout = 10s\npm.max_requests = 1000\n\n; Friendly slowlog for surprises\nrequest_terminate_timeout = 60s\nrequest_slowlog_timeout = 3s\nslowlog = \/var\/log\/php8.2-fpm\/www-slow.log\n\n; OPcache is configured in php.ini\n<\/code><\/pre>\n<p>Sometimes I\u2019ll switch to <strong>pm=dynamic<\/strong> for long-lived workloads; sometimes I also cap <strong>pm.max_children<\/strong> tightly to protect memory. It\u2019s not a law; it\u2019s tuning. A quick sanity check is to match the number to what your RAM and PHP memory_limit can realistically support. 
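<\/p>
<p>To make that sanity check concrete, here\u2019s the back-of-envelope arithmetic I reach for. The RAM budget and per-worker size below are made-up example numbers; measure your own workers before trusting the result:<\/p>

```shell
#!/usr/bin/env bash
# Back-of-envelope pm.max_children sizing; the numbers are assumptions.
# Measure real worker size with: ps -o rss= -C php-fpm8.2  (RSS in KB)
ram_for_php_mb=3072   # RAM you can spare for PHP-FPM on this VPS
avg_worker_mb=96      # typical resident size of one busy worker
headroom_pct=20       # safety margin for spikes and OPcache growth

usable_mb=$(( ram_for_php_mb * (100 - headroom_pct) / 100 ))
max_children=$(( usable_mb / avg_worker_mb ))
echo "suggested pm.max_children: ${max_children}"
```

<p>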
If you want a deeper dive into how I think about pools, OPcache, and Redis for Laravel, I wrote <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-prod-ortam-optimizasyonu-nasil-yapilir-php%E2%80%91fpm-opcache-octane-queue-horizon-ve-redisi-el-ele-calistirmak\/\">the production tune-up I do on every Laravel server<\/a> that might be a handy side read.<\/p>\n<h2 id=\"section-3\"><span id=\"The_Shared_Folders_and_Permissions_That_Save_You_Headaches\">The Shared Folders and Permissions That Save You Headaches<\/span><\/h2>\n<p>Before we even deploy, set up the directory structure that makes zero-downtime feel easy. I like the classic releases\/current\/shared layout. It\u2019s a trick I picked up after one too many deploys that mixed runtime files with source code.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">\/var\/www\/myapp\n\u251c\u2500\u2500 current -&gt; \/var\/www\/myapp\/releases\/2024-11-05-121500\n\u251c\u2500\u2500 releases\n\u2502   \u251c\u2500\u2500 2024-11-05-121500\n\u2502   \u2514\u2500\u2500 2024-11-04-230501\n\u2514\u2500\u2500 shared\n    \u251c\u2500\u2500 storage\n    \u2514\u2500\u2500 .env\n<\/code><\/pre>\n<p>Laravel\u2019s <strong>storage<\/strong> and <strong>.env<\/strong> should live in <em>shared<\/em>, then symlinked into each release. That way you can deploy code without touching logs, caches, temporary files, or your environment configuration. It\u2019s mundane, but it\u2019s everything.<\/p>\n<p>For permissions, I keep it simple: the code belongs to a deploy user, and the web user (often www-data) has write access only to storage and bootstrap\/cache. 
I suppose you can script this from your deploy flow, but I like to keep a tiny helper to enforce state:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># grant web user group access to writable paths\nchgrp -R www-data storage bootstrap\/cache\nchmod -R ug+rwx storage bootstrap\/cache\n\n# ensure files aren\u2019t world writable by mistake\nchmod -R o-rwx storage bootstrap\/cache\n<\/code><\/pre>\n<p>It\u2019s boring, which is precisely why it\u2019s good. Clean boundaries keep deployments predictable.<\/p>\n<h2 id=\"section-4\"><span id=\"ZeroDowntime_Releases_The_Quiet_Switch\">Zero\u2011Downtime Releases: The Quiet Switch<\/span><\/h2>\n<p>Let me tell you about the first time I swapped a symlink for a Laravel app in production. It felt like magic. No reload banner, no gateway errors, just quiet success. The trick is to prep everything in a new release folder, verify it, and then atomically switch <strong>current<\/strong> to point at it. If something goes wrong, you point back. No drama.<\/p>\n<h3><span id=\"The_Flow_I_Use_And_Why_Its_Calm\">The Flow I Use (And Why It\u2019s Calm)<\/span><\/h3>\n<p>I keep a small deploy script that does the same thing every time: create a timestamped release directory, rsync the code, composer install with flags that respect production, link shared storage and env, build assets if needed, warm caches, run migrations safely, then atomically swap the symlink. 
Here\u2019s a friendly skeleton you can tailor:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">#!\/usr\/bin\/env bash\nset -euo pipefail\n\nAPP_DIR=\/var\/www\/myapp\nRELEASES=$APP_DIR\/releases\nSHARED=$APP_DIR\/shared\nNOW=$(date +&quot;%Y-%m-%d-%H%M%S&quot;)\nNEW_RELEASE=$RELEASES\/$NOW\n\nmkdir -p &quot;$NEW_RELEASE&quot;\n\n# Sync code (from CI workspace or git checkout)\nrsync -a --delete --exclude=node_modules --exclude=.git .\/ &quot;$NEW_RELEASE\/&quot;\n\n# Composer install in the release\ncd &quot;$NEW_RELEASE&quot;\ncomposer install --no-interaction --prefer-dist --optimize-autoloader --no-dev\n\n# Link shared files\nln -nfs &quot;$SHARED\/.env&quot; &quot;$NEW_RELEASE\/.env&quot;\nrm -rf &quot;$NEW_RELEASE\/storage&quot;\nln -nfs &quot;$SHARED\/storage&quot; &quot;$NEW_RELEASE\/storage&quot;\n\n# Build assets if you build on server (many prefer CI build+upload)\n# npm ci &amp;&amp; npm run build\n\n# Laravel cache warm-up\nphp artisan config:cache\nphp artisan route:cache\nphp artisan view:cache\n\n# Database migrations (safe mode)\nphp artisan migrate --force\n\n# Atomically swap current: make a fresh symlink, then rename over the old one\nln -sfn &quot;$NEW_RELEASE&quot; &quot;$APP_DIR\/current.tmp&quot;\nmv -Tf &quot;$APP_DIR\/current.tmp&quot; &quot;$APP_DIR\/current&quot;\n\n# Reload PHP-FPM to clear opcache if needed\nsudo systemctl reload php8.2-fpm\n\n# Optionally clean old releases, keep last 5\nls -1dt &quot;$RELEASES&quot;\/* | tail -n +6 | xargs -r rm -rf\n<\/code><\/pre>\n<p>The atomic symlink swap is the star of the show. It\u2019s nearly instantaneous, and Nginx doesn\u2019t need to reload to keep serving. (A plain <strong>ln -nfs<\/strong> onto <strong>current<\/strong> mostly works too, but it unlinks and relinks under the hood; the rename via <strong>mv -T<\/strong> is what makes the switch a single atomic step.) If you aggressively cache routes and config, reloading PHP\u2011FPM is sometimes handy to clear OPcache, but even that\u2019s often optional when you bump opcache settings thoughtfully.<\/p>\n<p>One small habit that pays off: run migrations with <strong>--force<\/strong>, but also design migrations to be safe for zero-downtime. 
That means adding columns before backfilling, using nullable defaults where needed, and avoiding destructive changes during traffic. If I must do something risky, I schedule it for a low-traffic window or wrap it in a feature flag.<\/p>\n<h3><span id=\"What_About_Env_Changes\">What About Env Changes?<\/span><\/h3>\n<p>I don\u2019t like baking env changes into the code. Treat <strong>.env<\/strong> as part of the server state. If a deploy depends on new env keys, I add them to shared first, deploy, and then confirm the app reads them. This avoids weird edge cases where config caching or env loading behaves differently than expected.<\/p>\n<h2 id=\"section-5\"><span id=\"Queues_and_Horizon_Keeping_Heavy_Lifting_Off_the_Front_Door\">Queues and Horizon: Keeping Heavy Lifting Off the Front Door<\/span><\/h2>\n<p>Laravel is fast when you keep web requests light. Anything slow goes into a queue: sending emails, generating reports, crunching PDFs, hitting slow external APIs. Horizon makes queues feel like a dashboard with brains. You can watch workers, retry failed jobs, and scale smoothly.<\/p>\n<h3><span id=\"Systemd_Services_I_Trust\">Systemd Services I Trust<\/span><\/h3>\n<p>While Supervisor works, I\u2019ve grown to prefer systemd on modern distros. It\u2019s built-in, simple to reason about, and works nicely with logs. 
I keep separate services for Horizon and for one-off queue workers when I need them.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># \/etc\/systemd\/system\/horizon.service\n[Unit]\nDescription=Laravel Horizon\nAfter=network.target\n\n[Service]\nUser=www-data\nGroup=www-data\nRestart=always\nExecStart=\/usr\/bin\/php \/var\/www\/myapp\/current\/artisan horizon\nExecStop=\/usr\/bin\/php \/var\/www\/myapp\/current\/artisan horizon:terminate\nWorkingDirectory=\/var\/www\/myapp\/current\nEnvironment=APP_ENV=production\nKillSignal=SIGTERM\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n<p>Start and enable it with:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo systemctl daemon-reload\nsudo systemctl enable --now horizon\n<\/code><\/pre>\n<p>If you prefer dedicated workers for specific queues or experiments, you can define a focused service:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># \/etc\/systemd\/system\/queue-default.service\n[Unit]\nDescription=Laravel Queue Worker (default)\nAfter=network.target\n\n[Service]\nUser=www-data\nGroup=www-data\nRestart=always\nExecStart=\/usr\/bin\/php \/var\/www\/myapp\/current\/artisan queue:work --queue=default --sleep=1 --tries=3 --max-time=3600\nWorkingDirectory=\/var\/www\/myapp\/current\nEnvironment=APP_ENV=production\nKillSignal=SIGTERM\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n<p>Horizon makes scaling and balancing across queues ridiculously pleasant. You define your queue names, map them into Horizon\u2019s configuration, and then let it orchestrate workers based on load. 
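<\/p>
<p>For a concrete picture of that mapping, here\u2019s a sketch of the relevant piece of <strong>config\/horizon.php<\/strong>; the supervisor names and queue names are hypothetical, but the keys are the ones Horizon reads:<\/p>

```php
// config/horizon.php (sketch; "supervisor-web" and the queue names are examples)
'environments' => [
    'production' => [
        'supervisor-web' => [
            'connection'   => 'redis',
            'queue'        => ['default'],
            'balance'      => 'auto',   // scale workers between min and max by load
            'minProcesses' => 1,
            'maxProcesses' => 6,
            'tries'        => 3,
            'timeout'      => 60,
        ],
        'supervisor-mail' => [
            'connection' => 'redis',
            'queue'      => ['emails'],
            'balance'    => 'simple',   // fixed worker count for steady traffic
            'processes'  => 2,
            'tries'      => 3,
        ],
    ],
],
```

<p>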
The moment I added Horizon to team workflows, troubleshooting went from guesswork to &#8220;oh, that job\u2019s stuck because the API is slow; let\u2019s retry with backoff.&#8221; If you\u2019re new to it, the <a href=\"https:\/\/laravel.com\/docs\/horizon\" rel=\"nofollow noopener\" target=\"_blank\">official Horizon docs<\/a> are a great walkthrough.<\/p>\n<h3><span id=\"ZeroDowntime_Deploys_With_Horizon_Running\">Zero\u2011Downtime Deploys With Horizon Running<\/span><\/h3>\n<p>Here\u2019s where a tiny bit of choreography matters. During deployment, you want to switch code <em>after<\/em> the new release is ready and <em>before<\/em> workers pick up jobs that depend on the new code. Horizon has a built-in stop signal that\u2019s perfect for this. My usual rhythm looks like this:<\/p>\n<p>First, pause consumption by gently asking Horizon to terminate. It will finish current jobs and exit. Then deploy. Then bring Horizon back up on the new code.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Before the symlink swap\nphp \/var\/www\/myapp\/current\/artisan horizon:pause || true\nphp \/var\/www\/myapp\/current\/artisan horizon:terminate || true\n\n# Run your deployment steps and symlink swap here\n\n# After the swap\nsudo systemctl restart horizon\n<\/code><\/pre>\n<p>If you have critical real-time queues, you can run a blue-green pattern for workers as well: start workers pointed at the new release, wait a beat, then stop the old ones. But honestly, with Horizon and short-running jobs, the simple pause-terminate-restart flow works beautifully.<\/p>\n<h3><span id=\"Retries_Failures_and_Preventable_Pain\">Retries, Failures, and Preventable Pain<\/span><\/h3>\n<p>I\u2019ve had my share of 3 a.m. alerts because a job kept retrying until the queue looked like a Jenga tower. 
A few small habits prevent that: time-box jobs with <strong>timeout<\/strong>, use sensible <strong>tries<\/strong> or backoff strategies, and add idempotency to jobs that hit external APIs. Laravel\u2019s unique jobs make it hard for duplicates to slip in, and releasing a job back onto the queue (<strong>$this-&gt;release()<\/strong>) on certain exceptions can be kinder than a hard fail. Most importantly, separate your queues by purpose. Don\u2019t let &#8220;emails&#8221; fight for space with &#8220;image-processing&#8221;. Give them names and tune their worker counts accordingly in Horizon.<\/p>\n<h2 id=\"section-6\"><span id=\"Config_Caches_OPcache_and_a_Quick_Word_on_Octane\">Config Caches, OPcache, and a Quick Word on Octane<\/span><\/h2>\n<p>On every deploy, I warm up configuration, routes, and views. It\u2019s quiet performance you can feel. OPcache then holds your PHP bytecode in memory so you\u2019re not re-parsing files on every request. Make sure OPcache memory size fits your codebase, and enable <strong>opcache.validate_timestamps=0<\/strong> if you only reload PHP\u2011FPM on deploys. It\u2019s a neat trick: treat PHP\u2019s lifecycle like a webserver, and use deploy-time reloads to tell it &#8220;new code is here.&#8221;<\/p>\n<p>Octane is incredible when you\u2019re going for raw throughput, but it changes the game: long-lived workers mean you must carefully handle in-memory state and bootstrapping. If you\u2019re not ready for that, stick with FPM first and build muscle memory. When you feel the need for more speed, move gracefully into Octane with a dry run and monitoring. I talk about how I weave these together in <a href=\"https:\/\/www.dchost.com\/blog\/en\/laravel-prod-ortam-optimizasyonu-nasil-yapilir-php%E2%80%91fpm-opcache-octane-queue-horizon-ve-redisi-el-ele-calistirmak\/\">my Laravel production tune-up checklist<\/a>. 
It\u2019s worth a skim before you flip the switch.<\/p>\n<h2 id=\"section-7\"><span id=\"Rolling_Back_Without_Breaking_a_Sweat\">Rolling Back Without Breaking a Sweat<\/span><\/h2>\n<p>Nobody loves rolling back, but a calm plan beats heroics. With release directories, rollback is simply pointing <strong>current<\/strong> to the previous release. If migrations changed the schema in a breaking way, that\u2019s where the real danger lives. I avoid destructive migrations during peak hours and, when needed, I ship reversible migrations with a tested <strong>down()<\/strong> path. If you must roll back both code and schema, do it in a maintenance window or in two stages: first make the schema backward-compatible, then deploy the old code. Think of schema like rails: the train follows whatever track is down.<\/p>\n<p>When I deploy high-risk features, I\u2019ll layer in a feature flag. That way I can disable the feature instantly while leaving the code in place. Flags are a pressure valve for your nerves.<\/p>\n<h2 id=\"section-8\"><span id=\"Logs_Metrics_and_Knowing_When_Somethings_Off\">Logs, Metrics, and Knowing When Something\u2019s Off<\/span><\/h2>\n<p>What\u2019s the point of a smooth deployment if you don\u2019t know when something quietly broke? At a minimum, aggregate logs and make sure you can filter by release. Laravel\u2019s contextual logging is a lifesaver here \u2014 attach the release id via a log channel or a global scope during bootstrap and you\u2019ll thank yourself later.<\/p>\n<p>On the metrics side, I love combining system-level monitoring with app-level signals. CPU, RAM, disk IO, and 5xx rates tell you when the server is unhappy. Horizon\u2019s dashboard tells you when jobs are piling up. I shared an easy starting path in <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">my guide to getting Prometheus, Grafana, and Uptime Kuma running<\/a>. 
Even a simple uptime and latency alert is worth its weight in sleep.<\/p>\n<h2 id=\"section-9\"><span id=\"Security_and_TLS_Without_the_Drama\">Security and TLS Without the Drama<\/span><\/h2>\n<p>It\u2019s tempting to postpone security until &#8220;later&#8221;. Don\u2019t. Make it part of day one. Keep your OS packages patched, lock down SSH with keys, and put a firewall in front of the world. On the web stack, tighten your TLS, add sane security headers, and think about rate limits at the edge for obvious brute-force points. If you\u2019re running a WAF, tune it to be helpful instead of noisy. I\u2019ve written about how I keep WAF rules calm and fast in <a href=\"https:\/\/www.dchost.com\/blog\/en\/modsecurity-ve-owasp-crs-ile-wafi-uysallastirmak-yanlis-pozitifleri-nasil-ehlilestirir-performansi-ne-zaman-ucururuz\/\">a practical ModSecurity + OWASP CRS tuning guide<\/a>.<\/p>\n<p>And because it\u2019s related to performance and trust, give your TLS config the same love as your deploy script. If you\u2019re curious, the <a href=\"https:\/\/nginx.org\/en\/docs\/\" rel=\"nofollow noopener\" target=\"_blank\">official Nginx documentation<\/a> is a friendly reference once you have a working baseline.<\/p>\n<h2 id=\"section-10\"><span id=\"Backups_Snapshots_and_8220Oops8221_Protection\">Backups, Snapshots, and &#8220;Oops&#8221; Protection<\/span><\/h2>\n<p>If you\u2019re not backing up, you\u2019re debugging your future self\u2019s bad day. I like a mix: database dumps with retention, app files in offsite storage, and occasional VPS-level snapshots for fast restores. Encryption and versioning keep me honest. Most teams don\u2019t need enterprise tooling here \u2014 a simple, reliable flow you test once a month beats fancy dashboards you never try. 
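<\/p>
<p>A minimal version of that flow can be two small shell functions; the database name, paths, and retention count below are assumptions, so adapt them to your own setup:<\/p>

```shell
#!/usr/bin/env bash
# Sketch of a simple dump-and-prune backup flow; names and paths are assumptions.
set -euo pipefail

backup_db() {   # dump the "myapp" database into directory $1
  local dir="$1"
  mkdir -p "$dir"
  mysqldump --single-transaction myapp | gzip > "$dir/myapp-$(date +%F-%H%M%S).sql.gz"
}

prune_backups() {   # keep only the newest $2 dumps in directory $1
  local dir="$1" keep="$2"
  ls -1t "$dir"/*.sql.gz 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm -f
}

# Typical nightly cron usage:
# backup_db /var/backups/myapp && prune_backups /var/backups/myapp 14
```

<p>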
If you want a friendly walkthrough, I put together <a href=\"https:\/\/www.dchost.com\/blog\/en\/restic-ve-borg-ile-s3-uyumlu-uzak-yedekleme-surumleme-sifreleme-ve-saklama-ne-zaman-nasil\/\">a guide to Restic\/Borg with S3-compatible storage<\/a> that covers versioning, encryption, and sensible retention without the drama.<\/p>\n<h2 id=\"section-11\"><span id=\"Putting_It_All_Together_A_Day_in_the_Life_of_a_Deploy\">Putting It All Together: A Day in the Life of a Deploy<\/span><\/h2>\n<p>Let\u2019s turn this into a quiet story. Your CI finishes tests and builds assets. It ships a tarball or git ref to the server. The deploy script creates a new release directory, installs composer dependencies, links the shared storage and env, warms caches, and runs migrations that were designed to be safe. Horizon gets a polite pause and terminate, then the symlink flips. Nginx keeps serving because it never lost its footing. If you reload PHP\u2011FPM, it\u2019s a blip nobody notices. Horizon comes back up and consumes new jobs with the new code. Monitoring shows a small dip, then a steady line. You get to log off at a normal hour.<\/p>\n<p>That\u2019s the whole point. You\u2019re not searching for the perfect tool as much as you\u2019re building a calm routine. Once you\u2019ve done it a couple of times, you\u2019ll wonder how you ever lived without release directories and a one-command symlink swap.<\/p>\n<h2 id=\"section-12\"><span id=\"Troubleshooting_The_Little_Things_That_Bite\">Troubleshooting: The Little Things That Bite<\/span><\/h2>\n<p>I\u2019ve learned to watch for a few quiet footguns. One, the Nginx <strong>try_files<\/strong> line must end with <strong>\/index.php?$query_string<\/strong> or you\u2019ll fight 404s for routes that should work. Two, if static assets aren\u2019t updating after deploys, your browser is probably doing its job too well. Mix\/Vite versioning fixes that with cache-busting file names. 
Three, if Horizon seems alive but isn\u2019t picking jobs, check the connection to your queue backend and verify the queue names match what your jobs are using. Four, if you see random 502s under load, check PHP\u2011FPM pool limits and your error logs for slow queries or stalled external calls.<\/p>\n<p>And finally, don\u2019t be shy about testing your whole deploy in a staging VPS. A dry run where you practice the symlink flip and verify the app boots is worth its weight in sanity. It\u2019s not glamorous, but neither is firefighting at midnight.<\/p>\n<h2 id=\"section-13\"><span id=\"A_Quick_Note_on_Hardware_Sizing\">A Quick Note on Hardware Sizing<\/span><\/h2>\n<p>People often ask, &#8220;How big should my VPS be?&#8221; The honest answer is &#8220;just big enough, and then an extra 20% for breathing room.&#8221; CPU for PHP\u2011FPM workers, RAM for cache and comfortable process counts, and fast NVMe for snappy IO. If you like thinking this through with real-world context, I wrote about <a href=\"https:\/\/www.dchost.com\/blog\/en\/woocommerce-laravel-ve-node-jsde-dogru-vps-kaynaklarini-nasil-secersin-cpu-ram-nvme-ve-bant-genisligi-rehberi\/\">how I choose VPS specs for Laravel, WooCommerce, and Node.js<\/a>. It covers how I read CPU, RAM, and storage signals so I don\u2019t pay for noise.<\/p>\n<h2 id=\"section-14\"><span id=\"Extras_That_Make_Life_Nicer\">Extras That Make Life Nicer<\/span><\/h2>\n<p>Once the basics are solid, you can layer in niceties. A health check endpoint that returns 200 when caches are warm and queues are flowing. Preloading classes in PHP 8.2 for faster cold starts. A release manifest in your logs so you can trace errors back to the exact build. And if you\u2019re behind a CDN, set cache-control and edge rules that actually match how your app behaves. 
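<\/p>
<p>That health check pairs nicely with the deploy script. Here\u2019s a small probe I might run right after the symlink swap; the <strong>\/up<\/strong> path, URL, and retry budget are assumptions:<\/p>

```shell
#!/usr/bin/env bash
# Post-deploy smoke probe; the URL, path, and retry budget are assumptions.
set -euo pipefail

check_health() {   # succeed only if $1 answers HTTP 200 within $2 attempts
  local url="$1" attempts="${2:-5}" code=000
  for _ in $(seq "$attempts"); do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url") || code=000
    [ "$code" = "200" ] && return 0
    sleep 2
  done
  echo "health check failed (last status: $code)" >&2
  return 1
}

# Typical use right after the symlink swap:
# check_health "https://example.com/up" || { echo "consider rolling back"; exit 1; }
```

<p>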
Even small wins add up when the foundation is strong.<\/p>\n<h2 id=\"section-15\"><span id=\"Wrap-Up_The_Calm_After_the_Deploy\">Wrap-Up: The Calm After the Deploy<\/span><\/h2>\n<p>I still remember the first time I watched a Laravel app switch releases mid-traffic and nobody noticed. Not the team, not the users, not the error tracker. It was quiet, which is the highest compliment in ops. If you wire up Nginx and PHP\u2011FPM cleanly, keep queues away from the front door with Horizon, and deploy using releases with an atomic symlink, you\u2019re already in the top tier of calm deployments. Add monitoring, backups, and a few guardrails around migrations, and you\u2019ve got a setup that ages well.<\/p>\n<p>Start simple, make it boring, and grow from there. If anything here sparked a question or you want me to share a deeper dive on any step, I\u2019m all ears. Hope this playbook helps your next deploy feel like a quiet victory. See you in the next post!<\/p>\n<h3><span id=\"Further_Reading_and_Handy_Docs\">Further Reading and Handy Docs<\/span><\/h3>\n<p>For reference and deeper details, the <a href=\"https:\/\/laravel.com\/docs\/deployment\" rel=\"nofollow noopener\" target=\"_blank\">Laravel deployment docs<\/a> are a great baseline, and the <a href=\"https:\/\/nginx.org\/en\/docs\/\" rel=\"nofollow noopener\" target=\"_blank\">Nginx documentation<\/a> is surprisingly readable once you\u2019re past your first server block. 
If TLS tuning is on your list, my <a href=\"https:\/\/www.dchost.com\/blog\/en\/nginxte-tls-1-3-ocsp-stapling-ve-brotli-nasil-kurulur-hizli-ve-guvenli-httpsnin-sicacik-rehberi\/\">Nginx TLS and Brotli guide<\/a> is a practical companion.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>So there I was, sipping a late coffee while a client nervously asked, &#8220;Can we deploy without taking the site down?&#8221; If you\u2019ve ever watched a spinning loader during a deploy and prayed your users wouldn\u2019t notice, you\u2019re not alone. I\u2019ve been there. I\u2019ve pushed code and then stared at the terminal like it owed [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1460,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1459","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1459","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1459"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1459\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1460"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1459"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1459"},{"taxonomy":"post_tag","embeddable
":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1459"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}