{"id":1519,"date":"2025-11-07T22:33:03","date_gmt":"2025-11-07T19:33:03","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/how-i-host-node-js-in-production-without-drama-pm2-systemd-nginx-ssl-and-zero%e2%80%91downtime-deploys\/"},"modified":"2025-11-07T22:33:03","modified_gmt":"2025-11-07T19:33:03","slug":"how-i-host-node-js-in-production-without-drama-pm2-systemd-nginx-ssl-and-zero%e2%80%91downtime-deploys","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/how-i-host-node-js-in-production-without-drama-pm2-systemd-nginx-ssl-and-zero%e2%80%91downtime-deploys\/","title":{"rendered":"How I Host Node.js in Production Without Drama: PM2\/Systemd, Nginx, SSL, and Zero\u2011Downtime Deploys"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>So there I was one Thursday night, sipping cold coffee and waiting for a Node.js app to \u201cjust restart\u201d after a deploy. You know that hold-your-breath moment when you hit restart, stare at the terminal, and silently promise the internet you\u2019ll never cowboy deploy again? Yeah, that one. The site blinked out for a few seconds, a customer pinged me on chat, and I felt that sting I\u2019ve felt a dozen times before: we could do better. That was the night I got serious about a no-drama production setup for Node\u2014PM2 or systemd to supervise the process, Nginx in front like a polite bouncer, TLS locked down, and a deploy routine that doesn\u2019t shrug and hope.<\/p>\n<p>Ever had that moment when the app works great on your laptop but collapses in the wild? Or you\u2019re stuck deciding between PM2 and systemd, doubting your Nginx config, or dreading the next deploy? In this friendly tour, I\u2019ll walk you through the exact mental model and toolkit I keep reusing. We\u2019ll set expectations, wire Nginx as a reverse proxy, grab clean SSL, and build a truly zero\u2011downtime release flow\u2014without turning your server into a science project. 
I\u2019ll share the small decisions that save weekends, the gotchas that bit me, and the little habits that make production feel boring in the best possible way.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#The_Mental_Model_App_Proxy_Pipeline\"><span class=\"toc_number toc_depth_1\">1<\/span> The Mental Model: App, Proxy, Pipeline<\/a><ul><li><a href=\"#Where_everything_lives_and_why_it_matters\"><span class=\"toc_number toc_depth_2\">1.1<\/span> Where everything lives (and why it matters)<\/a><\/li><li><a href=\"#Ports_sockets_and_the_trust_boundary\"><span class=\"toc_number toc_depth_2\">1.2<\/span> Ports, sockets, and the trust boundary<\/a><\/li><\/ul><\/li><li><a href=\"#PM2_vs_Systemd_Pick_Your_Style_Both_Work\"><span class=\"toc_number toc_depth_1\">2<\/span> PM2 vs Systemd: Pick Your Style (Both Work)<\/a><ul><li><a href=\"#When_PM2_shines\"><span class=\"toc_number toc_depth_2\">2.1<\/span> When PM2 shines<\/a><\/li><li><a href=\"#When_systemd_is_your_steady_friend\"><span class=\"toc_number toc_depth_2\">2.2<\/span> When systemd is your steady friend<\/a><\/li><li><a href=\"#Graceful_shutdown_the_one_habit_that_saves_you\"><span class=\"toc_number toc_depth_2\">2.3<\/span> Graceful shutdown: the one habit that saves you<\/a><\/li><li><a href=\"#PM2_quickstart_I_keep_coming_back_to\"><span class=\"toc_number toc_depth_2\">2.4<\/span> PM2 quickstart I keep coming back to<\/a><\/li><li><a href=\"#Systemd_unit_I_trust_as_a_baseline\"><span class=\"toc_number toc_depth_2\">2.5<\/span> Systemd unit I trust as a baseline<\/a><\/li><\/ul><\/li><li><a href=\"#Nginx_Reverse_Proxy_The_Quiet_Hero\"><span class=\"toc_number toc_depth_1\">3<\/span> Nginx Reverse Proxy: The Quiet Hero<\/a><ul><li><a href=\"#Why_a_reverse_proxy_makes_life_easier\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Why a reverse proxy makes life easier<\/a><\/li><li><a 
href=\"#A_production-ready_Nginx_snippet\"><span class=\"toc_number toc_depth_2\">3.2<\/span> A production-ready Nginx snippet<\/a><\/li><\/ul><\/li><li><a href=\"#Real_HTTPS_Lets_Encrypt_TLS_13_and_HSTS_without_the_Panic\"><span class=\"toc_number toc_depth_1\">4<\/span> Real HTTPS: Let\u2019s Encrypt, TLS 1.3, and HSTS without the Panic<\/a><ul><li><a href=\"#Getting_a_certificate_the_easy_way\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Getting a certificate the easy way<\/a><\/li><li><a href=\"#Polish_that_helps_in_production\"><span class=\"toc_number toc_depth_2\">4.2<\/span> Polish that helps in production<\/a><\/li><\/ul><\/li><li><a href=\"#ZeroDowntime_Deploys_Releases_Symlinks_and_Safe_Rollbacks\"><span class=\"toc_number toc_depth_1\">5<\/span> Zero\u2011Downtime Deploys: Releases, Symlinks, and Safe Rollbacks<\/a><ul><li><a href=\"#That_first_time_you_deploy_without_a_blip\"><span class=\"toc_number toc_depth_2\">5.1<\/span> That first time you deploy without a blip<\/a><\/li><li><a href=\"#The_release_layout_I_reuse_everywhere\"><span class=\"toc_number toc_depth_2\">5.2<\/span> The release layout I reuse everywhere<\/a><\/li><li><a href=\"#A_tiny_deploy_script_for_PM2\"><span class=\"toc_number toc_depth_2\">5.3<\/span> A tiny deploy script for PM2<\/a><\/li><li><a href=\"#Doing_it_with_systemd_the_calm_way\"><span class=\"toc_number toc_depth_2\">5.4<\/span> Doing it with systemd (the calm way)<\/a><\/li><li><a href=\"#Health_checks_and_rollbacks\"><span class=\"toc_number toc_depth_2\">5.5<\/span> Health checks and rollbacks<\/a><\/li><\/ul><\/li><li><a href=\"#Logs_Monitoring_and_Other_Quiet_Superpowers\"><span class=\"toc_number toc_depth_1\">6<\/span> Logs, Monitoring, and Other Quiet Superpowers<\/a><ul><li><a href=\"#Logs_you_actually_read\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Logs you actually read<\/a><\/li><li><a href=\"#Uptime_and_metrics_without_drama\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Uptime and 
metrics without drama<\/a><\/li><li><a href=\"#Security_and_the_little_locks_that_matter\"><span class=\"toc_number toc_depth_2\">6.3<\/span> Security and the little locks that matter<\/a><\/li><\/ul><\/li><li><a href=\"#Putting_It_All_Together_A_Calm_Runbook\"><span class=\"toc_number toc_depth_1\">7<\/span> Putting It All Together: A Calm Runbook<\/a><ul><li><a href=\"#A_day_in_the_life_of_your_production_stack\"><span class=\"toc_number toc_depth_2\">7.1<\/span> A day in the life of your production stack<\/a><\/li><li><a href=\"#Where_to_nudge_for_speed\"><span class=\"toc_number toc_depth_2\">7.2<\/span> Where to nudge for speed<\/a><\/li><\/ul><\/li><li><a href=\"#Small_Gotchas_I_Learned_the_Hard_Way\"><span class=\"toc_number toc_depth_1\">8<\/span> Small Gotchas I Learned the Hard Way<\/a><ul><li><a href=\"#Headers_and_real_client_IPs\"><span class=\"toc_number toc_depth_2\">8.1<\/span> Headers and real client IPs<\/a><\/li><li><a href=\"#WebSockets_and_the_secret_handshake\"><span class=\"toc_number toc_depth_2\">8.2<\/span> WebSockets and the secret handshake<\/a><\/li><li><a href=\"#Body_sizes_and_timeouts\"><span class=\"toc_number toc_depth_2\">8.3<\/span> Body sizes and timeouts<\/a><\/li><li><a href=\"#Graceful_shutdown_really_is_the_secret_sauce\"><span class=\"toc_number toc_depth_2\">8.4<\/span> Graceful shutdown really is the secret sauce<\/a><\/li><\/ul><\/li><li><a href=\"#WrapUp_A_Calm_Friendly_Production_Stack_Youll_Reuse\"><span class=\"toc_number toc_depth_1\">9<\/span> Wrap\u2011Up: A Calm, Friendly Production Stack You\u2019ll Reuse<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"The_Mental_Model_App_Proxy_Pipeline\">The Mental Model: App, Proxy, Pipeline<\/span><\/h2>\n<h3><span id=\"Where_everything_lives_and_why_it_matters\">Where everything lives (and why it matters)<\/span><\/h3>\n<p>When I think about a production setup, I picture three layers having a quiet conversation. 
The app is your Node process running on an internal port or a Unix socket\u2014simple, focused, and kept safe from the public internet. Then you have Nginx out front, speaking HTTP\/2 or HTTP\/3 to the world, terminating TLS, handling keep-alives, and passing requests along with the right headers. Finally, there\u2019s the deployment pipeline\u2014your releases, logs, secrets, and all the guardrails that make changes feel uneventful.<\/p>\n<p>It all clicks when you remember that Nginx is the gatekeeper. It knows how to do TLS well, it\u2019s comfortable proxying WebSockets, and it\u2019s happy to retry if a backend stumbles for a second. Your Node app, meanwhile, can concentrate on what it does best\u2014serving your business logic\u2014and let a process manager keep it alive. In practice that means either PM2 or systemd will watch the process, auto-start on boot, and make sure memory blips don\u2019t become outages.<\/p>\n<h3><span id=\"Ports_sockets_and_the_trust_boundary\">Ports, sockets, and the trust boundary<\/span><\/h3>\n<p>Here\u2019s the thing: exposing your Node app directly on the public internet is like leaving a half-closed front door. I\u2019ve done it in a pinch, but it never feels right. Instead, bind your app to 127.0.0.1:3000 or to a Unix socket file that only Nginx can access. Nginx then becomes the boundary where TLS happens, where client IPs are recorded correctly, and where you can add rate limiting or caching later without re-architecting your app. It\u2019s not glamorous, but it\u2019s the kind of foundation that saves you from late-night debugging.<\/p>\n<h2 id=\"section-2\"><span id=\"PM2_vs_Systemd_Pick_Your_Style_Both_Work\">PM2 vs Systemd: Pick Your Style (Both Work)<\/span><\/h2>\n<h3><span id=\"When_PM2_shines\">When PM2 shines<\/span><\/h3>\n<p>I reach for PM2 when I want a tiny bit of Node-flavored magic. 
It\u2019s dead simple to start, supports cluster mode out of the box, and has a reload command that actually replaces workers gracefully. If you\u2019ve ever tried to keep multiple Node processes in sync across deploys, you\u2019ll appreciate just typing <strong>pm2 reload<\/strong> and watching the handoff happen with no downtime. The <a href=\"https:\/\/pm2.keymetrics.io\/\" rel=\"nofollow noopener\" target=\"_blank\">PM2 documentation<\/a> is refreshingly practical, and for a lot of solo projects and small teams, PM2 covers 95% of what you need.<\/p>\n<h3><span id=\"When_systemd_is_your_steady_friend\">When systemd is your steady friend<\/span><\/h3>\n<p>On the other hand, systemd is already on your server, and it\u2019s battle-tested. I like systemd when I want fewer moving parts, tight integration with the OS, tidy logs, and straightforward resource controls. It\u2019s the boring choice, but boring is a compliment in production. The trade-off? You don\u2019t get PM2\u2019s cluster orchestration built-in. You can still do zero\u2011downtime deploys with systemd\u2014think blue\/green units or socket activation\u2014but it\u2019s a touch more DIY. If you\u2019re already running a platform with consistent systemd units, it\u2019s an elegant fit.<\/p>\n<h3><span id=\"Graceful_shutdown_the_one_habit_that_saves_you\">Graceful shutdown: the one habit that saves you<\/span><\/h3>\n<p>No matter which supervisor you choose, teach your app to leave the stage politely. That means catching <strong>SIGTERM<\/strong> and <strong>SIGINT<\/strong>, stopping new requests, and letting in-flight requests finish. Without this, restarts can murder active connections. 
With it, the difference is night and day.<\/p>\n<pre class=\"language-javascript line-numbers\"><code class=\"language-javascript\">const http = require('http');\nconst express = require('express');\nconst app = express();\n\n\/\/ Your routes here\napp.get('\/healthz', (req, res) =&gt; res.status(200).send('ok'));\n\nconst server = http.createServer(app);\nserver.listen(process.env.PORT || 3000, '127.0.0.1', () =&gt; {\n  console.log('Server listening');\n});\n\n\/\/ Graceful shutdown\nlet shuttingDown = false;\n\nfunction shutDown() {\n  if (shuttingDown) return;\n  shuttingDown = true;\n  console.log('Received signal, shutting down gracefully...');\n  server.close(err =&gt; {\n    if (err) {\n      console.error('Error during shutdown', err);\n      process.exit(1);\n    }\n    console.log('Closed out remaining connections');\n    process.exit(0);\n  });\n\n  \/\/ Optional hard timeout in case something hangs\n  setTimeout(() =&gt; process.exit(1), 10000).unref();\n}\n\nprocess.on('SIGTERM', shutDown);\nprocess.on('SIGINT', shutDown);\n<\/code><\/pre>\n<p>If you want the deeper why and how behind signals, the <a href=\"https:\/\/nodejs.org\/api\/process.html#signal-events\" rel=\"nofollow noopener\" target=\"_blank\">Node.js signal events<\/a> page is a nice quick read. The gist is: let the process manager signal the app, and let your app exit naturally once it\u2019s safe.<\/p>\n<h3><span id=\"PM2_quickstart_I_keep_coming_back_to\">PM2 quickstart I keep coming back to<\/span><\/h3>\n<p>Here\u2019s a tiny ecosystem file I reuse. 
It sets your app to cluster mode (using all available CPU cores), caps memory, and plays nicely with graceful reloads:<\/p>\n<pre class=\"language-javascript line-numbers\"><code class=\"language-javascript\">\/\/ ecosystem.config.js\nmodule.exports = {\n  apps: [\n    {\n      name: 'myapp',\n      script: '.\/server.js',\n      exec_mode: 'cluster',\n      instances: 'max',\n      env: {\n        NODE_ENV: 'production',\n        PORT: 3000\n      },\n      max_memory_restart: '500M',\n      listen_timeout: 8000,\n      kill_timeout: 5000\n    }\n  ]\n};\n<\/code><\/pre>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Start, save, and enable startup on boot\npm2 start ecosystem.config.js\npm2 save\npm2 startup\n\n# Later, deploy updates with zero downtime\npm2 reload myapp\n<\/code><\/pre>\n<p>In my experience, this fits small to medium apps beautifully. You get multi-core performance and a reload that swaps workers without dropping connections. It\u2019s the happy path for a lot of teams.<\/p>\n<h3><span id=\"Systemd_unit_I_trust_as_a_baseline\">Systemd unit I trust as a baseline<\/span><\/h3>\n<p>Prefer systemd? 
This unit covers the essentials: environment, working directory, restart policy, and lifecycle:<\/p>\n<pre class=\"language-ini line-numbers\"><code class=\"language-ini\"># \/etc\/systemd\/system\/myapp.service\n[Unit]\nDescription=My Node.js App\nAfter=network.target\n\n[Service]\nType=simple\nWorkingDirectory=\/var\/www\/myapp\/current\nExecStart=\/usr\/bin\/node server.js\nRestart=always\nRestartSec=2\nEnvironment=NODE_ENV=production\nEnvironment=PORT=3000\n# Optional: tune file descriptors\nLimitNOFILE=65535\n\n# Let Node handle graceful shutdown on SIGTERM\nKillSignal=SIGTERM\nTimeoutStopSec=15\n\n[Install]\nWantedBy=multi-user.target\n<\/code><\/pre>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">systemctl daemon-reload\nsystemctl enable --now myapp\n# A redeploy is then just:\nsystemctl restart myapp\n<\/code><\/pre>\n<p>With systemd, zero\u2011downtime depends on your strategy. A simple restart can be effectively seamless if your app is quick to boot and shuts down gracefully. If you need true never-a-blip deploys under heavy load, you can add a blue\/green pattern with two templated units, then switch traffic at Nginx. It\u2019s more steps, but it\u2019s rock solid once scripted.<\/p>\n<h2 id=\"section-3\"><span id=\"Nginx_Reverse_Proxy_The_Quiet_Hero\">Nginx Reverse Proxy: The Quiet Hero<\/span><\/h2>\n<h3><span id=\"Why_a_reverse_proxy_makes_life_easier\">Why a reverse proxy makes life easier<\/span><\/h3>\n<p>Think of Nginx as your venue security: checks IDs (TLS), keeps the line moving (keep-alive, HTTP\/2), and quietly handles weirdness (timeouts, buffering). Your Node app doesn\u2019t have to juggle TLS, static assets, and upstream headers all on its own. Nginx is light, reliable, and happily runs forever in the background.<\/p>\n<h3><span id=\"A_production-ready_Nginx_snippet\">A production-ready Nginx snippet<\/span><\/h3>\n<p>This is the configuration I\u2019d paste into a fresh server and sleep well. 
It handles WebSockets, preserves real client IPs, and sets sane timeouts. You\u2019ll bolt TLS onto this in the next section.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># \/etc\/nginx\/conf.d\/myapp.conf\n# Only send &quot;Connection: upgrade&quot; upstream when the client actually upgrades (WebSockets);\n# forcing it on every request would defeat the upstream keepalive below\nmap $http_upgrade $connection_upgrade {\n    default upgrade;\n    ''      close;\n}\n\nupstream myapp_upstream {\n    server 127.0.0.1:3000;\n    keepalive 64;\n}\n\nserver {\n    listen 80;\n    listen [::]:80;\n    server_name example.com www.example.com;\n\n    # ACME challenge for Let's Encrypt\n    location \/.well-known\/acme-challenge\/ {\n        root \/var\/www\/letsencrypt;\n    }\n\n    location \/ {\n        return 301 https:\/\/$host$request_uri;\n    }\n}\n\nserver {\n    listen 443 ssl http2;\n    listen [::]:443 ssl http2;\n    server_name example.com www.example.com;\n\n    # ssl_certificate and ssl_certificate_key will be added by Certbot (next section)\n\n    # If you serve static assets directly\n    location \/assets\/ {\n        alias \/var\/www\/myapp\/current\/public\/assets\/;\n        access_log off;\n        expires 7d;\n    }\n\n    location \/ {\n        proxy_pass http:\/\/myapp_upstream;\n        proxy_http_version 1.1;\n        proxy_set_header Host $host;\n        proxy_set_header X-Real-IP $remote_addr;\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n        proxy_set_header X-Forwarded-Proto $scheme;\n\n        # WebSocket support\n        proxy_set_header Upgrade $http_upgrade;\n        proxy_set_header Connection $connection_upgrade;\n\n        # Timeouts\n        proxy_connect_timeout 5s;\n        proxy_send_timeout 30s;\n        proxy_read_timeout 60s;\n        send_timeout 60s;\n\n        # Allow request bodies up to 20 MB (uploads)\n        client_max_body_size 20m;\n    }\n}\n<\/code><\/pre>\n<p>If you\u2019re keen to squeeze every drop of performance, enabling HTTP\/2 and even HTTP\/3 with QUIC is a treat. 
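<\/p>\n<p>Directive names have shifted across Nginx releases, so treat this as a sketch to verify against your build: on Nginx 1.25+ compiled with HTTP\/3 support, the 443 server block would gain something like the following (note that on newer builds the <strong>http2<\/strong> flag moves out of <strong>listen<\/strong> into its own directive).<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># Inside the existing 443 server block (requires an HTTP\/3-enabled Nginx build)\nlisten 443 quic reuseport;\nlisten 443 ssl;\nhttp2 on;\n\n# Tell browsers HTTP\/3 is available on this port\nadd_header Alt-Svc 'h3=&quot;:443&quot;; ma=86400';\n<\/code><\/pre>\n<p>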
I walked through that in more detail in <a href=\"https:\/\/www.dchost.com\/blog\/en\/nginx-ve-cloudflareda-http-2-ve-http-3-quic-nasil-etkinlestirilir-wordpress-icin-uctan-uca-kurulum-ve-test-rehberi\/\">the end-to-end guide to HTTP\/2 and HTTP\/3 on Nginx + Cloudflare<\/a>, and the same mindset maps nicely to Node backends.<\/p>\n<h2 id=\"section-4\"><span id=\"Real_HTTPS_Lets_Encrypt_TLS_13_and_HSTS_without_the_Panic\">Real HTTPS: Let\u2019s Encrypt, TLS 1.3, and HSTS without the Panic<\/span><\/h2>\n<h3><span id=\"Getting_a_certificate_the_easy_way\">Getting a certificate the easy way<\/span><\/h3>\n<p>I remember doing certs by hand years ago with OpenSSL commands sprawled across my notes. These days, Let\u2019s Encrypt and Certbot make it feel like autopilot. You point Certbot at Nginx, it validates via a quick HTTP challenge, and then it wires the certificates into your server blocks automatically. If you haven\u2019t tried it yet, the <a href=\"https:\/\/certbot.eff.org\/\" rel=\"nofollow noopener\" target=\"_blank\">Certbot guide<\/a> is blissfully straightforward.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># On Ubuntu\/Debian\napt-get update\napt-get install -y certbot python3-certbot-nginx\n\n# Obtain and install certs for your domain\ncertbot --nginx -d example.com -d www.example.com\n\n# Test renewal\ncertbot renew --dry-run\n<\/code><\/pre>\n<p>Once you\u2019ve got TLS in place, you can raise the bar with TLS 1.3, OCSP stapling, and HSTS. I won\u2019t drag you through cipher lists here, but if you want a friendly deep dive, I wrote up the exact playbook I reuse in <a href=\"https:\/\/www.dchost.com\/blog\/en\/tls-1-3-ve-modern-sifrelerin-sicacik-mutfagi-nginx-apachede-ocsp-stapling-hsts-preload-ve-pfs-nasil-kurulur\/\">TLS 1.3 Without Tears: OCSP Stapling, HSTS Preload, and PFS on Nginx\/Apache<\/a>. 
It\u2019s the kind of setup that keeps modern browsers happy and keeps you off the SSL error screens.<\/p>\n<h3><span id=\"Polish_that_helps_in_production\">Polish that helps in production<\/span><\/h3>\n<p>A couple small touches pay off over time. Redirect HTTP to HTTPS cleanly. Serve static assets with caching headers (just enough to be useful, not enough to trap stale files forever). If you\u2019re uploading bigger payloads\u2014images, CSVs\u2014nudge <strong>client_max_body_size<\/strong> to something comfortable. And always keep a health endpoint, like <strong>\/healthz<\/strong>, that returns quickly even under load. Nginx can bypass auth and route to it directly, which makes uptime checks painless.<\/p>\n<h2 id=\"section-5\"><span id=\"ZeroDowntime_Deploys_Releases_Symlinks_and_Safe_Rollbacks\">Zero\u2011Downtime Deploys: Releases, Symlinks, and Safe Rollbacks<\/span><\/h2>\n<h3><span id=\"That_first_time_you_deploy_without_a_blip\">That first time you deploy without a blip<\/span><\/h3>\n<p>One of my clients once DM\u2019d me a skeptical \u201cDid you deploy? I didn\u2019t feel anything.\u201d That\u2019s the level we\u2019re aiming for. The recipe that works for me is simple: each release gets its own folder, we point a <strong>current<\/strong> symlink at the active one, and the process manager reloads gracefully. If something smells off, flip the symlink back and reload again. 
No fishing in Git history at 2 a.m., no \u201cwhat build is on the server?\u201d mysteries.<\/p>\n<h3><span id=\"The_release_layout_I_reuse_everywhere\">The release layout I reuse everywhere<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">\/var\/www\/myapp\/\n  releases\/\n    2024-08-19-1500\/\n    2024-08-20-0910\/\n  current -&gt; \/var\/www\/myapp\/releases\/2024-08-20-0910\n  shared\/\n    .env\n    uploads\/\n<\/code><\/pre>\n<p>During deploy, rsync the new build into a fresh release folder, link <strong>shared<\/strong> paths like uploads and your <strong>.env<\/strong>, run any migrations, and only then switch <strong>current<\/strong> to the new release. That last step takes a blink. After the swap, reload the app. With PM2, that\u2019s a clean <strong>pm2 reload<\/strong>. With systemd, a restart paired with graceful shutdown is often near-seamless, especially if your app boots fast.<\/p>\n<h3><span id=\"A_tiny_deploy_script_for_PM2\">A tiny deploy script for PM2<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Assume you run this from your CI\/CD runner\nset -euo pipefail\n\nAPP=myapp\nROOT=\/var\/www\/$APP\nRELEASES=$ROOT\/releases\nRELEASE=$(date +%Y-%m-%d-%H%M%S)\nNEW=$RELEASES\/$RELEASE\n\nssh myserver &quot;mkdir -p $RELEASES $ROOT\/shared&quot;\nrsync -az --delete .\/dist\/ myserver:$NEW\/\nssh myserver &quot;ln -sfn $ROOT\/shared\/.env $NEW\/.env; ln -sfn $NEW $ROOT\/current; cd $ROOT\/current &amp;&amp; pm2 startOrReload ecosystem.config.js --only $APP&quot;\n<\/code><\/pre>\n<p>That last command is the magic: <strong>startOrReload<\/strong> means PM2 will reload workers if the app is already running, or start it if it\u2019s not, all without dropping traffic. 
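<\/p>\n<p>The rollback is the same trick in reverse. A hedged sketch\u2014it assumes the timestamped release layout above and that the previous release folder is still on disk:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Point 'current' at the second-newest release, then reload workers\nssh myserver 'PREV=$(ls -1 \/var\/www\/myapp\/releases | sort | tail -n 2 | head -n 1) &amp;&amp; ln -sfn \/var\/www\/myapp\/releases\/$PREV \/var\/www\/myapp\/current &amp;&amp; pm2 reload myapp'\n<\/code><\/pre>\n<p>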
If you want a fuller walkthrough, I wrote a detailed, <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a>-friendly recipe in <a href=\"https:\/\/www.dchost.com\/blog\/en\/vpse-sifir-kesinti-ci-cd-nasil-kurulur-rsync-sembolik-surumler-ve-systemd-ile-sicacik-bir-yolculuk\/\">my zero\u2011downtime CI\/CD guide with rsync, symlinks, and systemd<\/a>. The ideas carry over 1:1 to Node.js.<\/p>\n<h3><span id=\"Doing_it_with_systemd_the_calm_way\">Doing it with systemd (the calm way)<\/span><\/h3>\n<p>With systemd, you can keep a single unit and rely on your app\u2019s graceful shutdown, or go the extra mile with blue\/green units. The simple path looks like this: build release, flip symlink, <strong>systemctl restart myapp<\/strong>. If your app starts quickly and your Nginx proxy has short keepalive timeouts, most users won\u2019t notice a thing. If you need ironclad zero\u2011downtime under heavy load, spin up two units\u2014<strong>myapp@blue<\/strong> and <strong>myapp@green<\/strong>\u2014have Nginx point to both upstreams, warm the new one, then drain the old one, and finally stop it. It\u2019s extra ceremony, but entirely scriptable.<\/p>\n<h3><span id=\"Health_checks_and_rollbacks\">Health checks and rollbacks<\/span><\/h3>\n<p>Set up a health check endpoint that tells the truth\u2014ideally confirming your app can speak to its database and critical services. After a deploy, hit it a few times via Nginx, not just localhost. If anything feels off, flip the symlink back and reload. Rollbacks should be boring. When they are, you\u2019re free to be brave with small, frequent releases instead of fear-driven big bangs.<\/p>\n<p>If you\u2019re still mapping out your end-to-end dev-to-live motion, I shared my go-to routines in <a href=\"https:\/\/www.dchost.com\/blog\/en\/gelistirme-staging-canli-yolculugu-wordpress-ve-laravelde-sifir-kesinti-dagitim-nasil-gercekten-olur\/\">the no\u2011stress dev\u2013staging\u2013production workflow<\/a>. 
Different stack, same calm principles.<\/p>\n<h2 id=\"section-6\"><span id=\"Logs_Monitoring_and_Other_Quiet_Superpowers\">Logs, Monitoring, and Other Quiet Superpowers<\/span><\/h2>\n<h3><span id=\"Logs_you_actually_read\">Logs you actually read<\/span><\/h3>\n<p>PM2 writes its own logs; systemd streams to the journal. Either way, keep things tidy and searchable. Don\u2019t dump chatty debug logs into production unless you\u2019re in the middle of a known incident. I like JSON logs for services that funnel into a centralized tool, but plain text is fine if you\u2019re tailing locally. The point is consistency. Make sure your errors land where you\u2019ll actually see them, not just in a folder you swear you\u2019ll check later.<\/p>\n<h3><span id=\"Uptime_and_metrics_without_drama\">Uptime and metrics without drama<\/span><\/h3>\n<p>I\u2019m a big fan of pragmatic monitoring. A simple uptime checker hitting <strong>\/healthz<\/strong> every 30 seconds is better than a thousand unmonitored dashboards. When you\u2019re ready to grow up your observability stack, <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">this getting-started guide to Prometheus, Grafana, and Uptime Kuma<\/a> shows how I wire up alerts and graphs without turning it into a second job. Tie alerts to the same channels you actually read, and don\u2019t drown yourself in noise. Calm, useful monitoring beats aggressive, ignored monitoring every day of the week.<\/p>\n<h3><span id=\"Security_and_the_little_locks_that_matter\">Security and the little locks that matter<\/span><\/h3>\n<p>Don\u2019t forget the basics. Open only the ports you need (80\/443 for Nginx, maybe SSH on a custom port). Keep Node behind Nginx on localhost. Rotate secrets\u2014.env files belong in your <strong>shared<\/strong> folder, not baked into builds or Git. If your app processes uploads, scan and validate them. 
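<\/p>\n<p>On Ubuntu-style hosts, that port hygiene can be a handful of ufw rules\u2014adjust the SSH port to whatever you actually use, and double-check the rules before enabling so you don\u2019t lock yourself out:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Default-deny inbound, then open only what Nginx and SSH need\nufw default deny incoming\nufw default allow outgoing\nufw allow 22\/tcp    # or your custom SSH port\nufw allow 80\/tcp\nufw allow 443\/tcp\nufw enable\n<\/code><\/pre>\n<p>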
And of course, patch your system occasionally; boring maintenance is better than exciting incidents.<\/p>\n<h2 id=\"section-7\"><span id=\"Putting_It_All_Together_A_Calm_Runbook\">Putting It All Together: A Calm Runbook<\/span><\/h2>\n<h3><span id=\"A_day_in_the_life_of_your_production_stack\">A day in the life of your production stack<\/span><\/h3>\n<p>Let\u2019s walk through a normal day. Your app boots under PM2 or systemd, bound to 127.0.0.1:3000. Nginx sits in front, handling TLS and proxying requests with the right headers. A user lands on your homepage over HTTP\/2, Nginx keeps the connection warm, and your Node app does its thing, logging to a stream you\u2019ll actually read later. If a worker hiccups, PM2 restarts it; if the process crashes, systemd brings it back. You sleep.<\/p>\n<p>Later, a deploy comes along. CI builds into a fresh release folder, links shared files, flips the <strong>current<\/strong> symlink, and triggers a reload. PM2 rotates workers without dropping requests. Systemd restarts with a graceful shutdown hook, and Nginx politely retries. Your uptime alert sits there quietly, because there\u2019s nothing to report. Meanwhile, TLS certs renew themselves behind the scenes\u2014another job you\u2019re no longer doing at 1 a.m.<\/p>\n<p>If you want to go further on TLS polish, I\u2019ve got a handy checklist in <a href=\"https:\/\/www.dchost.com\/blog\/en\/nginxte-tls-1-3-ocsp-stapling-ve-brotli-nasil-kurulur-hizli-ve-guvenli-httpsnin-sicacik-rehberi\/\">my practical TLS 1.3 + Brotli tune\u2011up for Nginx<\/a>. It pairs perfectly with the Nginx block we sketched earlier.<\/p>\n<h3><span id=\"Where_to_nudge_for_speed\">Where to nudge for speed<\/span><\/h3>\n<p>You don\u2019t need to over-optimize from day one, but there are easy wins. Serve static files directly from Nginx. Cache unchanging assets for a few days. Put your health checks on a separate path that\u2019s fast by design. 
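<\/p>\n<p>In Nginx, that fast health path can be its own tiny location\u2014a small sketch reusing the upstream name from earlier, with logging switched off so uptime checkers don\u2019t flood your access log:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># Quiet, cheap endpoint for uptime checkers\nlocation = \/healthz {\n    proxy_pass http:\/\/myapp_upstream;\n    access_log off;\n}\n<\/code><\/pre>\n<p>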
If you rely on external APIs, set sensible timeouts and fallbacks so a slow vendor doesn\u2019t freeze your entire app. And keep an eye on memory\u2014restart policies are your safety net, not a symptom of failure.<\/p>\n<h2 id=\"section-8\"><span id=\"Small_Gotchas_I_Learned_the_Hard_Way\">Small Gotchas I Learned the Hard Way<\/span><\/h2>\n<h3><span id=\"Headers_and_real_client_IPs\">Headers and real client IPs<\/span><\/h3>\n<p>Make sure your app trusts Nginx as a proxy and reads the right headers. In Express, that\u2019s <strong>app.set('trust proxy', 1)<\/strong> when you\u2019re behind Nginx. Otherwise, you\u2019ll think every visitor is 127.0.0.1 and your rate limits or logs will be way off.<\/p>\n<h3><span id=\"WebSockets_and_the_secret_handshake\">WebSockets and the secret handshake<\/span><\/h3>\n<p>WebSockets need the <strong>Upgrade<\/strong> and <strong>Connection<\/strong> headers set. Forgetting those is a classic \u201cworks locally, dies in prod\u201d moment. The Nginx snippet above takes care of it.<\/p>\n<h3><span id=\"Body_sizes_and_timeouts\">Body sizes and timeouts<\/span><\/h3>\n<p>Uploads bigger than a few megabytes? Bump <strong>client_max_body_size<\/strong> on Nginx and make sure your Node parsing middleware isn\u2019t too strict either. Also, ensure your Node server has reasonable timeouts. Hanging sockets can look like random slowness for users.<\/p>\n<h3><span id=\"Graceful_shutdown_really_is_the_secret_sauce\">Graceful shutdown really is the secret sauce<\/span><\/h3>\n<p>I\u2019ll repeat this because it cured so many phantom bugs for me: listen for SIGTERM, stop accepting new connections, let in-flight requests finish, and exit. 
PM2 reloads and systemd restarts go from scary to boring the day you wire this in.<\/p>\n<h2 id=\"section-9\"><span id=\"WrapUp_A_Calm_Friendly_Production_Stack_Youll_Reuse\">Wrap\u2011Up: A Calm, Friendly Production Stack You\u2019ll Reuse<\/span><\/h2>\n<p>There\u2019s no one true way to host a Node app in production, but there is a way to make it feel calm. Let Nginx guard the door and speak TLS fluently. Let PM2 or systemd keep your process upright. Teach your app to leave the stage gracefully. Deploy with symlinked releases so rollbacks are a snap. And keep a humble health check that tells you, in plain language, that everything is OK.<\/p>\n<p>If you want to keep building from here, layer in monitoring and a few simple alerts. Make your TLS strong but not fussy. And don\u2019t be afraid to practice your deploy and rollback routine on a quiet afternoon; it\u2019s amazing how much confidence you get when you\u2019ve rehearsed. If you\u2019re curious about tightening the bolts further, check out <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-izleme-ve-alarm-kurulumu-prometheus-grafana-ve-uptime-kuma-ile-baslangic\/\">my beginner-friendly monitoring setup<\/a> and the earlier links on TLS and HTTP\/3. I hope this guide saves you from at least one midnight restart. If it did, I\u2019ll count that as a win. See you in the next post, and may your deploys be pleasantly boring.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>So there I was one Thursday night, sipping cold coffee and waiting for a Node.js app to \u201cjust restart\u201d after a deploy. You know that hold-your-breath moment when you hit restart, stare at the terminal, and silently promise the internet you\u2019ll never cowboy deploy again? Yeah, that one. 
The site blinked out for a few [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1520,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1519","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1519","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1519"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1519\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1520"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1519"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1519"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1519"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}