{"id":1453,"date":"2025-11-06T22:48:09","date_gmt":"2025-11-06T19:48:09","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/tls-1-3-ocsp-stapling-and-brotli-on-nginx-the-practical-speed-and-security-tune%e2%80%91up-i-keep-reusing\/"},"modified":"2025-11-06T22:48:09","modified_gmt":"2025-11-06T19:48:09","slug":"tls-1-3-ocsp-stapling-and-brotli-on-nginx-the-practical-speed-and-security-tune%e2%80%91up-i-keep-reusing","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/tls-1-3-ocsp-stapling-and-brotli-on-nginx-the-practical-speed-and-security-tune%e2%80%91up-i-keep-reusing\/","title":{"rendered":"TLS 1.3, OCSP Stapling and Brotli on Nginx: The Practical Speed-and-Security Tune\u2011Up I Keep Reusing"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>I still remember a Tuesday morning when a client pinged me with that classic line: \u201cThe site feels slow, but the server\u2019s barely breaking a sweat.\u201d We\u2019d already optimized PHP, tuned the database, and put a sensible cache in place. Everything looked clean on paper. Yet the first page view always felt sticky. That was the day it clicked for me\u2014so much of perceived speed is front\u2011loaded in the first few milliseconds, and you either win or lose trust right there.<\/p>\n<p>If you\u2019ve ever watched a spinner dance while your browser negotiates a secure connection, you know the feeling. Here\u2019s the thing: HTTPS isn\u2019t just a lock icon anymore. With TLS 1.3, OCSP stapling, and Brotli compression, Nginx can be both fast and reassuringly secure. In this guide, I\u2019ll walk you through how I set these up in the wild\u2014no fluff. We\u2019ll keep it conversational, add a little story, and focus on practical wins. 
By the end, you\u2019ll know how to enable TLS 1.3 the right way, staple OCSP so browsers stop waiting on CA servers, and ship slimmer responses with Brotli without breaking your logs or your sanity.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#What_fast_and_secure_HTTPS_really_means_and_why_your_first_byte_matters\"><span class=\"toc_number toc_depth_1\">1<\/span> What \u201cfast and secure HTTPS\u201d really means (and why your first byte matters)<\/a><\/li><li><a href=\"#TLS_13_on_Nginx_the_clean_handshake_your_users_dont_see\"><span class=\"toc_number toc_depth_1\">2<\/span> TLS 1.3 on Nginx: the clean handshake your users don\u2019t see<\/a><ul><li><a href=\"#The_minimal_sane_TLS_config_I_keep_reusing\"><span class=\"toc_number toc_depth_2\">2.1<\/span> The minimal, sane TLS config I keep reusing<\/a><\/li><li><a href=\"#Why_I_disable_session_tickets_and_when_I_dont\"><span class=\"toc_number toc_depth_2\">2.2<\/span> Why I disable session tickets (and when I don\u2019t)<\/a><\/li><li><a href=\"#About_0RTT_early_data\"><span class=\"toc_number toc_depth_2\">2.3<\/span> About 0\u2011RTT (early data)<\/a><\/li><li><a href=\"#When_browsers_still_negotiate_TLS_12\"><span class=\"toc_number toc_depth_2\">2.4<\/span> When browsers still negotiate TLS 1.2<\/a><\/li><\/ul><\/li><li><a href=\"#OCSP_Stapling_remove_the_is_your_cert_valid_detour\"><span class=\"toc_number toc_depth_1\">3<\/span> OCSP Stapling: remove the \u201cis your cert valid?\u201d detour<\/a><ul><li><a href=\"#How_I_enable_OCSP_stapling_in_Nginx\"><span class=\"toc_number toc_depth_2\">3.1<\/span> How I enable OCSP stapling in Nginx<\/a><\/li><li><a href=\"#Verifying_stapling_with_OpenSSL\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Verifying stapling with OpenSSL<\/a><\/li><\/ul><\/li><li><a href=\"#Brotli_smaller_responses_without_weird_tradeoffs\"><span class=\"toc_number 
toc_depth_1\">4<\/span> Brotli: smaller responses without weird trade\u2011offs<\/a><ul><li><a href=\"#Installing_the_Brotli_module\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Installing the Brotli module<\/a><\/li><li><a href=\"#Brotli_config_I_use_in_production\"><span class=\"toc_number toc_depth_2\">4.2<\/span> Brotli config I use in production<\/a><\/li><li><a href=\"#Testing_Brotli_in_the_wild\"><span class=\"toc_number toc_depth_2\">4.3<\/span> Testing Brotli in the wild<\/a><\/li><\/ul><\/li><li><a href=\"#Putting_it_together_a_tidy_Nginx_server_block_you_can_copy\"><span class=\"toc_number toc_depth_1\">5<\/span> Putting it together: a tidy Nginx server block you can copy<\/a><\/li><li><a href=\"#Testing_troubleshooting_and_small_realworld_lessons\"><span class=\"toc_number toc_depth_1\">6<\/span> Testing, troubleshooting, and small real\u2011world lessons<\/a><ul><li><a href=\"#Local_quick_wins\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Local quick wins<\/a><\/li><li><a href=\"#External_validation\"><span class=\"toc_number toc_depth_2\">6.2<\/span> External validation<\/a><\/li><li><a href=\"#When_logs_tell_a_different_story\"><span class=\"toc_number toc_depth_2\">6.3<\/span> When logs tell a different story<\/a><\/li><li><a href=\"#A_few_scars_Ive_collected\"><span class=\"toc_number toc_depth_2\">6.4<\/span> A few scars I\u2019ve collected<\/a><\/li><li><a href=\"#Do_TLS_13_OCSP_and_Brotli_play_nicely_with_CDNs_and_WAFs\"><span class=\"toc_number toc_depth_2\">6.5<\/span> Do TLS 1.3, OCSP, and Brotli play nicely with CDNs and WAFs?<\/a><\/li><\/ul><\/li><li><a href=\"#Practical_guardrails_before_you_ship\"><span class=\"toc_number toc_depth_1\">7<\/span> Practical guardrails before you ship<\/a><\/li><li><a href=\"#A_tidy_checklist_you_can_scan_before_bedtime\"><span class=\"toc_number toc_depth_1\">8<\/span> A tidy checklist you can scan before bedtime<\/a><\/li><li><a href=\"#Wrapup_small_changes_big_perceived_wins\"><span 
class=\"toc_number toc_depth_1\">9<\/span> Wrap\u2011up: small changes, big perceived wins<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-1\"><span id=\"What_fast_and_secure_HTTPS_really_means_and_why_your_first_byte_matters\">What \u201cfast and secure HTTPS\u201d really means (and why your first byte matters)<\/span><\/h2>\n<p>When people complain about speed, they rarely mean \u201cthe server is slow.\u201d They mean the first meaningful paint takes too long, the page feels tardy, and the initial handshake drags. So we look at the path the first request travels: DNS, TCP, TLS, then the request hits Nginx, which decides what to do next (PHP? static? cache?). Each hop is tiny, but they stack. Nail the handshake and you\u2019ve already made the site feel faster\u2014long before you render a single pixel.<\/p>\n<p>That\u2019s where TLS 1.3 comes in. Think of it as the \u201cshort version\u201d of the handshake. Fewer back\u2011and\u2011forth messages, modern ciphers that are fast and safe, and support for speedy resumption. Pair it with OCSP stapling so your server hands the browser a fresh proof that your certificate is valid (instead of sending the user\u2019s browser off to ask the certificate authority). Then finish with Brotli, which squeezes responses down more efficiently than its older cousin, gzip. It\u2019s not magic\u2014it\u2019s just removing the unnecessary waiting and waste.<\/p>\n<p>In my experience, the big wins are threefold. First, consistency: the initial connection behaves predictably under load. Second, cleanliness: configs are simpler with TLS 1.3 and fewer legacy ciphers. Third, perception: users feel the site is \u201csnappy,\u201d which is half the battle. 
Let\u2019s set that up on Nginx step by step.<\/p>\n<h2 id=\"section-2\"><span id=\"TLS_13_on_Nginx_the_clean_handshake_your_users_dont_see\">TLS 1.3 on Nginx: the clean handshake your users don\u2019t see<\/span><\/h2>\n<p>The fun part about TLS 1.3 is how un\u2011dramatic it is once you turn it on. You get faster handshakes, modern ciphers, and fewer footguns. I\u2019ve come to appreciate how it declutters a config file. You no longer need a long list of cipher suites or endless compatibility notes. Just one line to allow TLS 1.3 and a short list for TLS 1.2 (kept for compatibility), and you\u2019re off to the races.<\/p>\n<h3><span id=\"The_minimal_sane_TLS_config_I_keep_reusing\">The minimal, sane TLS config I keep reusing<\/span><\/h3>\n<p>Here\u2019s a trimmed example I often start with. It assumes you\u2019ve got a valid certificate and chain (fullchain) and the private key. Adjust paths for your environment, and, obviously, your server_name.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">server {\n    listen 443 ssl http2;\n    server_name example.com www.example.com;\n\n    ssl_certificate     \/etc\/letsencrypt\/live\/example.com\/fullchain.pem;\n    ssl_certificate_key \/etc\/letsencrypt\/live\/example.com\/privkey.pem;\n\n    # TLS versions: keep TLSv1.2 for compatibility, use TLSv1.3 for speed\n    ssl_protocols TLSv1.2 TLSv1.3;\n\n    # Reasonable TLSv1.2 ciphers; TLS 1.3 ciphers are not configured here (they're implicit)\n    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:\n                 ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:\n                 ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';\n\n    # Curve preference for ECDHE\n    ssl_ecdh_curve X25519:secp384r1;\n\n    # Session settings (resumption helps)\n    ssl_session_timeout 1d;\n    ssl_session_cache shared:SSL:50m;\n    ssl_session_tickets off;\n\n    # Security headers (adapt HSTS for your policy)\n  
  add_header Strict-Transport-Security &quot;max-age=31536000; includeSubDomains; preload&quot; always;\n    add_header X-Content-Type-Options nosniff;\n    add_header X-Frame-Options DENY;\n    add_header Referrer-Policy no-referrer-when-downgrade;\n\n    # ...your location blocks...\n}\n<\/code><\/pre>\n<p>That\u2019s the starter. If you\u2019re wondering about HTTP\/3 and QUIC, that\u2019s a separate Nginx build or a newer package with the quic module. It\u2019s great when you\u2019re ready for it, but you don\u2019t need it to benefit from TLS 1.3. Start with HTTP\/2, ensure stability, then move up when your stack is ready.<\/p>\n<h3><span id=\"Why_I_disable_session_tickets_and_when_I_dont\">Why I disable session tickets (and when I don\u2019t)<\/span><\/h3>\n<p>Session resumption is a big reason first visits feel faster on repeat. But there are two ways to do it: tickets and caches. I\u2019ve gotten into the habit of turning tickets off unless I\u2019m actively managing ticket keys. If you don\u2019t rotate those keys, you\u2019re not getting the security properties you think you are. The shared cache is often enough, and it\u2019s easy to reason about. If you want tickets, make sure you\u2019re rotating the keys and treating them like secrets\u2014not just another line in the config.<\/p>\n<h3><span id=\"About_0RTT_early_data\">About 0\u2011RTT (early data)<\/span><\/h3>\n<p>0\u2011RTT can make repeat connections feel instant, but it comes with replay caveats. For idempotent GETs, it\u2019s usually fine. For POSTs that write to your app, be cautious. I weigh it by app behavior. If your site is mostly static or read\u2011heavy, enabling 0\u2011RTT can be a nice bump. If you\u2019re running a checkout flow or accept sensitive writes, I skip it. You\u2019re not missing out if you stick to the basics first.<\/p>\n<h3><span id=\"When_browsers_still_negotiate_TLS_12\">When browsers still negotiate TLS 1.2<\/span><\/h3>\n<p>You\u2019ll still see TLS 1.2 in the logs. 
That\u2019s normal. Older clients and certain corporate environments can lag a bit. Keep TLS 1.2 around for now with a clean cipher list. As usage shifts, you can revisit and simplify further. I try not to force the upgrade unless there\u2019s a policy requirement\u2014it\u2019s better to avoid breaking someone\u2019s old but critical client at 2 AM.<\/p>\n<h2 id=\"section-3\"><span id=\"OCSP_Stapling_remove_the_is_your_cert_valid_detour\">OCSP Stapling: remove the \u201cis your cert valid?\u201d detour<\/span><\/h2>\n<p>I once tracked a weird random delay to clients that were waiting on a certificate status check. Not every user hit it, but when they did, it was a head\u2011scratch. That\u2019s OCSP in the background: the browser is checking with the certificate authority to confirm your certificate\u2019s still valid. Helpful, sure\u2014but you don\u2019t want users waiting for a CA server if you can help it.<\/p>\n<p>OCSP stapling flips that around. Your server fetches the fresh OCSP response from the CA and \u201cstaples\u201d it to the TLS handshake. The browser gets the proof immediately and moves on. It\u2019s like having your boarding pass in hand instead of lining up at the desk every time.<\/p>\n<h3><span id=\"How_I_enable_OCSP_stapling_in_Nginx\">How I enable OCSP stapling in Nginx<\/span><\/h3>\n<p>Two important notes before the config: First, make sure your certificate chain is correct\u2014use the full chain file from your CA or Let\u2019s Encrypt. 
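If I\u2019m not sure what a bundle actually contains, I print every certificate inside it. This is just a sanity sketch\u2014adjust the path to your environment:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># List every certificate inside the bundle (leaf first, then intermediates)\nopenssl crl2pkcs7 -nocrl -certfile \/etc\/letsencrypt\/live\/example.com\/fullchain.pem | openssl pkcs7 -print_certs -noout\n<\/code><\/pre>\n<p>If only one subject prints, the intermediates are missing and stapling verification will likely complain. 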
Second, Nginx needs to resolve the OCSP responder\u2019s hostname, so have a working resolver set.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">server {\n    listen 443 ssl http2;\n    server_name example.com;\n\n    ssl_certificate     \/etc\/letsencrypt\/live\/example.com\/fullchain.pem;\n    ssl_certificate_key \/etc\/letsencrypt\/live\/example.com\/privkey.pem;\n\n    # OCSP stapling\n    ssl_stapling on;\n    ssl_stapling_verify on;\n    # Chain used to verify OCSP responses (for Let's Encrypt, chain.pem sits next to fullchain.pem)\n    ssl_trusted_certificate \/etc\/letsencrypt\/live\/example.com\/chain.pem;\n\n    # Resolver for OCSP lookups\n    resolver 1.1.1.1 8.8.8.8 valid=300s;\n    resolver_timeout 5s;\n\n    # TLS versions\/ciphers as above...\n}\n<\/code><\/pre>\n<p>If stapling doesn\u2019t seem to work, it\u2019s almost always a chain or resolver issue. I\u2019ve also seen misconfigured permissions on the certificate files block Nginx from fetching OCSP responses. Check error logs; they\u2019ll usually tell you if verification failed or the responder couldn\u2019t be reached. Once it\u2019s working, you get a neat side effect: browsers stop wandering off to double\u2011check your cert mid\u2011connection.<\/p>\n<h3><span id=\"Verifying_stapling_with_OpenSSL\">Verifying stapling with OpenSSL<\/span><\/h3>\n<p>I lean on OpenSSL for quick checks. This command will show you if the server stapled an OCSP response:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">openssl s_client -connect example.com:443 -servername example.com -status &lt; \/dev\/null | sed -n &quot;\/OCSP response:\/,\/^$\/p&quot;\n<\/code><\/pre>\n<p>Look for a valid response and \u201cOK\u201d status. If it\u2019s missing, check your chain and Nginx error logs. 
With Let\u2019s Encrypt, renewing certs usually keeps stapling healthy, but if you change CAs or intermediate chains, re\u2011verify after deployment.<\/p>\n<h2 id=\"section-4\"><span id=\"Brotli_smaller_responses_without_weird_tradeoffs\">Brotli: smaller responses without weird trade\u2011offs<\/span><\/h2>\n<p>Back when Brotli started making noise, I tried it on a busy content site and then watched bandwidth graphs slide down like a happy ski slope. It wasn\u2019t just numbers; pages felt tighter. Images weren\u2019t touched (that\u2019s not the point), but HTML, CSS, and JS shaved off noticeable weight. It\u2019s a quiet win.<\/p>\n<p>Nginx doesn\u2019t ship Brotli in the mainline build by default in many distros. You either install a package that contains the module or compile the module and load it dynamically. The outcome is the same: enable the module, set sensible defaults, and let Brotli take the wheel on text\u2011based responses.<\/p>\n<h3><span id=\"Installing_the_Brotli_module\">Installing the Brotli module<\/span><\/h3>\n<p>On some systems, you\u2019ll find a package like nginx-module-brotli or similar. If not, you can build the module from source. The upstream module lives here: <a href=\"https:\/\/github.com\/google\/ngx_brotli\" rel=\"nofollow noopener\" target=\"_blank\">Google\u2019s ngx_brotli repository<\/a>. If compiling sounds scary, I get it\u2014go with your distro\u2019s package if it exists. Otherwise, building it once and keeping it in your config management is a manageable path. Either way, the config you use is similar.<\/p>\n<h3><span id=\"Brotli_config_I_use_in_production\">Brotli config I use in production<\/span><\/h3>\n<p>You can set Brotli in the http block and let it cascade to servers unless you need overrides. 
I usually keep gzip enabled as a fallback for older clients.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">http {\n    # Brotli on, gzip as fallback\n    brotli on;\n    brotli_comp_level 5;           # start at 4-6; higher is slower but smaller\n    brotli_static on;              # serve pre-compressed .br files if present\n    brotli_min_length 1024;        # skip tiny responses\n    brotli_types text\/plain text\/css text\/xml application\/javascript \n                 application\/json application\/xml application\/rss+xml \n                 image\/svg+xml;             # woff2 omitted: the format is already Brotli-compressed\n\n    gzip on;\n    gzip_comp_level 5;\n    gzip_min_length 1024;\n    gzip_types text\/plain text\/css text\/xml application\/javascript \n               application\/json application\/xml application\/rss+xml \n               image\/svg+xml;\n\n    # ... rest of your http config and server blocks ...\n}\n<\/code><\/pre>\n<p>If you build assets during deployment, consider pre\u2011compressing them to .br and .gz and letting Nginx serve those directly via brotli_static and gzip_static. That way, you don\u2019t pay the CPU cost per request. On dynamic pages, runtime compression still pays off nicely if you keep the compression level reasonable.<\/p>\n<h3><span id=\"Testing_Brotli_in_the_wild\">Testing Brotli in the wild<\/span><\/h3>\n<p>My go\u2011to quick check is curl with an Accept\u2011Encoding header:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">curl -I -H 'Accept-Encoding: br' https:\/\/example.com\/\n<\/code><\/pre>\n<p>Look for a Content-Encoding: br response. If you only see gzip, your module might not be loaded or your mime type isn\u2019t included. 
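To see the win in bytes rather than headers, I sometimes compare transfer sizes with and without compression using curl\u2019s write-out variables\u2014a quick sketch, and note that some servers ignore the identity request:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Download size with Brotli vs. no compression for the same page\ncurl -so \/dev\/null -H 'Accept-Encoding: br' -w '%{size_download} bytes (brotli)\\n' https:\/\/example.com\/\ncurl -so \/dev\/null -H 'Accept-Encoding: identity' -w '%{size_download} bytes (uncompressed)\\n' https:\/\/example.com\/\n<\/code><\/pre>\n<p>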
Also make sure you aren\u2019t double\u2011compressing via an upstream proxy or CDN; one layer of compression is plenty.<\/p>\n<h2 id=\"section-5\"><span id=\"Putting_it_together_a_tidy_Nginx_server_block_you_can_copy\">Putting it together: a tidy Nginx server block you can copy<\/span><\/h2>\n<p>Let\u2019s combine the pieces into a clean, practical example. This is the sort of block I keep in a repo template. Tweak domains, paths, and policy headers to your needs.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># In \/etc\/nginx\/nginx.conf (http block)\nhttp {\n    # Logging, timeouts, etc.\n    sendfile on;\n    keepalive_timeout 65;\n\n    # Brotli + gzip\n    brotli on;\n    brotli_comp_level 5;\n    brotli_static on;\n    brotli_min_length 1024;\n    brotli_types text\/plain text\/css text\/xml application\/javascript \n                 application\/json application\/xml application\/rss+xml \n                 image\/svg+xml;             # woff2 omitted: already Brotli-compressed\n\n    gzip on;\n    gzip_comp_level 5;\n    gzip_min_length 1024;\n    gzip_types text\/plain text\/css text\/xml application\/javascript \n               application\/json application\/xml application\/rss+xml \n               image\/svg+xml;\n\n    # Server block(s)\n    server {\n        listen 443 ssl http2;\n        server_name example.com www.example.com;\n\n        ssl_certificate     \/etc\/letsencrypt\/live\/example.com\/fullchain.pem;\n        ssl_certificate_key \/etc\/letsencrypt\/live\/example.com\/privkey.pem;\n\n        # TLS versions\n        ssl_protocols TLSv1.2 TLSv1.3;\n\n        # TLSv1.2 ciphers (TLS 1.3 ciphers are implicit)\n        ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:\n                     ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:\n                     ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';\n        ssl_ecdh_curve X25519:secp384r1;\n\n        ssl_session_timeout 1d;\n        
ssl_session_cache shared:SSL:50m;\n        ssl_session_tickets off;\n\n        # OCSP stapling\n        ssl_stapling on;\n        ssl_stapling_verify on;\n        ssl_trusted_certificate \/etc\/letsencrypt\/live\/example.com\/chain.pem;\n        resolver 1.1.1.1 8.8.8.8 valid=300s;\n        resolver_timeout 5s;\n\n        # Security headers\n        add_header Strict-Transport-Security &quot;max-age=31536000; includeSubDomains; preload&quot; always;\n        add_header X-Content-Type-Options nosniff;\n        add_header X-Frame-Options DENY;\n        add_header Referrer-Policy no-referrer-when-downgrade;\n\n        root \/var\/www\/example.com\/public;\n        index index.html index.htm;\n\n        location \/ {\n            try_files $uri $uri\/ =404;\n        }\n\n        # Health and ACME challenges\n        location ^~ \/.well-known\/acme-challenge\/ {\n            root \/var\/www\/letsencrypt;\n        }\n    }\n\n    # HTTP to HTTPS redirect\n    server {\n        listen 80;\n        server_name example.com www.example.com;\n        return 301 https:\/\/$host$request_uri;\n    }\n}\n<\/code><\/pre>\n<p>If you prefer to keep things even more organized, split TLS and compression snippets into separate include files and pull them into each server block. It keeps your main config readable and makes audits a breeze.<\/p>\n<h2 id=\"section-6\"><span id=\"Testing_troubleshooting_and_small_realworld_lessons\">Testing, troubleshooting, and small real\u2011world lessons<\/span><\/h2>\n<p>Every neat config deserves a test run. I typically do three passes: local command\u2011line checks, a second opinion from an external scanner, and then a look at live user behavior once traffic flows. You don\u2019t need anything fancy\u2014just a few good habits.<\/p>\n<h3><span id=\"Local_quick_wins\">Local quick wins<\/span><\/h3>\n<p>First, verify protocol coverage. 
Test TLS 1.3 explicitly:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">openssl s_client -connect example.com:443 -servername example.com -tls1_3 &lt; \/dev\/null | sed -n '\/Protocol\/ p; \/Cipher\/ p'\n<\/code><\/pre>\n<p>You should see TLSv1.3 and a modern cipher like TLS_AES_128_GCM_SHA256 or TLS_CHACHA20_POLY1305_SHA256. Then confirm stapling, as shown earlier, and check Brotli:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">curl -I -H 'Accept-Encoding: br' https:\/\/example.com\/\n<\/code><\/pre>\n<p>If you get gzip instead, revisit your brotli module and types. If you get neither, make sure you aren\u2019t stripping Accept\u2011Encoding with a proxy in front.<\/p>\n<h3><span id=\"External_validation\">External validation<\/span><\/h3>\n<p>When I\u2019m happy locally, I grab an opinionated audit from two sources. For a clean recommended baseline, I like the <a href=\"https:\/\/ssl-config.mozilla.org\/\" rel=\"nofollow noopener\" target=\"_blank\">Mozilla SSL Configuration Generator<\/a> as a quick cross\u2011check of what I\u2019ve set. For a live public scan of your domain\u2019s setup, <a href=\"https:\/\/www.ssllabs.com\/ssltest\/\" rel=\"nofollow noopener\" target=\"_blank\">SSL Labs\u2019 Server Test<\/a> is the classic. They\u2019ll flag certificate chain issues, old protocols you forgot to turn off, and other gotchas that only seem to show up at 11 PM Friday night.<\/p>\n<h3><span id=\"When_logs_tell_a_different_story\">When logs tell a different story<\/span><\/h3>\n<p>Once traffic hits, glance at Nginx error logs and your access logs with TLS variables enabled (if you\u2019ve configured them) to see what percentage of clients lands on TLS 1.3 versus TLS 1.2. If a surprising chunk is stuck on 1.2, that might be your audience\u2014corporate devices, embedded browsers, or old Androids. That\u2019s fine. Your job is to be fast for everyone without breaking anyone. 
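If you haven\u2019t wired those TLS variables into your logs yet, a small dedicated log format does the trick\u2014the format name and log path here are illustrative:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># In the http block: record the negotiated protocol and cipher per request\nlog_format tls_stats '$remote_addr [$time_local] &quot;$request&quot; '\n                     '$status $ssl_protocol $ssl_cipher';\naccess_log \/var\/log\/nginx\/tls_stats.log tls_stats;\n<\/code><\/pre>\n<p>A quick sort and uniq -c over the protocol column then shows exactly how your audience splits. 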
Brotli will still help those users because many modern browsers pick it up even if they negotiate TLS 1.2.<\/p>\n<h3><span id=\"A_few_scars_Ive_collected\">A few scars I\u2019ve collected<\/span><\/h3>\n<p>One client\u2019s staging site kept failing OCSP stapling because the staging domain wasn\u2019t publicly resolvable from the Nginx server. The resolver line looked fine, but without public DNS, OCSP fetches couldn\u2019t happen. The fix was straightforward: let the server resolve outbound over the right network and verify the chain. Another time, Brotli mysteriously didn\u2019t compress SVG files; I\u2019d forgotten to add image\/svg+xml to brotli_types. These little misses are normal\u2014keep a short checklist and you\u2019ll resolve them fast.<\/p>\n<h3><span id=\"Do_TLS_13_OCSP_and_Brotli_play_nicely_with_CDNs_and_WAFs\">Do TLS 1.3, OCSP, and Brotli play nicely with CDNs and WAFs?<\/span><\/h3>\n<p>Mostly, yes. If you terminate TLS at a CDN, your origin\u2019s TLS settings matter for the CDN\u2011to\u2011origin hop, while the CDN\u2019s edge settings control what users see. Stapling often happens at the edge, too. For security layers, your TLS config is one part of the stack; you may also want smart rules against abuse. If you\u2019ve ever wondered how I layer that without slowing things down, here\u2019s my story on <a href=\"https:\/\/www.dchost.com\/blog\/en\/waf-ve-bot-korumasi-cloudflare-modsecurity-ve-fail2bani-ayni-masada-baristirmanin-sicacik-hikayesi\/\">WAF and bot protection with Cloudflare, ModSecurity, and Fail2ban<\/a>. It pairs nicely with a tight TLS setup.<\/p>\n<h2 id=\"section-7\"><span id=\"Practical_guardrails_before_you_ship\">Practical guardrails before you ship<\/span><\/h2>\n<p>There are a few knobs people love to crank to 11 right away. Resist the urge. Compression levels above 6 sound tempting but can cost CPU on busy nodes. Start at 4 or 5. 
Leave 0\u2011RTT off if your app mixes writes with GET traffic, or add guard logic to detect replay. Keep TLS 1.2 on for compatibility unless you\u2019re certain your audience doesn\u2019t need it. And when you add HSTS with preload, be sure you want every subdomain locked to HTTPS for a long time\u2014that header sticks around in browsers.<\/p>\n<p>Finally, commit your TLS and compression settings to version control. When something odd happens later, having history gives you context. I like dropping a short comment above each block\u2014just enough to remind future\u2011you why a line is there.<\/p>\n<h2 id=\"section-8\"><span id=\"A_tidy_checklist_you_can_scan_before_bedtime\">A tidy checklist you can scan before bedtime<\/span><\/h2>\n<p>Let me wrap this up with a mental checklist I run each time:<\/p>\n<p>First, does TLS 1.3 negotiate for modern clients, and do older ones land safely on TLS 1.2 without fuss? Second, is OCSP stapling active with verification on, and is my resolver healthy? Third, is Brotli actually compressing the types I care about, with gzip as a fallback? Fourth, do my headers match policy\u2014HSTS, nosniff, frame options, referrer policy\u2014without breaking embedded content I rely on? And finally, are my logs clean, my external scans happy, and my CPU usage steady under load?<\/p>\n<p>When those boxes are ticked, the site just feels right. Pages pop faster, and that \u201cis this connection safe?\u201d pause disappears. You probably won\u2019t hear a thank\u2011you for the milliseconds you saved\u2014but you\u2019ll notice the support inbox staying quiet, and that\u2019s the best compliment.<\/p>\n<h2 id=\"section-9\"><span id=\"Wrapup_small_changes_big_perceived_wins\">Wrap\u2011up: small changes, big perceived wins<\/span><\/h2>\n<p>If there\u2019s a theme to this whole journey, it\u2019s that first impressions matter online as much as in person. TLS 1.3 shortens the hello. 
OCSP stapling keeps your guests from wandering off to check paperwork. Brotli lightens the bags so the trip feels easier. None of these is a silver bullet on its own, but together they remove the invisible friction that users sense even if they can\u2019t name it.<\/p>\n<p>My advice is simple: implement the basics cleanly, verify with a couple of good tools, and keep an eye on behavior rather than just scores. Use the <a href=\"https:\/\/ssl-config.mozilla.org\/\" rel=\"nofollow noopener\" target=\"_blank\">Mozilla SSL Configuration Generator<\/a> as a sanity check, confirm with <a href=\"https:\/\/www.ssllabs.com\/ssltest\/\" rel=\"nofollow noopener\" target=\"_blank\">SSL Labs\u2019 Server Test<\/a>, and then watch your logs for real\u2011world signals. Don\u2019t over\u2011tune on day one; settle into settings that are easy to maintain. And if you\u2019re building out your broader security posture, pair this with thoughtful edge rules and WAF policies so your speed gains don\u2019t invite chaos.<\/p>\n<p>Hope this was helpful! If you try this setup and run into a head\u2011scratcher, drop me a note\u2014there\u2019s always a small detail we can untangle together. See you in the next post.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>I still remember a Tuesday morning when a client pinged me with that classic line: \u201cThe site feels slow, but the server\u2019s barely breaking a sweat.\u201d We\u2019d already optimized PHP, tuned the database, and put a sensible cache in place. Everything looked clean on paper. Yet the first page view always felt sticky. 
That was [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1454,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1453","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1453","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1453"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1453\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1454"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1453"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1453"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1453"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}