{"id":1659,"date":"2025-11-10T22:23:43","date_gmt":"2025-11-10T19:23:43","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/so-why-mtls-a-story-about-trust-between-machines\/"},"modified":"2025-11-10T22:23:43","modified_gmt":"2025-11-10T19:23:43","slug":"so-why-mtls-a-story-about-trust-between-machines","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/so-why-mtls-a-story-about-trust-between-machines\/","title":{"rendered":"So, Why mTLS? A Story About Trust Between Machines"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>Ever had that moment when your logs show a mysterious spike in API traffic at 3 a.m., but nothing seems strange on the app side? I remember a night like that with a client\u2019s microservice stack. Everything looked \u201cokay,\u201d yet requests were slipping through using a token that <strong>shouldn\u2019t have worked<\/strong> from where they came from. We had rate limits, we had API keys, we even had IP allowlists. Still, it didn\u2019t feel like the API knew who was talking to it. It felt like a club checking tickets but not IDs.<\/p>\n<p>That\u2019s when mTLS clicked for me\u2014mutual TLS, where both sides show their credentials. It\u2019s a two-way handshake: the server proves who it is, and the client proves who it is, too. Suddenly, your API isn\u2019t just \u201copen with a password.\u201d It\u2019s a conversation between verified identities. And if you design it right, it\u2019s surprisingly calm to manage.<\/p>\n<p>In this guide, I\u2019ll walk you through setting up mTLS on both Nginx and Caddy\u2014two web servers I use a lot for APIs and microservices. 
We\u2019ll talk about issuing client certificates, verifying them at the edge, passing identity through to backends, doing mTLS on upstream connections, rotating certs without drama, and testing your setup without losing your weekend. I\u2019ll share real-world tips that saved me headaches, and yes, some configs you can copy and tweak right away.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Table of Contents<\/p><ul class=\"toc_list\"><li><a href=\"#How_mTLS_Fits_in_the_Real_World_And_Why_Its_Not_as_Scary_as_It_Sounds\"><span class=\"toc_number toc_depth_1\">1<\/span> How mTLS Fits in the Real World (And Why It\u2019s Not as Scary as It Sounds)<\/a><\/li><li><a href=\"#Before_You_Start_Issuing_Client_Certificates_the_Friendly_Way\"><span class=\"toc_number toc_depth_1\">2<\/span> Before You Start: Issuing Client Certificates the Friendly Way<\/a><ul><li><a href=\"#Quick_demo_CA_and_client_certs_with_OpenSSL\"><span class=\"toc_number toc_depth_2\">2.1<\/span> Quick demo CA and client certs with OpenSSL<\/a><\/li><li><a href=\"#How_clients_call_your_mTLS_API\"><span class=\"toc_number toc_depth_2\">2.2<\/span> How clients call your mTLS API<\/a><\/li><\/ul><\/li><li><a href=\"#Nginx_Verifying_Client_Certificates_at_the_Edge\"><span class=\"toc_number toc_depth_1\">3<\/span> Nginx: Verifying Client Certificates at the Edge<\/a><ul><li><a href=\"#Basic_Nginx_server_block_for_mTLS\"><span class=\"toc_number toc_depth_2\">3.1<\/span> Basic Nginx server block for mTLS<\/a><\/li><li><a href=\"#Allowlisting_particular_clients_or_teams\"><span class=\"toc_number toc_depth_2\">3.2<\/span> Allowlisting particular clients or teams<\/a><\/li><li><a href=\"#Passing_identity_downstream_safely\"><span class=\"toc_number toc_depth_2\">3.3<\/span> Passing identity downstream, safely<\/a><\/li><li><a href=\"#mTLS_from_Nginx_to_the_upstream_service\"><span class=\"toc_number toc_depth_2\">3.4<\/span> mTLS from Nginx to the upstream 
service<\/a><\/li><li><a href=\"#Logging_and_visibility_your_future_self_will_thank_you\"><span class=\"toc_number toc_depth_2\">3.5<\/span> Logging and visibility: your future self will thank you<\/a><\/li><\/ul><\/li><li><a href=\"#Caddy_Clean_Composable_mTLS_With_Minimal_Fuss\"><span class=\"toc_number toc_depth_1\">4<\/span> Caddy: Clean, Composable mTLS With Minimal Fuss<\/a><ul><li><a href=\"#Basic_Caddyfile_for_incoming_mTLS\"><span class=\"toc_number toc_depth_2\">4.1<\/span> Basic Caddyfile for incoming mTLS<\/a><\/li><li><a href=\"#mTLS_to_your_upstream_from_Caddy\"><span class=\"toc_number toc_depth_2\">4.2<\/span> mTLS to your upstream from Caddy<\/a><\/li><\/ul><\/li><li><a href=\"#Rotating_Certificates_Without_Drama_The_Practical_Playbook\"><span class=\"toc_number toc_depth_1\">5<\/span> Rotating Certificates Without Drama (The Practical Playbook)<\/a><ul><li><a href=\"#1_Use_shortlived_certs_and_automate_issuance\"><span class=\"toc_number toc_depth_2\">5.1<\/span> 1) Use short\u2011lived certs and automate issuance<\/a><\/li><li><a href=\"#2_Store_and_deliver_keys_like_theyre_production_data\"><span class=\"toc_number toc_depth_2\">5.2<\/span> 2) Store and deliver keys like they\u2019re production data<\/a><\/li><li><a href=\"#3_For_CA_rotation_trust_two_CAs_during_the_handover\"><span class=\"toc_number toc_depth_2\">5.3<\/span> 3) For CA rotation, trust two CAs during the handover<\/a><\/li><li><a href=\"#4_Reloads_without_downtime\"><span class=\"toc_number toc_depth_2\">5.4<\/span> 4) Reloads without downtime<\/a><\/li><li><a href=\"#5_Revoke_fast_and_log_it\"><span class=\"toc_number toc_depth_2\">5.5<\/span> 5) Revoke fast, and log it<\/a><\/li><\/ul><\/li><li><a href=\"#How_to_Test_Without_Losing_Your_Weekend\"><span class=\"toc_number toc_depth_1\">6<\/span> How to Test Without Losing Your Weekend<\/a><ul><li><a href=\"#Sanity_checks_with_curl\"><span class=\"toc_number toc_depth_2\">6.1<\/span> Sanity checks with curl<\/a><\/li><li><a 
href=\"#Check_the_whole_chain\"><span class=\"toc_number toc_depth_2\">6.2<\/span> Check the whole chain<\/a><\/li><li><a href=\"#Make_the_app_aware_of_identity\"><span class=\"toc_number toc_depth_2\">6.3<\/span> Make the app aware of identity<\/a><\/li><\/ul><\/li><li><a href=\"#Design_Tips_I_Wish_Someone_Had_Told_Me\"><span class=\"toc_number toc_depth_1\">7<\/span> Design Tips I Wish Someone Had Told Me<\/a><\/li><li><a href=\"#A_Real-World_Pattern_Secure_Webhooks_and_Internal_Calls\"><span class=\"toc_number toc_depth_1\">8<\/span> A Real-World Pattern: Secure Webhooks and Internal Calls<\/a><\/li><li><a href=\"#A_Quick_Nudge_mTLS_for_Admin_Panels\"><span class=\"toc_number toc_depth_1\">9<\/span> A Quick Nudge: mTLS for Admin Panels<\/a><\/li><li><a href=\"#Common_Gotchas_And_the_Tiny_Fixes\"><span class=\"toc_number toc_depth_1\">10<\/span> Common Gotchas (And the Tiny Fixes)<\/a><\/li><li><a href=\"#Wrapup_Make_Your_Services_Shake_Hands_Not_Just_Wave\"><span class=\"toc_number toc_depth_1\">11<\/span> Wrap\u2011up: Make Your Services Shake Hands, Not Just Wave<\/a><\/li><\/ul><\/div>\n<h2 id=\"section-2\"><span id=\"How_mTLS_Fits_in_the_Real_World_And_Why_Its_Not_as_Scary_as_It_Sounds\">How mTLS Fits in the Real World (And Why It\u2019s Not as Scary as It Sounds)<\/span><\/h2>\n<p>Here\u2019s the thing about mTLS: it\u2019s not a silver bullet for everything. But for service-to-service trust or critical endpoints\u2014think payment webhooks, internal APIs, admin interfaces\u2014it\u2019s rock solid. Instead of trusting a shared secret floating around, you\u2019re leaning on certificates signed by a CA you control. It\u2019s identity with receipts.<\/p>\n<p>Think of it like a VIP section at a venue. Your server SSL cert is the venue\u2019s neon sign saying \u201cThis is the real place.\u201d The client cert is the wristband, signed by the same organizer. 
The bouncer (your proxy) checks both, and if the wristband\u2019s fake or from another event, it\u2019s a polite but firm \u201cnot tonight.\u201d<\/p>\n<p>In my experience, mTLS shines when you want three things at once: you want to <strong>know exactly who is calling your API<\/strong>, you want to be able to <strong>revoke access fast<\/strong> (by disabling a cert), and you want to <strong>avoid secret sprawl<\/strong> in environment variables and CI systems. It\u2019s especially nice in microservice architectures where services talk a lot, because you can hand out short-lived certs and automate renewals.<\/p>\n<h2 id=\"section-3\"><span id=\"Before_You_Start_Issuing_Client_Certificates_the_Friendly_Way\">Before You Start: Issuing Client Certificates the Friendly Way<\/span><\/h2>\n<p>You need a CA (Certificate Authority) to issue client certificates your server will trust. You can go simple and generate a quick CA with OpenSSL for a proof of concept. For production, I like tooling that makes rotation painless and gives you an audit trail\u2014things like a private CA service or an internal PKI. If you\u2019re starting fresh, <a href=\"https:\/\/smallstep.com\/docs\/step-ca\/\" rel=\"nofollow noopener\" target=\"_blank\">Smallstep\u2019s step-ca<\/a> is a really friendly way to get short\u2011lived client certs into your services without turning your ops into paperwork.<\/p>\n<h3><span id=\"Quick_demo_CA_and_client_certs_with_OpenSSL\">Quick demo CA and client certs with OpenSSL<\/span><\/h3>\n<p>Here\u2019s a fast way to stand up a demo CA and one client certificate. 
It\u2019s not perfect PKI hygiene, but it\u2019s enough to get mTLS working locally or in a lab.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># 1) Create a simple root CA (demo only!)\nopenssl req -x509 -newkey rsa:4096 -sha256 -days 3650 \\\n  -keyout ca.key -out ca.crt -nodes -subj &quot;\/CN=Demo mTLS CA&quot;\n\n# 2) Create a client key + CSR\nopenssl req -newkey rsa:2048 -keyout client.key -out client.csr -nodes \\\n  -subj &quot;\/CN=checkout-service\/O=payments&quot;\n\n# 3) Sign the client cert with your CA\nopenssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \\\n  -out client.crt -days 365 -sha256\n\n# You now have: ca.crt (trusted CA), client.crt (client cert), client.key (client key)\n<\/code><\/pre>\n<p>One tip: put something meaningful in the subject. I like using the CN as the service name and O as a team or domain of responsibility. If you\u2019re deep into zero-trust, consider using a SPIFFE ID in the SAN. Whatever you choose, stay consistent\u2014future you will thank you during incident response.<\/p>\n<h3><span id=\"How_clients_call_your_mTLS_API\">How clients call your mTLS API<\/span><\/h3>\n<p>From a client\u2019s point of view, nothing is wild here. They just present their cert and key over TLS. A quick check with curl tells you if the basics are right:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">curl --cert client.crt --key client.key https:\/\/api.example.com\/health\n<\/code><\/pre>\n<p>If you get a 200, you\u2019re on your way. If you get blocked, we\u2019ll walk through the most common reasons in a bit.<\/p>\n<h2 id=\"section-4\"><span id=\"Nginx_Verifying_Client_Certificates_at_the_Edge\">Nginx: Verifying Client Certificates at the Edge<\/span><\/h2>\n<p>Nginx is a dependable doorman for mTLS. You tell it which CA to trust and whether to demand a client cert, and it handles the verification. 
Then you can pass identity downstream with headers, restrict access, or even enforce per\u2011endpoint policies.<\/p>\n<h3><span id=\"Basic_Nginx_server_block_for_mTLS\">Basic Nginx server block for mTLS<\/span><\/h3>\n<p>This example assumes you already have a normal server certificate for your domain (from Let\u2019s Encrypt or similar). We\u2019re adding client verification on top.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">server {\n    listen 443 ssl http2;\n    server_name api.example.com;\n\n    # Your normal server TLS certs\n    ssl_certificate     \/etc\/ssl\/certs\/api.crt;\n    ssl_certificate_key \/etc\/ssl\/private\/api.key;\n\n    # Trust this CA for client certs\n    ssl_client_certificate \/etc\/ssl\/ca\/clients-ca.crt;\n    ssl_verify_client on;           # require a valid client cert\n    ssl_verify_depth 2;\n\n    # Optional: friendlier error for client cert issues\n    error_page 495 496 = @mtls_failed;\n\n    # Pass client identity to upstreams\n    proxy_set_header X-Client-Verify $ssl_client_verify;\n    proxy_set_header X-Client-DN     $ssl_client_s_dn;\n    proxy_set_header X-Client-Cert   $ssl_client_cert;\n\n    location \/ {\n        # If you like explicit checks...\n        if ($ssl_client_verify != SUCCESS) { return 403; }\n        proxy_pass http:\/\/api_backend;\n    }\n\n    location @mtls_failed {\n        return 403 &quot;Client certificate required or invalid.&quot;;\n    }\n}\n\nupstream api_backend {\n    server 127.0.0.1:8080;\n}\n<\/code><\/pre>\n<p>At this point, your API requires a client cert signed by <code>clients-ca.crt<\/code>. Nginx verifies the chain and depth, so if you\u2019re using an intermediate CA, make sure the file contains the full chain needed for verification.<\/p>\n<h3><span id=\"Allowlisting_particular_clients_or_teams\">Allowlisting particular clients or teams<\/span><\/h3>\n<p>Let\u2019s say you want only your \u201cpayments\u201d clients to hit a certain endpoint. 
You can extract the subject DN and map it to a simple allow\/deny flag. I often start with something simple like this and move to a registry or database policy later.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># Put this at http{} level or before the server block\n# Note: since Nginx 1.11.6, $ssl_client_s_dn is rendered in RFC 2253 order,\n# so a cert created with -subj &quot;\/CN=checkout-service\/O=payments&quot; shows up\n# here as &quot;O=payments,CN=checkout-service&quot;.\nmap $ssl_client_s_dn $client_allowed {\n    default 0;\n    ~*O=payments.*CN=checkout-service 1;  # allow payments team checkout client\n}\n\nserver {\n    # ... same TLS setup as above ...\n\n    location \/payments\/ {\n        if ($client_allowed = 0) { return 403; }\n        proxy_pass http:\/\/payments_backend;\n    }\n}\n<\/code><\/pre>\n<p>Pro tip: if you embed identifiers in SANs instead of the subject, use the variables Nginx exposes for SANs or pass the raw cert to your app and inspect it there. Just be sure you <strong>never<\/strong> trust headers from the outside world for identity\u2014only the ones you set after TLS verification.<\/p>\n<h3><span id=\"Passing_identity_downstream_safely\">Passing identity downstream, safely<\/span><\/h3>\n<p>When you proxy, you can pass the verified identity to your app as headers. I like setting these:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">proxy_set_header X-Client-Verify $ssl_client_verify;\nproxy_set_header X-Client-DN     $ssl_client_s_dn;\nproxy_set_header X-Client-Cert   $ssl_client_cert;   # PEM, may be large\nproxy_set_header X-Client-FP     $ssl_client_fingerprint;\n<\/code><\/pre>\n<p>On the app side, log these on auth success\/fail. It\u2019s gold during investigations and helps you build strong audit trails.<\/p>\n<h3><span id=\"mTLS_from_Nginx_to_the_upstream_service\">mTLS from Nginx to the upstream service<\/span><\/h3>\n<p>Zero\u2011trust doesn\u2019t stop at the edge. 
If your backend is also behind TLS and requires client certs, Nginx can present a client certificate as well and verify the upstream\u2019s identity.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">upstream secure_backend {\n    server 10.0.0.12:8443;\n    keepalive 32;\n}\n\nserver {\n    # ...incoming mTLS setup...\n\n    location \/internal\/ {\n        proxy_pass https:\/\/secure_backend;\n        proxy_http_version 1.1;\n        proxy_set_header Connection &quot;&quot;;\n        proxy_ssl_server_name on;\n        proxy_ssl_name backend.internal.example;\n\n        # Verify upstream server cert against your internal CA\n        proxy_ssl_trusted_certificate \/etc\/ssl\/ca\/services-ca.crt;\n        proxy_ssl_verify on;\n        proxy_ssl_verify_depth 2;\n\n        # Present our client certificate to the upstream\n        proxy_ssl_certificate     \/etc\/ssl\/certs\/gateway-client.crt;\n        proxy_ssl_certificate_key \/etc\/ssl\/private\/gateway-client.key;\n\n        # Optional: restrict ciphers\/TLS versions\n        proxy_ssl_protocols TLSv1.2 TLSv1.3;\n    }\n}\n<\/code><\/pre>\n<p>Couple of small but important details here. First, set <code>proxy_ssl_server_name on;<\/code> so SNI matches what the upstream expects. Second, use a <code>proxy_ssl_trusted_certificate<\/code> that contains the CA your upstream\u2019s cert chains to. If you get a verification error, nine times out of ten it\u2019s the chain or the SNI.<\/p>\n<h3><span id=\"Logging_and_visibility_your_future_self_will_thank_you\">Logging and visibility: your future self will thank you<\/span><\/h3>\n<p>I like adding a log format that captures certificate bits. 
It\u2019s invaluable when you\u2019re troubleshooting access checks or tracing requests across services.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">log_format mtls '$remote_addr - $remote_user [$time_local] '\n                 '&quot;$request&quot; $status $body_bytes_sent '\n                 '&quot;$http_referer&quot; &quot;$http_user_agent&quot; '\n                 'client_verify=$ssl_client_verify '\n                 'client_dn=&quot;$ssl_client_s_dn&quot; '\n                 'client_fp=$ssl_client_fingerprint';\n\naccess_log \/var\/log\/nginx\/api_mtls.log mtls;\n<\/code><\/pre>\n<p>Just remember that the full PEM can be huge, so I stick to DN and fingerprint in logs, and pass the PEM only to the app when needed.<\/p>\n<h2 id=\"section-5\"><span id=\"Caddy_Clean_Composable_mTLS_With_Minimal_Fuss\">Caddy: Clean, Composable mTLS With Minimal Fuss<\/span><\/h2>\n<p>Caddy has a knack for making TLS configurations feel simple. If you\u2019ve used it for HTTPS automation, you\u2019ll probably enjoy how it handles client authentication too. The core idea is the same: tell Caddy which CA to trust for client certificates, choose a mode (I use <strong>require_and_verify<\/strong> for APIs), and optionally forward identity to your app.<\/p>\n<h3><span id=\"Basic_Caddyfile_for_incoming_mTLS\">Basic Caddyfile for incoming mTLS<\/span><\/h3>\n<p>Here\u2019s a straightforward example. Caddy will serve your site certificate and require clients to present a valid cert signed by your CA. 
The exact shape of the <em>client_auth<\/em> block can vary slightly by version, so if something looks off in your environment, peek at the docs for your release.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">api.example.com {\n    # Caddy manages the server cert automatically if DNS is public\n    # For private\/internal, you can use 'tls internal' or provide certs manually.\n\n    tls {\n        client_auth {\n            mode require_and_verify\n            # Trust this CA for client certs (PEM file)\n            trusted_ca_cert_file \/etc\/ssl\/ca\/clients-ca.crt\n            # Some versions also accept an inline 'trusted_ca_cert' value,\n            # which expects a base64 DER string, not a multi-line PEM block.\n        }\n    }\n\n    @health path \/health\n    handle @health {\n        respond &quot;ok&quot; 200\n    }\n\n    handle {\n        reverse_proxy 127.0.0.1:8080 {\n            # header_up is a reverse_proxy subdirective\n            header_up X-Client-DN {tls_client_subject}\n            header_up X-Client-FP {tls_client_fingerprint}\n        }\n    }\n}\n<\/code><\/pre>\n<p>Caddy exposes handy placeholders like <code>{tls_client_subject}<\/code> and <code>{tls_client_fingerprint}<\/code> that you can forward to your app. With <code>require_and_verify<\/code>, requests without a valid client cert never reach your handlers, so you don\u2019t need a separate \u201cverified\u201d header. If you only want mTLS on certain routes, put the <code>client_auth<\/code> block in a site that\u2019s bound to those routes or use separate sites by hostname\u2014Caddy\u2019s routing is flexible.<\/p>\n<h3><span id=\"mTLS_to_your_upstream_from_Caddy\">mTLS to your upstream from Caddy<\/span><\/h3>\n<p>If your backend expects client certs, Caddy can present one as it proxies. You\u2019ll also want to verify the upstream with your internal CA. 
This keeps the trust story consistent end to end.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">internal-api.example.com {\n    tls {\n        client_auth {\n            mode require_and_verify\n            trusted_ca_cert_file \/etc\/ssl\/ca\/clients-ca.crt\n        }\n    }\n\n    handle {\n        reverse_proxy https:\/\/backend.internal.example:8443 {\n            transport http {\n                tls\n                tls_server_name backend.internal.example\n\n                # Trust the upstream's CA\n                tls_trusted_ca_certs \/etc\/ssl\/ca\/services-ca.crt\n\n                # Present our client cert to the upstream\n                tls_client_auth \/etc\/ssl\/certs\/gateway-client.crt \/etc\/ssl\/private\/gateway-client.key\n            }\n        }\n    }\n}\n<\/code><\/pre>\n<p>In some versions, the directive names for CA trust and client certs are slightly different (newer releases, for example, move CA trust to <code>tls_trust_pool<\/code>), so I always keep the <a href=\"https:\/\/caddyserver.com\/docs\/caddyfile\/directives\/tls#client-authentication\" rel=\"nofollow noopener\" target=\"_blank\">Caddy TLS client authentication docs<\/a> handy. The essence doesn\u2019t change: set the trusted CA, require verification, and present a client certificate when going upstream.<\/p>\n<h2 id=\"section-6\"><span id=\"Rotating_Certificates_Without_Drama_The_Practical_Playbook\">Rotating Certificates Without Drama (The Practical Playbook)<\/span><\/h2>\n<p>Rotation is where mTLS projects either age gracefully or become a weekly fire drill. Here\u2019s the rhythm that\u2019s worked for me, even in busy environments.<\/p>\n<h3><span id=\"1_Use_shortlived_certs_and_automate_issuance\">1) Use short\u2011lived certs and automate issuance<\/span><\/h3>\n<p>Short\u2011lived certs give you natural key rotation. If your CA supports it, aim for days, not months. The good news is that good tooling makes this feel automatic. 
If you haven\u2019t picked a system yet, explore something like <a href=\"https:\/\/smallstep.com\/docs\/step-ca\/\" rel=\"nofollow noopener\" target=\"_blank\">step\u2011ca<\/a> or a managed internal PKI that integrates with your deployment pipeline.<\/p>\n<h3><span id=\"2_Store_and_deliver_keys_like_theyre_production_data\">2) Store and deliver keys like they\u2019re production data<\/span><\/h3>\n<p>Private keys deserve the same care you give secrets in your database\u2014because they are secrets. In containers, use read\u2011only mounts where possible. In VMs, restrict permissions to the user running the proxy. A tiny permissions fix now saves you from a big incident later.<\/p>\n<h3><span id=\"3_For_CA_rotation_trust_two_CAs_during_the_handover\">3) For CA rotation, trust two CAs during the handover<\/span><\/h3>\n<p>When rotating a CA, create a bundle that contains both the old and new CA and point your proxies to it. Nginx and Caddy will accept either chain during that period. Roll out new client certs signed by the new CA, then remove the old one after you\u2019re confident everything is flipped.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Example: trust bundle with two CAs\ncat new-ca.crt old-ca.crt &gt; clients-ca-bundle.crt\n\n# Point Nginx\/Caddy to clients-ca-bundle.crt during rotation window\n<\/code><\/pre>\n<p>On the client side, you can do the same trick for trusting servers while you shift your server cert chains.<\/p>\n<h3><span id=\"4_Reloads_without_downtime\">4) Reloads without downtime<\/span><\/h3>\n<p>Reloading Nginx to pick up new certs is a single, graceful command that doesn\u2019t drop connections. Caddy watches file changes and often hot\u2011reloads automatically. 
Either way, schedule rotations in predictable windows, monitor, and document the steps so anyone on the team can do it calmly.<\/p>\n<h3><span id=\"5_Revoke_fast_and_log_it\">5) Revoke fast, and log it<\/span><\/h3>\n<p>Sometimes you just need to switch off a client. If you\u2019re using short\u2011lived certs, simply not renewing is enough. For longer-lived ones, use your CA\u2019s revocation or maintain a denylist\/allowlist in your proxy based on fingerprints. In critical cases, cut the CA and rotate to a new one. It sounds scary, but with bundles and staged rollouts, it\u2019s a measured move\u2014not a panic button.<\/p>\n<h2 id=\"section-7\"><span id=\"How_to_Test_Without_Losing_Your_Weekend\">How to Test Without Losing Your Weekend<\/span><\/h2>\n<p>Testing mTLS feels intimidating until you turn it into a simple checklist. I keep a little routine that has saved me from silly mistakes more times than I can count.<\/p>\n<h3><span id=\"Sanity_checks_with_curl\">Sanity checks with curl<\/span><\/h3>\n<p>First, confirm the server enforces client certs. Without a cert, you should get bounced.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Should be 403 or a custom error\ncurl -i https:\/\/api.example.com\/health\n<\/code><\/pre>\n<p>Then try again with your client cert and key.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Should be 200 now\ncurl -i --cert client.crt --key client.key https:\/\/api.example.com\/health\n<\/code><\/pre>\n<p>If you\u2019re working with an internal CA, don\u2019t forget to trust it when verifying upstreams. 
For curl, that\u2019s the <code>--cacert<\/code> option.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">curl -i --cert client.crt --key client.key \\\n  --cacert ca.crt https:\/\/api.example.com\/health\n<\/code><\/pre>\n<h3><span id=\"Check_the_whole_chain\">Check the whole chain<\/span><\/h3>\n<p>If anything fails, validate the certificate chain. Is the intermediate in the right file? Does the client cert\u2019s chain lead to the CA your proxy trusts? With Nginx, look closely at <code>ssl_client_certificate<\/code>. If you\u2019re unsure, see the official notes on client cert verification in the <a href=\"https:\/\/nginx.org\/en\/docs\/http\/ngx_http_ssl_module.html#ssl_client_certificate\" rel=\"nofollow noopener\" target=\"_blank\">Nginx SSL module docs<\/a>. It\u2019s the single page I revisit most when things act weird.<\/p>\n<h3><span id=\"Make_the_app_aware_of_identity\">Make the app aware of identity<\/span><\/h3>\n<p>Return the client DN or fingerprint in a special test route so you see what the app receives. That takes the guesswork out of whether your headers are populated.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Example pseudo-output from a \/debug\/identity endpoint:\n{\n  &quot;client_verify&quot;: &quot;SUCCESS&quot;,\n  &quot;client_dn&quot;: &quot;CN=checkout-service,O=payments&quot;,\n  &quot;client_fp&quot;: &quot;F1:32:...:9B&quot;\n}\n<\/code><\/pre>\n<h2 id=\"section-8\"><span id=\"Design_Tips_I_Wish_Someone_Had_Told_Me\">Design Tips I Wish Someone Had Told Me<\/span><\/h2>\n<p>A few things I\u2019ve picked up the hard way, offered to save you from the same detours.<\/p>\n<p>First, decide where \u201cauthorization\u201d really lives. mTLS gives you strong authentication, but you still need to decide <strong>what<\/strong> a client can do. I like letting the gateway do broad allow\/deny based on certificate identity, and then letting the app enforce resource-level rules. 
That way, even if you mess up an allowlist, the app still has final say.<\/p>\n<p>Second, name certificates like a librarian. If your CN says \u201cservice\u201142\u201d today and \u201csvc\u201142\u2011green\u201d tomorrow, your logs and maps will look like alphabet soup. Pick a pattern\u2014CN as service name, O as team, SAN with a stable ID\u2014and stick with it. Even better, store those mappings in code or config, not in someone\u2019s head.<\/p>\n<p>Third, don\u2019t ignore the operational side: dashboards and alerts. Track counts of mTLS failures per route, fingerprints that get denied repeatedly, and success rates over time. When you see an odd pattern, you catch misconfigurations before your users do.<\/p>\n<p>And finally, consider scope. Not every endpoint needs mTLS. Sometimes it\u2019s perfect for admin, internal, or webhook endpoints, while the public API still uses tokens for customers. Mix and match intentionally.<\/p>\n<h2 id=\"section-9\"><span id=\"A_Real-World_Pattern_Secure_Webhooks_and_Internal_Calls\">A Real-World Pattern: Secure Webhooks and Internal Calls<\/span><\/h2>\n<p>One of my favorite uses for mTLS is securing inbound webhooks from a trusted partner or internal event bus. You can issue a client cert specifically for the webhook sender, verify it at the edge, and then validate payload signatures as a second layer. That gives you a belt\u2011and\u2011suspenders approach, and it\u2019s very resilient against spoofing. Add rate limits and you\u2019ve got a tidy little fortress.<\/p>\n<p>Same idea applies to microservices talking within a cluster. The gateway requires client certs from services. Services that call upstreams present their own client certs. Every hop is mutually verified. 
If you\u2019re moving to that model gradually, start with the most sensitive flows, then widen the circle as your certificate automation matures.<\/p>\n<h2 id=\"section-10\"><span id=\"A_Quick_Nudge_mTLS_for_Admin_Panels\">A Quick Nudge: mTLS for Admin Panels<\/span><\/h2>\n<p>While we\u2019re here, mTLS is one of the calmest ways to lock down admin interfaces\u2014panels, dashboards, anything you never want exposed to the open internet. If that topic is on your list, I wrote a step\u2011by\u2011step story about it: <a href=\"https:\/\/www.dchost.com\/blog\/en\/yonetim-panellerini-mtls-ile-nasil-kale-gibi-korursun-nginxte-istemci-sertifikalari-adim-adim\/\">protecting admin panels with mTLS on Nginx<\/a>. It builds on a lot of the same ideas and is a natural companion to this guide.<\/p>\n<h2 id=\"section-11\"><span id=\"Common_Gotchas_And_the_Tiny_Fixes\">Common Gotchas (And the Tiny Fixes)<\/span><\/h2>\n<p>Here are the bumps I stumble over most often, and how I smooth them out.<\/p>\n<p>If your client gets a 400\/403 and you\u2019re sure you passed a cert, check the chain file you set on the server. With Nginx, <code>ssl_client_certificate<\/code> must include intermediates if your client cert is signed by one. With Caddy, make sure your trusted CA matches the issuer exactly. When in doubt, rebuild the chain and test with <code>openssl verify -CAfile<\/code>.<\/p>\n<p>If upstream mTLS fails, look at SNI. A mismatch between <code>proxy_ssl_name<\/code> and the upstream\u2019s certificate SAN will cause verification errors even if the CA is correct. I learned that one the hard way at 2 a.m. once.<\/p>\n<p>If logs look empty, double\u2011check your placeholders or variables. In Nginx, passing the raw PEM as a header can hit size limits; I prefer the fingerprint and DN. 
In Caddy, use the built\u2011in placeholders and ensure your reverse_proxy block forwards them with <code>header_up<\/code>.<\/p>\n<p>If performance worries you, remember that TLS handshakes are the expensive part. Keep connections alive between your proxy and upstreams, and use HTTP\/2 where it makes sense. For public endpoints, your users won\u2019t notice the difference once connections are warm.<\/p>\n<h2 id=\"section-12\"><span id=\"Wrapup_Make_Your_Services_Shake_Hands_Not_Just_Wave\">Wrap\u2011up: Make Your Services Shake Hands, Not Just Wave<\/span><\/h2>\n<p>When you boil it down, mTLS is about making sure each side knows who it\u2019s talking to\u2014no guesswork, no hoping a shared secret wasn\u2019t leaked somewhere last quarter. Nginx and Caddy both make it surprisingly straightforward once you have a clean way to issue and rotate certificates. Start with a small slice of your system, like a webhook or an internal endpoint. Practice issuing a cert, verifying it at the edge, and passing identity through to your app. Then layer in upstream mTLS, rotation, and observability.<\/p>\n<p>If you\u2019re ever unsure, walk it back to basics: does the server trust the right CA, does the client present the right cert, and does the chain link up? With that mindset and a couple of handy commands, you\u2019ll go from \u201cwhy is this failing?\u201d to \u201coh, it\u2019s just the chain\u201d in minutes instead of hours.<\/p>\n<p>Hope this was helpful. If you try this and hit a weird edge case, write it down\u2014you\u2019ll probably save someone else a long night later. See you in the next post, and may your certs always be valid and your rotations blissfully boring.<\/p>\n<h2 id=\"section-13\"><span id=\"FAQ\">FAQ<\/span><\/h2>\n<h3><span id=\"FAQ_public_or_private_CA\">Do I need a public CA for client certificates, or should I run my own?<\/span><\/h3>\n<p>For client certs, you typically run your own internal CA. That way you control who gets a cert, how long it lives, and how to revoke it. Public CAs are perfect for server certificates on public domains, but client auth is a private trust relationship, and an internal CA makes rotation and access control much calmer.<\/p>\n<h3><span id=\"FAQ_gradual_rollout\">How do I roll out mTLS gradually without breaking everything?<\/span><\/h3>\n<p>Start with a single endpoint\u2014ideally internal or a high\u2011value webhook. Issue one client cert, enforce mTLS there, and pass identity to your app so you can log it. Once you\u2019re happy, expand to more routes or services. During CA rotation, trust both old and new CAs with a bundle, then remove the old one after clients switch. Tiny, controlled steps beat big\u2011bang changes every time.<\/p>\n<h3><span id=\"FAQ_proxy_vs_app_verification\">What\u2019s the difference between verifying clients at Nginx\/Caddy and in the app?<\/span><\/h3>\n<p>Think of the proxy as the bouncer checking IDs, and the app as the host deciding where to seat people. The proxy should handle TLS verification and pass the identity along in headers. The app then authorizes actions based on that identity. Doing both gives you defense in depth: strong identity at the edge, and business rules where they belong.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>{ &#8220;title&#8221;: &#8220;The Calm Way to mTLS: How I Set Up Certificate Verification on Nginx and Caddy for APIs and Microservices&#8221;, &#8220;content&#8221;: &#8220; Ever had that moment when your logs show a mysterious spike in API traffic at 3 a.m., but nothing seems strange on the app side? I remember a night like that with [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":1660,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-1659","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1659","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=1659"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/1659\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/1660"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=1659"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=1659"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=1659"}],"curies":[{
"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}