{
"title": "The Calm Way to mTLS: How I Set Up Certificate Verification on Nginx and Caddy for APIs and Microservices",
"content": "
Ever had that moment when your logs show a mysterious spike in API traffic at 3 a.m., but nothing seems strange on the app side? I remember a night like that with a client’s microservice stack. Everything looked “okay,” yet requests were slipping through using a token that shouldn’t have worked from where they came from. We had rate limits, we had API keys, we even had IP allowlists. Still, it didn’t feel like the API knew who was talking to it. It felt like a club checking tickets but not IDs.
That’s when mTLS clicked for me—mutual TLS, where both sides show their credentials. It’s a two-way handshake: the server proves who it is, and the client proves who it is, too. Suddenly, your API isn’t just “open with a password.” It’s a conversation between verified identities. And if you design it right, it’s surprisingly calm to manage.
In this guide, I’ll walk you through setting up mTLS on both Nginx and Caddy—two web servers I use a lot for APIs and microservices. We’ll talk about issuing client certificates, verifying them at the edge, passing identity through to backends, doing mTLS on upstream connections, rotating certs without drama, and testing your setup without losing your weekend. I’ll share real-world tips that saved me headaches, and yes, some configs you can copy and tweak right away.
Table of Contents
- 1 How mTLS Fits in the Real World (And Why It’s Not as Scary as It Sounds)
- 2 Before You Start: Issuing Client Certificates the Friendly Way
- 3 Nginx: Verifying Client Certificates at the Edge
- 4 Caddy: Clean, Composable mTLS With Minimal Fuss
- 5 Rotating Certificates Without Drama (The Practical Playbook)
- 6 How to Test Without Losing Your Weekend
- 7 Design Tips I Wish Someone Had Told Me
- 8 A Real-World Pattern: Secure Webhooks and Internal Calls
- 9 A Quick Nudge: mTLS for Admin Panels
- 10 Common Gotchas (And the Tiny Fixes)
- 11 Wrap‑up: Make Your Services Shake Hands, Not Just Wave
How mTLS Fits in the Real World (And Why It’s Not as Scary as It Sounds)
Here’s the thing about mTLS: it’s not a silver bullet for everything. But for service-to-service trust or critical endpoints—think payment webhooks, internal APIs, admin interfaces—it’s rock solid. Instead of trusting a shared secret floating around, you’re leaning on certificates signed by a CA you control. It’s identity with receipts.
Think of it like a VIP section at a venue. Your server SSL cert is the venue’s neon sign saying “This is the real place.” The client cert is the wristband, signed by the same organizer. The bouncer (your proxy) checks both, and if the wristband’s fake or from another event, it’s a polite but firm “not tonight.”
In my experience, mTLS shines when you want three things at once: you want to know exactly who is calling your API, you want to be able to revoke access fast (by disabling a cert), and you want to avoid secret sprawl in environment variables and CI systems. It’s especially nice in microservice architectures where services talk a lot, because you can hand out short-lived certs and automate renewals.
Before You Start: Issuing Client Certificates the Friendly Way
You need a CA (Certificate Authority) to issue client certificates your server will trust. You can go simple and generate a quick CA with OpenSSL for a proof of concept. For production, I like tooling that makes rotation painless and gives you an audit trail—things like a private CA service or an internal PKI. If you’re starting fresh, Smallstep’s step-ca is a really friendly way to get short‑lived client certs into your services without turning your ops into paperwork.
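To make that concrete, here's roughly what issuance looks like with the step CLI. Treat it as a sketch: the CA URL and fingerprint are placeholders, and flags can vary between versions.
# Point the step CLI at your CA (URL and fingerprint are placeholders)
step ca bootstrap --ca-url https://ca.internal.example --fingerprint <ca-fingerprint>
# Request a short-lived client certificate for a service
step ca certificate "checkout-service" client.crt client.key --not-after=24h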
Quick demo CA and client certs with OpenSSL
Here’s a fast way to stand up a demo CA and one client certificate. It’s not perfect PKI hygiene, but it’s enough to get mTLS working locally or in a lab.
# 1) Create a simple root CA (demo only!)
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 \
-keyout ca.key -out ca.crt -nodes -subj "/CN=Demo mTLS CA"
# 2) Create a client key + CSR
openssl req -newkey rsa:2048 -keyout client.key -out client.csr -nodes \
-subj "/CN=checkout-service/O=payments"
# 3) Sign the client cert with your CA
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
-out client.crt -days 365 -sha256
# You now have: ca.crt (trusted CA), client.crt (client cert), client.key (client key)
One tip: put something meaningful in the subject. I like using the CN as the service name and O as a team or domain of responsibility. If you’re deep into zero-trust, consider using a SPIFFE ID in the SAN. Whatever you choose, stay consistent—future you will thank you during incident response.
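If you want to try the SPIFFE route, here's one way to add a URI SAN when signing with OpenSSL. The spiffe:// ID below is an invented example, and the <(...) process substitution assumes bash.
# Sign the CSR with a SPIFFE URI in the SAN (demo ID, bash-only syntax)
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
-out client.crt -days 365 -sha256 \
-extfile <(printf "subjectAltName=URI:spiffe://example.org/ns/payments/sa/checkout")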
How clients call your mTLS API
From a client’s point of view, nothing is wild here. They just present their cert and key over TLS. A quick check with curl tells you if the basics are right:
curl --cert client.crt --key client.key https://api.example.com/health
If you get a 200, you’re on your way. If you get blocked, we’ll walk through the most common reasons in a bit.
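When curl's errors are too terse, openssl s_client shows the whole handshake, including whether the server even asked for a client certificate (look for the "Acceptable client certificate CA names" section):
# Inspect the handshake in detail; "Verify return code: 0 (ok)" means the chain is happy
openssl s_client -connect api.example.com:443 -cert client.crt -key client.key -CAfile ca.crt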
Nginx: Verifying Client Certificates at the Edge
Nginx is a dependable doorman for mTLS. You tell it which CA to trust and whether to demand a client cert, and it handles the verification. Then you can pass identity downstream with headers, restrict access, or even enforce per‑endpoint policies.
Basic Nginx server block for mTLS
This example assumes you already have a normal server certificate for your domain (from Let’s Encrypt or similar). We’re adding client verification on top.
server {
listen 443 ssl http2;
server_name api.example.com;
# Your normal server TLS certs
ssl_certificate /etc/ssl/certs/api.crt;
ssl_certificate_key /etc/ssl/private/api.key;
# Trust this CA for client certs
ssl_client_certificate /etc/ssl/ca/clients-ca.crt;
ssl_verify_client on; # require a valid client cert
ssl_verify_depth 2;
# Optional: friendlier error for client cert issues
error_page 495 496 = @mtls_failed;
# Pass client identity to upstreams
proxy_set_header X-Client-Verify $ssl_client_verify;
proxy_set_header X-Client-DN $ssl_client_s_dn;
proxy_set_header X-Client-Cert $ssl_client_escaped_cert; # urlencoded PEM; the older $ssl_client_cert is deprecated
location / {
# If you like explicit checks...
if ($ssl_client_verify != SUCCESS) { return 403; }
proxy_pass http://api_backend;
}
location @mtls_failed {
return 403 "Client certificate required or invalid.";
}
}
upstream api_backend {
server 127.0.0.1:8080;
}
At this point, your API requires a client cert signed by clients-ca.crt. Nginx verifies the chain and depth, so if you’re using an intermediate CA, make sure the file contains the full chain needed for verification.
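A quick command-line sanity check for that, assuming a typical root-plus-intermediate setup (the file names are illustrative):
# Build the trust file Nginx reads: intermediate first, then root
cat intermediate-ca.crt root-ca.crt > clients-ca.crt
# Confirm a client cert actually verifies against it
openssl verify -CAfile clients-ca.crt client.crt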
Allowlisting particular clients or teams
Let’s say you want only your “payments” clients to hit a certain endpoint. You can extract the subject DN and map it to a simple allow/deny flag. I often start with something simple like this and move to a registry or database policy later.
# Put this at http{} level or before the server block
map $ssl_client_s_dn $client_allowed {
default 0;
~*CN=checkout-service.*O=payments 1; # allow payments team checkout client
}
server {
# ... same TLS setup as above ...
location /payments/ {
if ($client_allowed = 0) { return 403; }
proxy_pass http://payments_backend;
}
}
Pro tip: if you embed identifiers in SANs instead of the subject, note that stock Nginx doesn't expose SAN fields as variables, so pass the (escaped) cert through to your app and parse the SAN there. Just be sure you never trust headers from the outside world for identity—only the ones you set after TLS verification.
Passing identity downstream, safely
When you proxy, you can pass the verified identity to your app as headers. I like setting these:
proxy_set_header X-Client-Verify $ssl_client_verify;
proxy_set_header X-Client-DN $ssl_client_s_dn;
proxy_set_header X-Client-Cert $ssl_client_escaped_cert; # urlencoded PEM, may be large
proxy_set_header X-Client-FP $ssl_client_fingerprint;
On the app side, log these on auth success/fail. It’s gold during investigations and helps you build strong audit trails.
mTLS from Nginx to the upstream service
Zero‑trust doesn’t stop at the edge. If your backend is also behind TLS and requires client certs, Nginx can present a client certificate as well and verify the upstream’s identity.
upstream secure_backend {
server 10.0.0.12:8443;
keepalive 32;
}
server {
# ...incoming mTLS setup...
location /internal/ {
proxy_pass https://secure_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_ssl_server_name on;
proxy_ssl_name backend.internal.example;
# Verify upstream server cert against your internal CA
proxy_ssl_trusted_certificate /etc/ssl/ca/services-ca.crt;
proxy_ssl_verify on;
proxy_ssl_verify_depth 2;
# Present our client certificate to the upstream
proxy_ssl_certificate /etc/ssl/certs/gateway-client.crt;
proxy_ssl_certificate_key /etc/ssl/private/gateway-client.key;
# Optional: restrict ciphers/TLS versions
proxy_ssl_protocols TLSv1.2 TLSv1.3;
}
}
Couple of small but important details here. First, set proxy_ssl_server_name on; so SNI matches what the upstream expects. Second, use a proxy_ssl_trusted_certificate that contains the CA your upstream’s cert chains to. If you get a verification error, nine times out of ten it’s the chain or the SNI.
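Before blaming Nginx, I like to reproduce the upstream handshake by hand, using the same SNI and certs the proxy would:
# Talk to the upstream directly, with the SNI Nginx would send
openssl s_client -connect 10.0.0.12:8443 -servername backend.internal.example \
-CAfile /etc/ssl/ca/services-ca.crt \
-cert /etc/ssl/certs/gateway-client.crt -key /etc/ssl/private/gateway-client.key
# You want to see "Verify return code: 0 (ok)" in the output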
Logging and visibility: your future self will thank you
I like adding a log format that captures certificate bits. It’s invaluable when you’re troubleshooting access checks or tracing requests across services.
log_format mtls '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'client_verify=$ssl_client_verify '
'client_dn="$ssl_client_s_dn" '
'client_fp=$ssl_client_fingerprint';
access_log /var/log/nginx/api_mtls.log mtls;
Just remember that the full PEM can be huge, so I stick to DN and fingerprint in logs, and pass the PEM only to the app when needed.
Caddy: Clean, Composable mTLS With Minimal Fuss
Caddy has a knack for making TLS configurations feel simple. If you’ve used it for HTTPS automation, you’ll probably enjoy how it handles client authentication too. The core idea is the same: tell Caddy which CA to trust for client certificates, choose a mode (I use require_and_verify for APIs), and optionally forward identity to your app.
Basic Caddyfile for incoming mTLS
Here’s a straightforward example. Caddy will serve your site certificate and require clients to present a valid cert signed by your CA. The exact shape of the client_auth block can vary slightly by version, so if something looks off in your environment, peek at the docs for your release.
api.example.com {
# Caddy manages the server cert automatically if DNS is public
# For private/internal, you can use 'tls internal' or provide certs manually.
tls {
client_auth {
mode require_and_verify
# Trust this CA for client certs (PEM)
trusted_ca_cert_file /etc/ssl/ca/clients-ca.crt
# Newer releases (Caddy 2.6+) phrase this as a trust pool instead:
# trust_pool file /etc/ssl/ca/clients-ca.crt
}
}
@health path /health
handle @health {
respond "ok" 200
}
handle {
reverse_proxy 127.0.0.1:8080 {
# header_up belongs inside the reverse_proxy block
header_up X-Client-DN {tls_client_subject}
header_up X-Client-FP {tls_client_fingerprint}
}
}
}
Caddy exposes handy placeholders like {tls_client_subject} and {tls_client_fingerprint} that you can forward to your app. And since the mode is require_and_verify, any request that reaches your handlers has already passed verification—no separate "verified" flag needed. If you only want mTLS on certain routes, put the client_auth block in a site that's bound to those routes or use separate sites by hostname—Caddy's routing is flexible.
mTLS to your upstream from Caddy
If your backend expects client certs, Caddy can present one as it proxies. You’ll also want to verify the upstream with your internal CA. This keeps the trust story consistent end to end.
internal-api.example.com {
tls {
client_auth {
mode require_and_verify
trusted_ca_cert_file /etc/ssl/ca/clients-ca.crt
}
}
handle {
reverse_proxy https://backend.internal.example:8443 {
transport http {
tls
tls_server_name backend.internal.example
# Trust the upstream's CA
tls_trusted_ca_certs /etc/ssl/ca/services-ca.crt
# Present our client cert to the upstream
tls_client_auth /etc/ssl/certs/gateway-client.crt /etc/ssl/private/gateway-client.key
}
}
}
}
In some versions, the directive names for CA trust and client certs are slightly different, so I always keep the Caddy TLS client authentication docs handy. The essence doesn’t change: set the trusted CA, require verification, and present a client certificate when going upstream.
Rotating Certificates Without Drama (The Practical Playbook)
Rotation is where mTLS projects either age gracefully or become a weekly fire drill. Here’s the rhythm that’s worked for me, even in busy environments.
1) Use short‑lived certs and automate issuance
Short‑lived certs give you natural key rotation. If your CA supports it, aim for days, not months. The good news is that good tooling makes this feel automatic. If you haven’t picked a system yet, explore something like step‑ca or a managed internal PKI that integrates with your deployment pipeline.
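As a sketch of what "automatic" can look like with step-ca (flags vary a bit between versions, so double-check yours):
# Renew in the background once ~2/3 of the cert lifetime has passed,
# then run a command of your choosing to pick up the new cert
step ca renew --daemon --exec "nginx -s reload" client.crt client.key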
2) Store and deliver keys like they’re production data
Private keys deserve the same care you give secrets in your database—because they are secrets. In containers, use read‑only mounts where possible. In VMs, restrict permissions to the user running the proxy. A tiny permissions fix now saves you from a big incident later.
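On a typical VM, that boils down to something like this (assuming the proxy runs in the nginx group—adjust for your distro):
# Key readable by root and the proxy's group only, never world-readable
chown root:nginx /etc/ssl/private/gateway-client.key
chmod 640 /etc/ssl/private/gateway-client.key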
3) For CA rotation, trust two CAs during the handover
When rotating a CA, create a bundle that contains both the old and new CA and point your proxies to it. Nginx and Caddy will accept either chain during that period. Roll out new client certs signed by the new CA, then remove the old one after you’re confident everything is flipped.
# Example: trust bundle with two CAs
cat new-ca.crt old-ca.crt > clients-ca-bundle.crt
# Point Nginx/Caddy to clients-ca-bundle.crt during rotation window
On the client side, you can do the same trick for trusting servers while you shift your server cert chains.
4) Reloads without downtime
Reloading Nginx to pick up new certs is a single, graceful command that doesn’t drop connections. Caddy watches file changes and often hot‑reloads automatically. Either way, schedule rotations in predictable windows, monitor, and document the steps so anyone on the team can do it calmly.
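For reference, the commands themselves are pleasantly boring:
# Nginx: validate the config, then reload without dropping connections
nginx -t && nginx -s reload
# Caddy: reload the running config gracefully
caddy reload --config /etc/caddy/Caddyfile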
5) Revoke fast, and log it
Sometimes you just need to switch off a client. If you’re using short‑lived certs, simply not renewing is enough. For longer-lived ones, use your CA’s revocation or maintain a denylist/allowlist in your proxy based on fingerprints. In critical cases, cut the CA and rotate to a new one. It sounds scary, but with bundles and staged rollouts, it’s a measured move—not a panic button.
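A minimal Nginx sketch of that denylist idea—$ssl_client_fingerprint is the SHA-1 of the cert as lowercase hex without colons, and the value below is made up:
# http{} level: flag certs you've decided to block
map $ssl_client_fingerprint $client_revoked {
default 0;
"f132a9c4d8e7b06152f3a4b5c6d7e8f9a0b1c2d3" 1; # made-up example
}
# Then inside a location:
# if ($client_revoked) { return 403; }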
How to Test Without Losing Your Weekend
Testing mTLS feels intimidating until you turn it into a simple checklist. I keep a little routine that has saved me from silly mistakes more times than I can count.
Sanity checks with curl
First, confirm the server enforces client certs. Without a cert, you should get bounced.
# Should be 403 or a custom error
curl -i https://api.example.com/health
Then try again with your client cert and key.
# Should be 200 now
curl -i --cert client.crt --key client.key https://api.example.com/health
If your server certificate comes from an internal CA, curl also needs to trust that CA to verify the server side of the handshake. That's the --cacert option.
curl -i --cert client.crt --key client.key \
--cacert ca.crt https://api.example.com/health
Check the whole chain
If anything fails, validate the certificate chain. Is the intermediate in the right file? Does the client cert’s chain lead to the CA your proxy trusts? With Nginx, look closely at ssl_client_certificate. If you’re unsure, see the official notes on client cert verification in the Nginx SSL module docs. It’s the single page I revisit most when things act weird.
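My go-to pair of commands at this step:
# Does the client cert chain up to the CA the proxy trusts?
openssl verify -CAfile /etc/ssl/ca/clients-ca.crt client.crt
# Who issued it, what's the subject, and when does it expire?
openssl x509 -in client.crt -noout -subject -issuer -dates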
Make the app aware of identity
Return the client DN or fingerprint in a special test route so you see what the app receives. That takes the guesswork out of whether your headers are populated.
# Example pseudo-output from a /debug/identity endpoint:
{
"client_verify": "SUCCESS",
"client_dn": "CN=checkout-service,O=payments",
"client_fp": "F1:32:...:9B"
}
Design Tips I Wish Someone Had Told Me
A few things I’ve picked up the hard way, offered to save you from the same detours.
First, decide where “authorization” really lives. mTLS gives you strong authentication, but you still need to decide what a client can do. I like letting the gateway do broad allow/deny based on certificate identity, and then letting the app enforce resource-level rules. That way, even if you mess up an allowlist, the app still has final say.
Second, name certificates like a librarian. If your CN says “service‑42” today and “svc‑42‑green” tomorrow, your logs and maps will look like alphabet soup. Pick a pattern—CN as service name, O as team, SAN with a stable ID—and stick with it. Even better, store those mappings in code or config, not in someone’s head.
Third, don’t ignore the operational side: dashboards and alerts. Track counts of mTLS failures per route, fingerprints that get denied repeatedly, and success rates over time. When you see an odd pattern, you catch misconfigurations before your users do.
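Even before you have proper dashboards, the mtls log format from earlier gives you a crude early-warning system with nothing but grep (a rough sketch—adjust the path to your log):
# Count failed client verifications (values look like FAILED:reason)
grep -c 'client_verify=FAILED' /var/log/nginx/api_mtls.log
# Fingerprints that get denied most often
grep 'client_verify=FAILED' /var/log/nginx/api_mtls.log | \
grep -o 'client_fp=[^ ]*' | sort | uniq -c | sort -rn | head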
And finally, consider scope. Not every endpoint needs mTLS. Sometimes it’s perfect for admin, internal, or webhook endpoints, while the public API still uses tokens for customers. Mix and match intentionally.
A Real-World Pattern: Secure Webhooks and Internal Calls
One of my favorite uses for mTLS is securing inbound webhooks from a trusted partner or internal event bus. You can issue a client cert specifically for the webhook sender, verify it at the edge, and then validate payload signatures as a second layer. That gives you a belt‑and‑suspenders approach, and it’s very resilient against spoofing. Add rate limits and you’ve got a tidy little fortress.
Same idea applies to microservices talking within a cluster. The gateway requires client certs from services. Services that call upstreams present their own client certs. Every hop is mutually verified. If you’re moving to that model gradually, start with the most sensitive flows, then widen the circle as your certificate automation matures.
A Quick Nudge: mTLS for Admin Panels
While we’re here, mTLS is one of the calmest ways to lock down admin interfaces—panels, dashboards, anything you never want exposed to the open internet. If that topic is on your list, I wrote a step‑by‑step story about it: protecting admin panels with mTLS on Nginx. It builds on a lot of the same ideas and is a natural companion to this guide.
Common Gotchas (And the Tiny Fixes)
Here are the bumps I stumble over most often, and how I smooth them out.
If your client gets a 400/403 and you’re sure you passed a cert, check the chain file you set on the server. With Nginx, ssl_client_certificate must include intermediates if your client cert is signed by one. With Caddy, make sure your trusted CA matches the issuer exactly. When in doubt, rebuild the chain and test with openssl verify -CAfile.
If upstream mTLS fails, look at SNI. A mismatch between proxy_ssl_name and the upstream’s certificate SAN will cause verification errors even if the CA is correct. I learned that one the hard way at 2 a.m. once.
If logs look empty, double‑check your placeholders or variables. In Nginx, passing the raw PEM as a header can hit size limits; I prefer the fingerprint and DN. In Caddy, use the built‑in placeholders and ensure your reverse_proxy block forwards them with header_up.
If performance worries you, remember that TLS handshakes are the expensive part. Keep connections alive between your proxy and upstreams, and use HTTP/2 where it makes sense. For public endpoints, your users won’t notice the difference once connections are warm.
Wrap‑up: Make Your Services Shake Hands, Not Just Wave
When you boil it down, mTLS is about making sure each side knows who it’s talking to—no guesswork, no hoping a shared secret wasn’t leaked somewhere last quarter. Nginx and Caddy both make it surprisingly straightforward once you have a clean way to issue and rotate certificates. Start with a small slice of your system, like a webhook or an internal endpoint. Practice issuing a cert, verifying it at the edge, and passing identity through to your app. Then layer in upstream mTLS, rotation, and observability.
If you’re ever unsure, walk it back to basics: does the server trust the right CA, does the client present the right cert, and does the chain link up? With that mindset and a couple of handy commands, you’ll go from “why is this failing?” to “oh, it’s just the chain” in minutes instead of hours.
Hope this was helpful. If you try this and hit a weird edge case, write it down—you’ll probably save someone else a long night later. See you in the next post, and may your certs always be valid and your rotations blissfully boring.
",
"focus_keyword": "mTLS on Nginx and Caddy",
"meta_description": "Set up mTLS on Nginx and Caddy for APIs and microservices. Friendly steps for issuing client certs, safe rotation, upstream verification, and easy testing.",
"faqs": [
{
"question": "Do I need a public CA for client certificates, or should I run my own?",
"answer": "Great question! For client certs, you typically run your own internal CA. That way you control who gets a cert, how long it lives, and how to revoke it. Public CAs are perfect for server certificates on public domains, but client auth is a private trust relationship, and an internal CA makes rotation and access control much calmer."
},
{
"question": "How do I roll out mTLS gradually without breaking everything?",
"answer": "Start with a single endpoint—ideally internal or a high‑value webhook. Issue one client cert, enforce mTLS there, and pass identity to your app so you can log it. Once you're happy, expand to more routes or services. During CA rotation, trust both old and new CAs with a bundle, then remove the old one after clients switch. Tiny, controlled steps beat big‑bang changes every time."
},
{
"question": "What's the difference between verifying clients at Nginx/Caddy and in the app?",
"answer": "Think of the proxy as the bouncer checking IDs, and the app as the host deciding where to seat people. The proxy should handle TLS verification and pass the identity along in headers. The app then authorizes actions based on that identity. Doing both gives you defense in depth: strong identity at the edge, and business rules where they belong."
}
]
}
