Technology

Cloudflare Tunnel with cloudflared: The Calm, Zero‑Trust Way to Publish Apps Without Opening a Single Port

It started on a Tuesday afternoon when I was migrating a little internal dashboard for a client. Nothing mission critical, just a tidy web app that showed inventory levels and a couple of metrics. The devs wanted to demo it quickly to a partner. You know the drill: “Can you open port 443 on that VPS and add another subdomain?” I felt my shoulders tighten. Not because it was hard, but because I didn’t want to play whack‑a‑mole with firewalls and expose yet another service to the entire internet. So I took a breath and said, “Give me twenty minutes. No ports. Zero drama.” That’s when Cloudflare Tunnel and cloudflared did their usual magic. We published the app behind Zero‑Trust Access, added mTLS where it mattered, and kept every inbound port closed like a bank vault at midnight.

If you’ve ever found yourself dreading NAT rules, hairpinning, reverse proxies, and firewall exceptions just to show a page to three people, this one’s for you. In the next sections, I’ll walk you through how Cloudflare Tunnel and the cloudflared connector let you publish apps without public ingress, what mutual TLS (mTLS) actually buys you here, how to layer Zero‑Trust Access policies like a comfortable security blanket, and the small, real‑world lessons I’ve learned running tunnels for everything from hobby home labs to production control planes. We’ll keep it friendly, practical, and honest about the tradeoffs.

The Aha Moment: Publishing Without Opening Ports

Here’s the thing: the old mental model says, “If the outside world needs to see my app, I must open a port and point DNS at my server.” That’s also when you start worrying about scans, bots, and the occasional mishap where the wrong config lets the wrong thing show up. I used to live in that headspace until cloudflared flipped the direction of the connection. With Cloudflare Tunnel, your server doesn’t listen for strangers on the internet. It dials out to Cloudflare over an encrypted, mutually authenticated connection. When someone visits your domain, Cloudflare serves them and then forwards the request back down that secure pipe to your app. No inbound firewall rules. No public IP. No knocking on your door.

Think of it like this: instead of leaving your front door unlocked and trusting a doorbell camera, you keep the door bolted and use a secret, monitored hallway with a guard at each end. One guard is Cloudflare’s edge, the other is cloudflared running on your box. They know each other by name, exchange mutual TLS certificates, and refuse to talk to anyone else. If you’ve ever been behind CGNAT or a restrictive corporate network and still needed to reach your app, you know how liberating that can feel.

What sold me wasn’t just the convenience; it was the simplicity. When you don’t have to expose ports, you suddenly stop worrying about all the stuff that tends to pile on: rate limiting on the raw origin, constant hardening on the public edge, weird bots, and the occasional drive‑by vulnerability scan that lights up your logs. You let Cloudflare deal with eyeball traffic, TLS termination, DDoS, and routing, while your origin focuses on doing one thing well: serving your app safely, in private.

Where Zero‑Trust Access Fits: The Front Door You Actually Control

Publishing an app is only half the story. The other half is deciding who gets through. When I first tried Cloudflare Access, it felt like someone had gently peeled off three layers of complexity and handed me a clean canvas. You define an application (say, dashboard.example.com), then stack rules such as “only members of this Google Workspace group,” “must be on a company device,” “must use a security key,” or “allow this partner’s email domain for read‑only.” You can toss in SSO with your identity provider and—if you like—require mTLS device certificates. The result is a front door that feels less like a sticky keypad and more like a polite doorman who knows your face and asks the right questions.

One of my clients had a staging environment they never wanted public, but it needed to be handy for contractors. The old way was, “Let’s create VPN accounts and hope they disconnect properly when they’re done.” The new way was, “Add them to a group, give them Access, and set a sunset rule.” They’d hit the URL, get prompted by Cloudflare, and be inside in seconds—with full auditing on who did what and when. When they finished the project, we removed them from the policy and the door closed behind them automatically.

Access also plays nicely with non‑browser clients. If you’re exposing a service that needs automation—say, a webhook or a CI job—you can use service tokens. That’s basically an application‑specific identity that bypasses the human SSO prompts but still proves it’s a legitimate client. You can even combine these with device posture checks when humans are involved, which keeps the whole thing sane without making everyone jump through hoops every five minutes.

From Zero to Published: My Calm, Repeatable Setup

Let’s walk through the flow I use when I want something online, quickly, without opening ports. I’m assuming your domain is on Cloudflare already. If not, move the DNS over and give the nameservers a moment to settle. You don’t need to do anything fancy in DNS yet; the tunnel can create the record for you.

First, install cloudflared on the machine that runs your app. The docs are clear and up‑to‑date, but the gist is: grab the package for your OS, install it, and verify the version.

Helpful docs if you want the official guide: how to install cloudflared, and the general Cloudflare Tunnel overview.

Once installed, authenticate cloudflared with your Cloudflare account. This opens a browser window where you pick the zone to authorize. Behind the scenes, it creates credentials that cloudflared will use to build that mutually authenticated connection to the edge.

cloudflared tunnel login

Then, create a named tunnel. I like giving it a human name so I remember what it’s for.

cloudflared tunnel create demo-dashboard

This writes a credentials file and prints a tunnel ID. Next, I make a config file that maps a hostname to my local service. You can use ports on localhost, unix sockets for some services, or even upstream HTTPS if you’ve done your own origin TLS. Here’s a simple example where my app listens on port 3000.

# ~/.cloudflared/config.yml
tunnel: demo-dashboard
credentials-file: /home/ubuntu/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: dashboard.example.com
    service: http://localhost:3000
  - service: http_status:404

Now wire up DNS. You can tell cloudflared to create a proxied CNAME that routes traffic for dashboard.example.com through the tunnel.

cloudflared tunnel route dns demo-dashboard dashboard.example.com

Finally, run the tunnel. For quick tests, I do it in the foreground; for production, I install the service so systemd keeps it alive across reboots.

# Quick test
cloudflared tunnel run demo-dashboard

# Or install as a service (varies slightly by OS)
sudo cloudflared service install

If everything’s aligned, you should be able to visit dashboard.example.com in your browser and hit your app. It’ll feel like a normal site, but under the hood, there are no open inbound ports on your server. The only connection that exists is your cloudflared process reaching out to Cloudflare’s edge over mTLS.

Next, I jump into the Zero Trust dashboard and add an Access application for the hostname. This is where you set who gets in. Maybe it’s “anyone in my company domain,” or “only these three people,” or “must pass both SSO and hardware key.” The nice part is you can layer policies and test them safely. If you’re helping a partner or a contractor, leave yourself a quick backdoor rule for your admin email so you don’t lock yourself out during experiments. Then remove it when you’re done.

mTLS Without the Mystery: Where It Lives in the Tunnel Flow

Mutual TLS can sound intimidating if you’ve only used one‑sided TLS before—where the server proves who it is to the client, but the client stays anonymous. With Cloudflare Tunnel, mTLS is native between Cloudflare and your cloudflared connector. They exchange certificates, prove identity both ways, and refuse to talk to impostors. That’s table stakes for this architecture. Your tunnel credentials anchor that trust.

From the browser to Cloudflare, you’ll have normal HTTPS. That’s the usual TLS termination at the edge, with all the web optimizations and protections Cloudflare offers. From Cloudflare to your cloudflared, it’s mTLS by design. And then from cloudflared to your local app, you get to choose. If your app speaks HTTP on localhost, cloudflared will talk HTTP. If your app speaks HTTPS with a self‑signed certificate, you can tell cloudflared to verify it against a custom CA bundle, which keeps that final hop honest too.

Here’s a config example that validates a self‑signed origin cert. I like this when the local hop might be on a different machine or when I want belt‑and‑suspenders even on localhost.

ingress:
  - hostname: dashboard.example.com
    service: https://localhost:3443
    originRequest:
      originServerName: dashboard.internal
      noTLSVerify: false
      caPool: /etc/ssl/private/my-origin-ca.pem
  - service: http_status:404

The originServerName helps with SNI when your origin expects a specific name. The caPool points to a file containing the certificate of the CA that issued your origin cert (often your own small CA). Keep noTLSVerify at false—that’s the default—so verification actually happens; flipping it to true disables the check entirely. If any of that fails, the request doesn’t pass through—exactly what we want.

Now, you might be thinking, “What if I’m not using Tunnel and still want edge‑to‑origin mTLS?” In that case, look into Authenticated Origin Pulls. It’s a great way to confirm that incoming HTTPS traffic really came through Cloudflare and not directly from some random IP posing as a real visitor. I wrote more about that in this friendly guide to Authenticated Origin Pulls and mTLS, which pairs nicely with what we’re doing here.

Access Policies That Actually Match Real Life

In my experience, the magic of Access is that it shapes itself to how people really work. A few patterns I use again and again:

For browser apps, I set an explicit application to the hostname and require SSO with my identity provider. I often add a rule that checks email domain or group membership, then a step‑up method like a security key for admin routes. When device posture matters—say, only managed laptops should touch staging—I turn on the posture rule. The onboarding is as simple as, “Log in with your work account,” and the cleanup when someone leaves is automatic because your IdP controls their status.

For automation, webhooks, and CI, service tokens are the sane alternative to long‑lived passwords on a private endpoint. You generate a client ID and secret in Access, then include them in requests as headers. The request hits Cloudflare, gets recognized as legitimate, and passes through to your tunnel—no human prompts involved. I like that I can rotate these tokens without tearing up infrastructure, and I can scope them per app so there’s no messy cross‑use.
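Concretely, a service-token call carries two headers: CF-Access-Client-Id and CF-Access-Client-Secret, whose values you mint in the Zero Trust dashboard. Here’s a sketch; the ID, secret, and endpoint URL are made-up placeholders, and the request is only printed so you can eyeball it before wiring real credentials into CI.

```shell
# Sketch of a service-token request to an Access-protected endpoint.
# CLIENT_ID/CLIENT_SECRET are example values; keep the real secret in
# your CI secret store, never in the repo.
CLIENT_ID="my-ci-job.access"
CLIENT_SECRET="example-secret"

req="curl -sS \
  -H 'CF-Access-Client-Id: ${CLIENT_ID}' \
  -H 'CF-Access-Client-Secret: ${CLIENT_SECRET}' \
  https://api.example.com/hooks/deploy"

# Print the request instead of sending it; drop the echo once you have
# a real token and your own hostname.
echo "$req"
```

Rotation then becomes a two-line change: mint a new token in the dashboard, swap the secret in your CI store, and nothing else moves.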

For SSH, the built‑in Access workflows are smooth. You can use a short‑lived certificate flow or a ProxyCommand through cloudflared. The point is, your server never needs port 22 open to the world. I remember setting this up for a small team that needed occasional shell access to a staging node. We had SSH over Access within the hour, all without exposing a single port. The developer experience stayed familiar: ssh user@host, with a little behind‑the‑scenes magic handling the identity checks.
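In practice, the ProxyCommand route is a few lines of ssh_config. A sketch—the host alias, hostname, user, and binary path are all assumptions to adjust for your setup:

```
# ~/.ssh/config
Host staging-node
  HostName ssh.example.com        # the Access-protected hostname
  User ubuntu
  ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
```

After that, `ssh staging-node` works as usual; cloudflared handles the Access check behind the scenes on first connect.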

For databases, there’s TCP tunneling. I’m careful here—databases are sensitive and deserve extra guardrails—but the workflow is similar. Cloudflared can forward a local port to a remote Access‑protected hostname. Your DB client thinks it’s talking to localhost, while under the hood the packets are going through the tunnel with the same Zero‑Trust checks at the edge. If I do this, I tend to add strict firewall rules so the DB only listens on loopback, and I log connections aggressively.
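A sketch of that database flow, using db.example.com and Postgres’s 5432 as stand-ins:

```yaml
# Server side: expose the DB through the tunnel as a raw TCP service.
ingress:
  - hostname: db.example.com
    service: tcp://localhost:5432
  - service: http_status:404

# Client side, run by whoever needs access (after passing Access checks):
#   cloudflared access tcp --hostname db.example.com --url 127.0.0.1:5432
# Then point the DB client at 127.0.0.1:5432 as if the DB were local.
```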

And yes, there are a few quirks to keep in mind. If you’re doing websockets, SSE, or gRPC, test early. The good news is the tunnel pipeline supports these well, but I like to verify the behavior with a quick canary deployment before I invite more people. When in doubt, run cloudflared with higher verbosity and watch the logs as you click around the app. Logs don’t lie, and it’s often the fastest path to clarity.

High Availability, Updates, and the “Sleep‑At‑Night” Bits

One of my favorite things about Cloudflare Tunnel is how easy it is to make it resilient. You can run multiple instances of cloudflared for the same named tunnel—on the same server or on different servers. Cloudflare will treat them as redundant connectors; if one drops, traffic just flows to the others. I once had a tunnel with three connectors: one on the primary host, one on a small sidecar VM in the same region, and one in a different region entirely. When we took the primary down for maintenance, nobody noticed. That’s the level of calm I aim for.

As for updates, cloudflared releases come pretty regularly. I’m a fan of pinning to a known‑good version in production and scheduling maintenance windows to roll forward. On a dev box, sure, I’ll track latest and see what’s new. On a revenue‑generating app, I like predictability. The daemon plays well with systemd; I set Restart to always, a healthy limit on restarts in case something really goes sideways, and structured logs to a file that rotates predictably.
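Here’s roughly what that systemd tuning looks like as a drop-in, assuming the service name that `cloudflared service install` creates; tune the limits to taste and run `systemctl daemon-reload` afterwards.

```ini
# /etc/systemd/system/cloudflared.service.d/override.conf
[Unit]
# Give up only after 10 failed starts within 5 minutes
StartLimitIntervalSec=300
StartLimitBurst=10

[Service]
Restart=always
RestartSec=5
```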

Secrets are straightforward here: the tunnel credentials file is your crown jewel. Treat it like any other secret: limit its permissions, store backups carefully, and don’t paste it into random places. If you’re using service tokens for automation, rotate them on a schedule you can live with—thirty, sixty, or ninety days—and keep a short playbook of where they’re used so rotation doesn’t become a scavenger hunt.

On the network side, one thing many people overlook is egress policy. Even though you’re not exposing inbound ports, your server still reaches out to Cloudflare. If you’re in a locked‑down environment, be explicit about allowing those egress connections. I once had a firewall appliance drop traffic silently because of a new inspection rule. The tunnel would connect, then get ratcheted down to a trickle under load. The fix took five minutes once we knew the cause, but the lesson stuck: grab a small allowlist for outgoing connections to Cloudflare endpoints and lock it in.

Real‑World Troubleshooting: Read the Tea Leaves

Let’s keep it real: when something breaks, it often looks like a generic browser error or a vague 5xx. Here’s how I triage without raising my pulse.

If I see a 403 at the browser, I think “Access policy.” Maybe I’m not meeting the rule (wrong account, missing group, expired session). I check the Zero Trust dashboard for the app and look at recent events—there’s usually a clear reason recorded. If it’s a 5xx like 502 or 504, I think “tunnel” or “origin”. Is cloudflared up? Does my app respond locally? I ssh in (through Access, naturally) and curl localhost. If that works, I run cloudflared in the foreground for a minute to watch logs while I refresh the page. Nine times out of ten, the logs point straight to the issue.

For oddities with tokens or automation, I’ll use the cloudflared Access utilities to fetch a token and test a request by hand. If I’m protecting an API with service tokens, I’ll craft a curl with CF‑Access headers to make sure it passes the gate. This is also where time skew can bite you; if a server’s clock drifts, token validation gets cranky. I keep NTP healthy and logs clean so I can see time issues coming from a mile away.

If you’re doing origin TLS verification on the final hop—remember that caPool trick—mismatched names or an expired cert will produce clear, testable errors. That’s a feature. The worst bugs in security are the ones that fail open. Tunnels tend to fail closed, which means a little extra debugging now saves you a worse day later.

Beyond Basics: Multiple Apps, Wildcards, and Private Networks

On day one, you’ll probably publish one hostname to one local service. By week two, you’ll get ambitious. The nice part is cloudflared doesn’t mind. You can stack multiple hostnames and services in ingress rules and reuse the same tunnel. I’ve had a single tunnel handle a half‑dozen internal tools by mapping each subdomain to a different local port or socket. You can even go the other way and put cloudflared on a small “ingress” VM that routes to multiple backend servers on a private network. Tidy, centralized, and easy to observe.
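That half-dozen-tools setup is just a longer ingress list in the same config file. Hostnames, ports, and the socket path below are examples; rules match top to bottom, so the 404 catch-all stays last.

```yaml
ingress:
  - hostname: dashboard.example.com
    service: http://localhost:3000
  - hostname: grafana.example.com
    service: http://localhost:3001
  - hostname: api.example.com
    service: unix:/run/internal-api.sock
  - service: http_status:404
```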

Wildcards are handy if you spin up ephemeral environments. You can define a wildcard hostname in Access and use dynamic routing in your app to map preview‑123.yourdomain.com to the right container or namespace. Just keep an eye on Access policies so you don’t accidentally grant more than you intend. When in doubt, start with explicit hostnames and graduate to wildcards once your patterns are stable.
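The wildcard version is a one-line change in ingress, assuming a small router on port 8080 that reads the Host header and dispatches to the right container. One caveat from my experience: the `tunnel route dns` command doesn’t create wildcard records, so you add the `*.preview` CNAME pointing at `<tunnel-id>.cfargotunnel.com` in the DNS dashboard yourself.

```yaml
ingress:
  - hostname: "*.preview.example.com"
    service: http://localhost:8080   # your router maps Host -> container
  - service: http_status:404
```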

If you’re thinking about private network access—where users reach internal IPs like 10.0.0.10 through Zero‑Trust—you’ll bump into the WARP client and some routing magic. It’s a broader topic, but the takeaway is you can stitch your private networks to Cloudflare and present them safely without VPN sprawl. The same Access policies still apply. Identity remains the backbone; the tunnel is just the discreet, encrypted roadway.

Security Posture: What Changes, What Doesn’t

Let’s talk about the mental shift. When you stop opening ports, your attack surface shrinks in a way you can feel. The absence of ambient noise is almost eerie. Your logs stop yelling about opportunistic scans. Your firewall becomes a simple deny‑all with a few egress allowances. There’s still work to do, of course. You patch your boxes. You update cloudflared periodically. You keep origin secrets safe, rotate tokens, and document access. But the rhythm changes from defensive whack‑a‑mole to something calmer and more sustainable.

And for those moments when you’re not using a tunnel—because sometimes you can’t, or because you have a legacy vendor integration that requires direct ingress—remember you can still tighten the screws. Authenticated Origin Pulls, mTLS between edge and origin, rate limits at the edge, and strict firewall rules that only allow Cloudflare IP ranges will keep surprises to a minimum. It’s not all‑or‑nothing. The tunnel is one powerful pattern in a toolbox that gets better the more you use it.

A Small Playbook You Can Copy

Here’s the flow I share with teams that want something safe, fast, and low‑maintenance:

Start with the app on localhost, fully working. Then install cloudflared and create a named tunnel. Map a hostname to your local port, create the DNS route, and verify you can reach it. Add an Access application and start with a simple rule (only your email) until you’re confident. Graduate to your real policy—SSO group, device posture, MFA—once the app feels stable. If you need automation, mint service tokens and test one call by hand before wiring them into CI. If you want extra assurance on the final hop, add a self‑signed cert to your app and teach cloudflared to verify it with caPool. Turn on a second connector for the same tunnel when uptime matters. Finally, write down your two or three troubleshooting steps so anyone on the team can restart the tunnel, check logs, and curl localhost under a little pressure.

And if you prefer official documentation at your elbow—especially for Access policies—keep this bookmark handy: Access policies explained nicely. It’s one of those links I open during workshops because it covers edge cases without getting preachy.

Pricing, Limits, and the Quiet Economics

I won’t wade into line‑item pricing here because it changes over time, but I will share the part that matters in practice: the fewer public edges you run yourself, the fewer moving parts you pay for and maintain. When your only exposed surface is a well‑muscled global edge, your origin spends its days quietly doing origin things. You stop buying bigger firewalls just to feel safe. You stop sprinkling fail2ban on every VM like oregano. And when an auditor asks how the app is reachable, you can point to a simple diagram: browser to Cloudflare, Cloudflare to tunnel over mTLS, tunnel to localhost—with Access policies deciding who even gets to knock.

A Quick Word on Certificates and Custom Domains

One question I hear a lot: “Do I still need to manage certificates?” For the public face of your site, Cloudflare handles TLS at the edge, so you’re covered there. If you do origin TLS between cloudflared and your app, you’ll manage that small internal cert. Some teams roll a tiny internal CA and set a long validity with organized rotation. Others use short‑lived certs via automation. Keep it simple; the internal hop is narrow and well understood. If you’re doing multi‑tenant setups—letting customers bring their own domains or issuing certificates for lots of hostnames—lean on automation. I’ve had great results using DNS‑based ACME flows that issue certs without poking holes in firewalls or relying on flaky HTTP challenges.

Stories From the Field: Why This Sticks

I remember helping a small startup that kept getting caught by surprise in the middle of the night. A staging port was exposed “temporarily,” then forgotten. A bot found it and knocked the instance around until swap got messy. Nothing catastrophic, just exhausting. We moved their staging behind a tunnel and Access in a morning. That night was quiet. Their CTO sent me a screenshot of a Slack channel that used to ping during every scan spike. It was blank. They joked about framing it.

Another client had strict compliance requirements. The auditors asked for proof that external traffic could not reach the database, nor the admin panel, nor the debug endpoints. With tunnels, we showed that the server accepted no inbound connections at all—full stop. Then we walked through the Access logs demonstrating who authenticated, from where, and when. It turned into a pleasant conversation rather than a defensive audit. That’s a nice shift.

Wrap‑Up: The Calm Way to Put Things on the Internet

So, where does this leave us? If you’re tired of juggling firewalls and hoping today isn’t the day a stray port gets scanned into oblivion, Cloudflare Tunnel with cloudflared is a gentle step toward sanity. You keep your origin private. You let a global edge meet the public. You add Access policies that reflect how your team actually works, not how you wish they worked. And you sprinkle in mTLS where it matters, so each piece proves itself before any data moves.

My advice is simple: start small. Pick a non‑critical app and publish it through a tunnel. Add Access for just your email at first. Then dial up the rules you need—SSO, device posture, maybe mTLS for specific clients. If you want extra assurance on the final hop, teach cloudflared to verify a self‑signed origin cert. Once you feel the quiet, you won’t want to go back to open ports. And if you ever need to live in both worlds for a while, that’s fine too. It’s a journey, not a flag you plant in a day.

Hope this was helpful. If you’d like a deeper dive into edge‑to‑origin mTLS without tunnels, take a look at the primer I mentioned above. And if you try this flow, tell me how it goes. I love hearing the “we slept better” stories. See you in the next post.

Frequently Asked Questions

Do I need to open any inbound ports or have a public IP?

Great question! In most cases, no. Your server makes an outbound, mutually authenticated connection to Cloudflare using cloudflared, so you don’t need to expose any inbound ports or rely on a public‑facing IP. DNS points the hostname to the tunnel at Cloudflare’s edge, and traffic rides the secure tunnel back to your app.

Where does mTLS actually apply in this setup?

Here’s the deal: browser traffic to Cloudflare is standard HTTPS, Cloudflare to cloudflared uses mTLS by design, and the final hop from cloudflared to your app can be HTTP or validated HTTPS. For Zero‑Trust Access, you can also require mTLS client certificates for users or devices, adding an extra identity check on top of SSO and MFA.

What’s a sensible way to get started?

I like a gentle rollout. Install cloudflared, create a named tunnel, map a test hostname to your app on localhost, and verify it loads. Then add a basic Access policy that only allows your email. Once that’s smooth, layer in SSO, groups, device posture, or service tokens for automation. Keep logs open while you test—they’ll guide you.