Setting Up Varnish Cache in Front of Nginx/Apache for Serious Performance Gains

If you run a busy WordPress, WooCommerce, Laravel or custom PHP site, you eventually hit the same wall: PHP and the database become the bottleneck. You add OPcache, tune MySQL, maybe even move to a faster NVMe VPS, yet under peak load your application server still struggles. At that point, putting a powerful HTTP cache in front of your web server is often the single biggest win you can get. That is exactly where Varnish Cache shines as a reverse proxy in front of Nginx or Apache.

In this guide we will walk through how Varnish works, how to place it in front of Nginx or Apache on a VPS or dedicated server, and how to tune it so it actually boosts performance for real applications instead of randomly serving stale HTML. We will combine practical configuration snippets with architectural tips, and we will also look at how to measure the real gains in requests per second, TTFB and Core Web Vitals. All examples are written from the perspective of how we design and operate stacks at dchost.com, so you can reuse the same patterns on your own infrastructure.

Why Put Varnish in Front of Nginx or Apache?

Varnish is a high-performance HTTP accelerator designed to sit between clients (browsers, bots, APIs) and your origin web server. Instead of every request hitting PHP and the database, Varnish stores rendered pages in memory and serves them directly until they expire or are purged.

When deployed correctly as a reverse proxy, Varnish can:

  • Increase requests per second by 5–50x for cacheable pages, because responses come directly from RAM instead of PHP/MySQL.
  • Reduce TTFB from 400–800 ms down to 20–50 ms for cached responses.
  • Flatten traffic spikes during campaigns or news events; instead of hammering PHP, thousands of concurrent users are served from cache.
  • Lower CPU and I/O usage on your Nginx/Apache+PHP stack, giving more headroom for dynamic operations like checkout, dashboards or APIs.

We often pair Varnish with other optimizations like correct PHP OPcache settings for WordPress and Laravel, tuned MySQL, and NVMe storage. When that combination is built on a properly sized VPS or dedicated server, the performance jump is dramatic.

How Varnish Works as a Reverse Proxy

Before touching any configuration, it is important to understand how Varnish fits into the request flow and how its caching logic works.

Basic request flow

In a simple Varnish+Nginx setup, the flow typically looks like this:

  • The client sends an HTTP request to your server's IP on port 80.
  • Varnish listens on port 80 and receives the request.
  • Varnish decides whether the request is cacheable, can be served from cache, or must go to the backend.
  • If cached, Varnish serves the response directly from memory.
  • If not cached, Varnish forwards the request to Nginx/Apache (the backend), typically listening on 8080 or 8081.
  • Nginx/Apache processes the request (including PHP, database, etc.) and returns a response to Varnish.
  • Varnish optionally stores the response in cache according to rules (TTL, Cache-Control headers, cookies), then sends it back to the client.

VCL: the brain of Varnish

Varnish is controlled by a configuration language called VCL (Varnish Configuration Language). With VCL you:

  • Define backends (e.g. your Nginx or Apache instance).
  • Decide which URLs, methods and cookies should be cached.
  • Implement cache bypass rules for logged-in users or cart pages.
  • Control TTLs, grace periods, and how to handle errors.

If you are already comfortable writing Nginx configs or Apache vhosts, VCL will feel different but not scary: it is more like a mini-programming language for HTTP caching decisions.

What Varnish does (and does not) do

  • Varnish does cache HTTP responses and handle routing to backends.
  • Varnish does not handle HTTPS/TLS on its own. In production you usually terminate TLS with Nginx/Apache, HAProxy or Hitch and put Varnish behind that, or you put Varnish behind a CDN that does TLS.
  • Varnish is not a WAF or rate limiter; for that we usually rely on Nginx rules, ModSecurity, or external services. See our Cloudflare security settings guide for examples of combining caching with WAF and bot protection.

Planning Your Varnish + Nginx/Apache Architecture

There are a few realistic ways to arrange Varnish with Nginx or Apache. The "right" choice depends on where you terminate HTTPS and whether you also use a CDN.

Scenario 1: Simple HTTP stack on a single VPS

This is the easiest starting point for labs, staging, or internal applications without public TLS:

  • Client → Varnish on port 80
  • Varnish → Nginx/Apache backend on port 8080

Nginx/Apache serves only HTTP internally. You do not expose it to the internet directly; Varnish is the single entry point.

Scenario 2: TLS termination in Nginx, Varnish behind it

Because Varnish does not speak TLS, a very common production pattern is:

  • Client → Nginx (HTTPS 443)
  • Nginx → Varnish (HTTP, local port e.g. 6081)
  • Varnish → Nginx/Apache backend (HTTP 8080)

Nginx handles SSL/TLS (including HTTP/2/3, HSTS, OCSP stapling, Brotli/Gzip compression). Varnish then operates on plain HTTP traffic. You can learn more about the HTTPS side in our guides on HTTP/2 and HTTP/3 and Brotli and Gzip compression settings.
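
As a concrete sketch of Scenario 2, the TLS-terminating Nginx vhost can be as small as this (the certificate paths assume Let's Encrypt, and port 6081 assumes Varnish's Debian/Ubuntu default; adjust both to your setup):

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Hand the decrypted request to Varnish on localhost
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Tell the application the original request was HTTPS
        proxy_set_header X-Forwarded-Proto https;
    }
}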

Scenario 3: CDN in front, Varnish in the origin

For high-traffic sites we frequently see:

  • Client → CDN (TLS, HTTP/2/3)
  • CDN → Varnish (HTTP or HTTPS, depending on setup)
  • Varnish → Nginx/Apache backend

The CDN offloads static assets and some HTML caching at the edge, while Varnish acts as an origin cache layer shielding your app servers. This approach is common for multi-region architectures or when you also use GeoDNS and multi-region hosting.

Ports and processes

A typical single-server layout for Varnish in front of Nginx looks like this:

  • Varnish listening on 80
  • Nginx listening on 8080 (HTTP) and maybe 443 (HTTPS) if you terminate TLS there as well

For Apache, you do the same: move Apache from 80/443 to 8080 (and optionally 8443) and let Varnish take 80.
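
Whichever layout you choose, verify which process owns which port before and after the switch; ss makes this quick:

# List listening TCP sockets together with the owning process
sudo ss -ltnp | grep -E ':(80|443|6081|8080)\b'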

Installing Varnish on a Typical Linux VPS

The example below assumes an Ubuntu/Debian VPS or dedicated server from dchost.com or a similar environment where you have root SSH access. The general steps are similar on AlmaLinux/Rocky with yum/dnf packages.

Step 1: Install Varnish

sudo apt update
sudo apt install varnish

On Debian/Ubuntu this will install Varnish and configure it to listen on port 6081 by default.
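
Before changing anything, confirm the installed version and that the service is running:

varnishd -V
sudo systemctl status varnish --no-pager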

Step 2: Reconfigure Nginx to run on 8080

Edit your Nginx site configuration, for example:

sudo nano /etc/nginx/sites-available/default

Change:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    ...
}

to:

server {
    listen 8080 default_server;
    listen [::]:8080 default_server;
    ...
}

Then reload Nginx:

sudo nginx -t
sudo systemctl reload nginx

Step 3: Point Varnish to Nginx as backend

Edit the main VCL file (path may vary slightly by distribution):

sudo nano /etc/varnish/default.vcl

Set up a backend definition:

vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

This tells Varnish to forward all cache misses to Nginx running on localhost:8080.
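
Optionally, you can attach a health probe so Varnish notices when the backend goes down, which pairs well with the grace behavior covered later. A sketch with conservative values; point .url at a cheap endpoint your app actually serves:

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = {
        .url = "/";         # endpoint to poll
        .interval = 5s;     # how often to check
        .timeout = 2s;      # how long to wait for a response
        .window = 5;        # how many recent checks to consider
        .threshold = 3;     # how many of those must succeed to stay healthy
    }
}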

Step 4: Make Varnish listen on port 80

On Debian/Ubuntu, Varnish's listening port is controlled by the ExecStart line in the systemd unit (usually /lib/systemd/system/varnish.service). Rather than editing the packaged file in place, open a copy of it through systemd with sudo systemctl edit --full varnish so your change survives package upgrades.

Look for a line similar to:

ExecStart=/usr/sbin/varnishd -a :6081 -T localhost:6082 -f /etc/varnish/default.vcl ...

Change -a :6081 to -a :80:

ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl ...
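
Alternatively, keep the change in a drop-in override. A minimal sketch, assuming the Debian default arguments (including the 256 MB malloc cache, which you should size for your own content):

# /etc/systemd/system/varnish.service.d/override.conf
# (create and edit with: sudo systemctl edit varnish)
[Service]
# ExecStart must be emptied before it can be redefined
ExecStart=
ExecStart=/usr/sbin/varnishd -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -s malloc,256m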

Reload systemd and restart Varnish:

sudo systemctl daemon-reload
sudo systemctl restart varnish

Now Varnish is on port 80, talking to Nginx on 8080. Test with:

curl -I http://your-domain.com

You should see headers like Via: 1.1 varnish and X-Varnish:, which confirm that Varnish is in the chain.
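
Seeing Varnish in the chain is not the same as getting cache hits. Request the same URL twice and watch the Age header: on a hit it is greater than zero, and X-Varnish carries two transaction IDs (the current request plus the one that populated the cache):

curl -sI http://your-domain.com/ | grep -Ei '^(age|x-varnish|via):'
curl -sI http://your-domain.com/ | grep -Ei '^(age|x-varnish|via):'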

Example with Apache as backend

With Apache, the concept is the same: change Apache to listen on 8080 and let Varnish take port 80.

sudo nano /etc/apache2/ports.conf

Modify:

Listen 80

to:

Listen 8080

Then update your virtual hosts in /etc/apache2/sites-available/*.conf from <VirtualHost *:80> to <VirtualHost *:8080>, restart Apache, and configure Varnish's backend to use port 8080 as shown earlier.
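
One Apache-specific detail worth handling straight away: behind Varnish, Apache sees every request coming from 127.0.0.1. Varnish appends the real client address to X-Forwarded-For by default, and mod_remoteip can restore it; a minimal sketch (the config filename is arbitrary):

# /etc/apache2/conf-available/varnish-remoteip.conf
RemoteIPHeader X-Forwarded-For
RemoteIPInternalProxy 127.0.0.1

Enable it with sudo a2enmod remoteip, sudo a2enconf varnish-remoteip and an Apache restart; logs and your application (e.g. REMOTE_ADDR in PHP) will then see real visitor IPs again.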

Tuning Varnish for Real-World Apps

Simply installing Varnish in front of Nginx/Apache does not guarantee great performance. The real gains come from dialing in your cache rules so that:

  • Anonymous traffic is cached aggressively.
  • Logged-in sessions and carts are not cached (or are microcached safely).
  • Static assets are either served directly by Nginx or cached long-term by Varnish/CDN.

We already covered full-page caching strategies in our article Full-page caching for WordPress without breaking WooCommerce; here we'll focus on the Varnish-specific side.

Basic cache rules in VCL

Extend your default.vcl with caching decisions. A common pattern is:

sub vcl_recv {
    # Only cache GET and HEAD
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }

    # Don't cache admin or login paths
    if (req.url ~ "wp-admin" || req.url ~ "wp-login.php") {
        return (pass);
    }

    # Don't cache when user is logged in (WordPress example)
    if (req.http.Cookie ~ "wordpress_logged_in_") {
        return (pass);
    }
}

sub vcl_backend_response {
    # Only cache HTTP 200 and 301
    if (beresp.status == 200 || beresp.status == 301) {
        # Set default TTL if backend didn't specify
        if (beresp.ttl <= 0s) {
            set beresp.ttl = 120s;
        }
    } else {
        set beresp.ttl = 0s;
    }
}

This is intentionally simple but already protects admin/logged-in users while caching anonymous pages for 2 minutes.
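
While tuning these rules, it helps to see hit/miss status directly in curl or the browser. A common debugging pattern in vcl_deliver (consider removing it in production if you prefer not to expose cache state):

sub vcl_deliver {
    # obj.hits counts how many times this object has been served from cache
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
    } else {
        set resp.http.X-Cache = "MISS";
    }
}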

Microcaching vs long TTLs

For highly dynamic sites (news homepages, fast-changing e-commerce), we often start with microcaching: storing pages for 1–10 seconds. That may sound tiny, but it dramatically reduces the load when many users hit the same URLs simultaneously.

Example (these lines go inside vcl_backend_response):

set beresp.ttl = 5s;    # cache each page for 5 seconds
set beresp.grace = 30s; # keep serving stale copies for up to 30s while refreshing

The grace setting allows Varnish to keep serving slightly stale content when the backend is slow or temporarily failing, refreshing the cache in the background. This is very effective during short spikes or brief backend hiccups.

Handling cookies correctly

Cookies are often the biggest enemy of caching. Many CMS and e-commerce platforms set unnecessary cookies for analytics or A/B testing, which can cause Varnish to bypass the cache entirely (the built-in VCL passes any request carrying a Cookie header) or, if you hash on cookies, to create separate cache entries per cookie value.

In VCL, you can remove unnecessary cookies for anonymous users:

sub vcl_recv {
    # ... existing rules ...

    # For anonymous users, strip analytics cookies so responses can be cached
    if (!(req.http.Cookie ~ "wordpress_logged_in_")) {
        set req.http.Cookie = regsuball(req.http.Cookie,
            "(?i)(^|;\s*)(_ga|_gid|_fbp)=[^;]*", "");
        # Remove the Cookie header entirely if nothing meaningful is left
        if (req.http.Cookie ~ "^\s*$") {
            unset req.http.Cookie;
        }
    }
}

The exact cookie names will depend on your stack, but the idea is to keep only the cookies that truly matter for page variation or sessions.

Bypassing sensitive paths

Always bypass caching on:

  • Login pages
  • Cart/checkout steps
  • Account dashboards
  • APIs that must always be fresh (e.g. stock levels, personalized feeds)

You can do this by matching URL path patterns in vcl_recv and returning pass early:

if (req.url ~ "/cart" || req.url ~ "/checkout") {
    return (pass);
}

Static assets: let Nginx shine

Although Varnish can cache static assets, we often prefer to let Nginx handle them directly with long Cache-Control headers and optionally a CDN in front. Nginx is excellent at serving static files, especially when combined with Brotli or Gzip compression.

One practical model:

  • CDN edge caches images, CSS, JS for days or weeks.
  • Varnish focuses on HTML pages and maybe some JSON endpoints.
  • Nginx/Apache serves static files as the ultimate origin when needed.
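
For the Nginx side of that model, a typical long-lived caching block might look like this (extensions and lifetime are illustrative; 2592000 seconds is 30 days):

location ~* \.(css|js|png|jpg|jpeg|gif|webp|svg|woff2)$ {
    # Let browsers and CDNs cache static assets freely
    add_header Cache-Control "public, max-age=2592000, immutable";
    access_log off;
}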

Measuring and Proving Performance Gains

It is tempting to declare success as soon as you see Via: 1.1 varnish in response headers, but you will only know how much Varnish actually helps if you measure it carefully.

1. Baseline your site before enabling Varnish

Before you change anything, gather baseline numbers:

  • TTFB and LCP from PageSpeed Insights, WebPageTest or Lighthouse.
  • Average response time and RPS from a simple load test (k6, JMeter, wrk).
  • CPU/RAM usage on your VPS or dedicated server under moderate load.

If you need a refresher on proper speed testing, see our article how to properly test your website speed.
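
A quick way to sample TTFB from the shell is curl's write-out timers; run it several times and take the median (time_starttransfer is effectively TTFB):

curl -o /dev/null -s -w "connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" https://your-domain.com/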

2. Check cache hit ratio

Varnish exposes useful statistics through varnishstat and varnishlog. A key metric is cache hit ratio (hits vs misses) for your main pages.

sudo varnishstat -1 | egrep "MAIN.cache_hit|MAIN.cache_miss"

Over time, you want to see hits grow much faster than misses for anonymous page requests. If your hit ratio stays low, most likely:

  • Cookies are preventing caching.
  • Your TTLs are too short, or Cache-Control headers from the backend forbid caching.
  • You are bypassing too many URLs in VCL.
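
To find out which of these is happening, varnishlog and varnishtop let you inspect live traffic. For example, assuming the homepage is one of your hottest URLs:

# Full transaction log for homepage requests only
sudo varnishlog -q 'ReqURL eq "/"'

# Most frequently requested URLs (good for spotting cache-busting query strings)
sudo varnishtop -i ReqURL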

3. Load-test with and without Varnish

On a staging environment, you can easily compare:

  • Direct Nginx/Apache on 8080.
  • Varnish-fronted stack on 80.

Use a load-testing tool (for example k6, which we covered in our guide on load testing your hosting before traffic spikes) and run identical scenarios:

  • Same URLs
  • Same number of virtual users
  • Same duration
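
A minimal command-line comparison with wrk might look like this (a sketch: it assumes wrk is installed and is run on the server itself so network latency does not dominate; substitute your real URLs):

# Direct to the backend on 8080
wrk -t4 -c100 -d60s http://127.0.0.1:8080/

# Through Varnish on port 80
wrk -t4 -c100 -d60s http://127.0.0.1/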

You should see:

  • Significantly higher requests per second with Varnish.
  • Lower and more stable average/95th percentile response times.
  • Lower CPU and IO wait on the origin server.

4. Watch Core Web Vitals

Core Web Vitals (LCP, INP, CLS) are influenced by many factors, but TTFB is a big component of LCP. With a warm Varnish cache and solid server-side tuning (PHP-FPM, OPcache, database), it becomes much easier to keep LCP within Google's recommended thresholds, especially on mobile connections.

We dive deeper into the hosting side of Core Web Vitals in our article how server choices impact TTFB, LCP and CLS.

When Varnish Is (and Isn't) a Good Idea

Varnish is powerful, but it is not a silver bullet for every workload. Here is a realistic view of where it shines and where it might add unnecessary complexity.

Great use cases for Varnish

  • Content-heavy WordPress/blog/news sites with mostly public content. Full-page caching can often handle 90–99% of requests.
  • Marketing and landing-page sites where content changes only at deploys or a few times per day.
  • Hybrid WooCommerce stores where category/product pages are cached for anonymous users, and only cart/checkout remain dynamic.
  • Multi-tenant apps with shared templates but different hostnames, where Varnish can cache per hostname.

Less ideal or advanced scenarios

  • Heavily personalized dashboards where almost every element is user-specific.
  • Real-time applications relying on WebSockets or long-lived HTTP connections (chat, streaming data, etc.). Here, using Varnish requires careful exception rules or a different architecture.
  • Complex APIs where clients depend on real-time responses. Microcaching can help but must be done very carefully.

In these cases, start by tuning PHP-FPM, OPcache, and your database first, and consider alternatives like Nginx microcaching or Redis object cache before adding a full Varnish layer.

Putting It All Together on dchost.com Infrastructure

On our side, when a customer on a VPS, dedicated server or colocation setup at dchost.com asks for "serious speed" for a PHP-based site, our typical playbook looks like this:

  1. Right-size the server (vCPU, RAM, NVMe storage) for the expected traffic and database size.
  2. Install and tune PHP-FPM and OPcache for the specific application (WordPress, WooCommerce, Laravel, custom PHP).
  3. Optimize MySQL/MariaDB or PostgreSQL, sometimes adding replication if the workload requires it.
  4. Configure Nginx or Apache properly (keep-alive, compression, HTTP/2/3 where relevant).
  5. Add Varnish as a reverse proxy in front of Nginx/Apache for full-page caching of anonymous traffic.
  6. Optionally place a CDN/WAF in front for global reach and additional protection.

By layering Varnish on top of a cleanly tuned origin stack, we avoid treating caching as a band-aid and instead use it as a force multiplier.

Summary and Next Steps

Varnish Cache, when deployed as a reverse proxy in front of Nginx or Apache, can transform how your infrastructure behaves under load. Instead of every user hitting PHP and the database, you serve most of your traffic directly from memory with sub-50 ms TTFB, while your origin servers focus on the truly dynamic and personalized parts of your application.

The key steps are straightforward: move Nginx/Apache to an internal port, point Varnish at it as a backend, tune your VCL to cache anonymous traffic while bypassing sensitive paths and sessions, then measure the gains with real load tests and Core Web Vitals. Combined with proper PHP, database and TLS tuning, you can comfortably absorb traffic spikes and grow without constantly fighting CPU and IO limits.

If you are planning a new project or outgrowing shared hosting, our team at dchost.com can help you design a VPS, dedicated server or colocation architecture that makes the most of Varnish, Nginx/Apache and your application stack. Whether you want a simple single-server setup or a multi-region origin with CDN and advanced caching rules, you can build on the patterns we have outlined here and adapt them to your own workloads.

Frequently Asked Questions

Do I still need a CDN if I use Varnish?

Varnish and a CDN solve different but complementary problems. Varnish accelerates traffic close to your application servers, acting as an origin shield and dramatically reducing load on Nginx/Apache and PHP. A CDN distributes cached content across multiple edge locations worldwide, reducing latency for visitors far from your data center and protecting your origin from global spikes. For small, regional sites, Varnish alone on a well‑sized VPS may be enough. For international audiences or very high traffic, combining Varnish at the origin with a CDN in front usually delivers the best mix of speed, resilience and bandwidth savings.

Can I use Varnish with WooCommerce or other e-commerce platforms?

Yes, but you must be very careful about what you cache. The general pattern is: fully cache public pages (home, categories, product details) for anonymous users, and bypass cache on login, cart, checkout and account pages. You also need to bypass for logged‑in users by checking session cookies in VCL (for example, "wordpress_logged_in_" for WooCommerce). With strict rules, Varnish can dramatically reduce load on busy product pages without ever caching personalized carts or payment steps. Our experience is that this hybrid model works well for most stores when paired with proper TLS, database, and PHP tuning.

How does Varnish handle HTTPS and TLS?

Varnish itself speaks plain HTTP, so you need to terminate TLS before traffic reaches Varnish. In practice, this means either placing Nginx or Apache in front of Varnish to handle HTTPS and HTTP/2/3, or using a dedicated TLS terminator like Hitch or HAProxy, or a CDN that does TLS at the edge and talks HTTP to your origin. A common pattern is: client → Nginx (HTTPS) → Varnish (HTTP) → Nginx/Apache backend (HTTP). Nginx manages SSL certificates, HSTS and OCSP stapling, while Varnish focuses purely on caching and backend routing.

How much RAM and CPU does Varnish need?

Varnish is very memory‑centric: it performs best when most frequently requested objects fit in RAM. For small sites, 1–2 GB of RAM dedicated to Varnish can already deliver big gains. For larger news portals or e‑commerce catalogs, allocating more memory (and using fast NVMe storage for the OS and logs) helps maintain a high cache hit ratio. CPU requirements are usually modest compared to PHP and databases, but network bandwidth and low IO wait are important. On dchost.com VPS and dedicated servers, we typically size RAM for both the application stack and a generous Varnish cache, then validate with real load tests.

Should I use Nginx microcaching instead of Varnish?

Both approaches can deliver excellent results; they just live at different layers. Nginx microcaching stores responses for a few seconds directly in Nginx, which is simple to configure and great for quick wins. Varnish is a dedicated HTTP accelerator with a richer configuration language (VCL), better observability tools, and more advanced cache policies (grace, saint mode, fine‑grained cookie handling). For smaller setups, Nginx microcaching may be enough. As your traffic grows or your cache rules become more complex, Varnish usually offers more control and scalability while offloading work from your web server.