
Nginx Reverse Proxy and Simple Load Balancer Setup for Small Projects

For many small projects, the first deployment runs on a single VPS: web server, application, and database all on one machine. It works well at the start, but as traffic grows or you add more services (API, admin panel, background workers), things become harder to manage. SSL certificates are scattered, firewall rules get complex, and you have no easy way to add a second application server when you need more capacity. This is exactly where a lightweight Nginx reverse proxy and simple load balancer architecture solves real, everyday problems without forcing you into heavyweight, enterprise-style setups.

In this guide, we will walk through a practical, step-by-step Nginx configuration that we use frequently for dchost.com customers running small SaaS apps, WooCommerce stores, landing pages, or internal tools. We will keep the design intentionally simple: one front-facing Nginx reverse proxy and one or more backend application servers. You will learn why this pattern works so well for small projects, how to configure it on a VPS, and how to extend it into a basic load balancer when you need more performance or redundancy.

What an Nginx Reverse Proxy and Load Balancer Actually Do for You

Before touching any configuration files, it helps to be very clear about what role Nginx will play in this architecture. On your front server, Nginx will act as:

  • Reverse proxy: Accepts HTTP/HTTPS requests from the internet and forwards them to one or more internal application servers.
  • SSL terminator: Handles TLS/SSL certificates, so backends can speak plain HTTP on a private network.
  • Router: Sends different paths or hostnames to different backends (e.g. /api to one server, /app to another).
  • Simple load balancer: Distributes requests across multiple backend servers using built-in algorithms.

This architecture gives you several concrete benefits:

  • Centralised SSL management: You only manage certificates on the front Nginx server, instead of repeating the work on each app box.
  • Cleaner deployments: You can upgrade or replace backend servers without changing public DNS or touching client-side configs.
  • Easier scaling: When CPU or RAM becomes tight, you can add a second backend and let Nginx distribute the load.
  • Better separation of concerns: The front server focuses on HTTP, security headers, and caching; backend servers focus on PHP, Node.js, or other application runtimes.

If you are already using Nginx directly on a single VPS for your app, this guide will simply move that Nginx layer to a dedicated front server and wire it up cleanly to your backends.

Reference Architecture for Small Projects

Let’s define a concrete, minimal architecture we’ll build in this article.

  • Front VPS: Public-facing Nginx reverse proxy and load balancer. Has a public IP and DNS A/AAAA records for your domain. Handles SSL.
  • Backend VPS #1: Runs your main application (e.g. PHP-FPM, Node.js, Python, Ruby). Accessible from the front VPS over a private network or firewall-restricted public IP.
  • Optional Backend VPS #2: Identical or similar app server to share load or act as failover.
  • Database server: May live on Backend #1 for very small setups, or on a separate VPS/dedicated server when you grow.

At dchost.com we see this pattern a lot for:

  • New SaaS products that need a clean path to scale without re-architecture.
  • WooCommerce or custom e‑commerce sites preparing for future traffic spikes.
  • Internal dashboards and APIs where you want to isolate public traffic from the app machines.

If you are still at the “one VPS for everything” stage, you can apply the same concepts on a single machine (Nginx reverse proxy in front of multiple local services). Later, migrating to separate backend servers becomes much easier.
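
For example, on a single machine the “backends” are simply services listening on local ports; a minimal sketch (the ports are illustrative):

upstream local_app {
    server 127.0.0.1:8000;   # main application
}

upstream local_api {
    server 127.0.0.1:3000;   # API service
}

The server blocks shown later in this guide work the same way; only the upstream addresses change when you later split services onto separate machines.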

Preparing Your Servers: OS, Security and DNS

We will assume Ubuntu 22.04 or Debian 12 on all servers, but the Nginx configuration is almost identical on other modern Linux distributions.

1. Provision your servers

You will need at least:

  • 1 VPS for the front Nginx reverse proxy
  • 1 VPS for the backend application, ideally in the same data center or region for low latency

For most small projects, we’ve found the following to be a good starting point:

  • Front Nginx VPS: 1–2 vCPU, 1–2 GB RAM
  • Backend VPS: 2–4 vCPU, 4–8 GB RAM (depending on language/runtime and expected traffic)

You can adjust these based on your actual usage; our article on how to estimate CPU, RAM and bandwidth for a new website is a helpful reference when sizing.

2. Secure the basics on each VPS

Before opening ports to the world, make sure you have the fundamentals in place:

  • System fully updated (apt update && apt upgrade)
  • Non-root user with sudo access
  • SSH key authentication and password login disabled (or at least restricted)
  • Firewall allowing only necessary ports (80/443 on the front Nginx; app-specific ports between servers only)

If you want a concrete checklist, see our detailed post on what to do in the first 24 hours on a new VPS. For a deeper dive into hardening, we also maintain a friendly guide on securing a VPS server without leaving doors open.
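
As a concrete illustration of the firewall item above, here is a minimal UFW setup (a sketch; the backend port 8000 and the front proxy IP 203.0.113.10 are placeholders for your own values):

# On the front Nginx VPS
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

# On the backend VPS: SSH plus the app port, restricted to the front proxy
sudo ufw allow OpenSSH
sudo ufw allow from 203.0.113.10 to any port 8000 proto tcp
sudo ufw enable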

3. Configure DNS

Point your domain (or subdomain) to the front Nginx VPS:

  • Create an A record (and AAAA if you use IPv6) for example.com and/or www.example.com to the front VPS IP.
  • Backend servers do not need public DNS records; they can remain internal.

Once DNS points to the front server, all incoming traffic will be routed via Nginx and then proxied to your backends.
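
Before moving on, you can confirm the records resolve to the front server (example.com is a placeholder):

dig +short A example.com
dig +short AAAA example.com

Both commands should return the front VPS addresses; if they return nothing or an old IP, wait for DNS propagation before requesting certificates.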

Step‑by‑Step: Nginx as a Reverse Proxy to a Single Backend

First, we will configure Nginx to proxy traffic from the public internet to one backend application server. Later, we will turn this into a simple load balancer by adding more servers to the same upstream block.

1. Install Nginx on the front VPS

sudo apt update
sudo apt install nginx -y

Enable and start Nginx if it’s not already running:

sudo systemctl enable nginx
sudo systemctl start nginx

You should now see the default Nginx welcome page when visiting your server IP in a browser.

2. Define your backend upstream

Assume your backend application is running on 10.0.0.10:8000 (private network) or 192.0.2.11:8000 (public but firewalled to only accept connections from the front Nginx IP).

Create a new configuration file on the front server, for example:

sudo nano /etc/nginx/conf.d/app_upstream.conf

Add:

upstream app_backend {
    server 10.0.0.10:8000;
}

This upstream block defines a logical name (app_backend) that points to your application server. Nginx will use it when proxying requests.
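
Optionally, you can also keep a small pool of idle connections open towards the backend. This pairs with the proxy_http_version 1.1 and empty Connection header we set in the next step (a sketch; 16 is just a reasonable starting value):

upstream app_backend {
    server 10.0.0.10:8000;
    keepalive 16;   # idle keepalive connections cached per worker process
}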

3. Create the reverse proxy server block

Now create the server blocks: one that listens on port 80 (HTTP) and redirects to HTTPS, and one that listens on port 443 and proxies traffic to app_backend. The certificate paths referenced below do not exist yet; we will obtain them with Let’s Encrypt in the next step.

sudo nano /etc/nginx/sites-available/example.com

Paste:

server {
    listen 80;
    server_name example.com www.example.com;

    # Redirect plain HTTP to HTTPS (certificates are obtained in the next step)
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # SSL certificates will be added later
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Basic security & proxy headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
    }
}

For now, Nginx will fail to reload because the certificate paths do not exist yet. We’ll fix that in the next step.

If you want a deeper dive into HTTP security headers and why settings like X-Frame-Options and HSTS matter, you can read our dedicated guide on HTTP security headers and how to configure them safely.
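
For example, once you are sure the whole site (including subdomains, if you use includeSubDomains) works over HTTPS, you might add an HSTS header inside the 443 server block (a sketch; browsers cache this for the full max-age, so enable it deliberately):

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;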

4. Obtain a free Let’s Encrypt SSL certificate

Install Certbot and the Nginx plugin:

sudo apt install certbot python3-certbot-nginx -y

Run Certbot to obtain and configure the certificate:

sudo certbot --nginx -d example.com -d www.example.com

Certbot will configure SSL directives for you. If you prefer to keep your own server block layout, you can simply point the ssl_certificate and ssl_certificate_key to the paths Certbot creates under /etc/letsencrypt/live/.
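
On Ubuntu and Debian, the Certbot package also sets up automatic renewal (via a systemd timer or cron job). You can confirm renewal works with a dry run:

sudo certbot renew --dry-run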

Reload Nginx:

sudo nginx -t
sudo systemctl reload nginx

Your domain should now serve HTTPS traffic, and Nginx will proxy all requests to http://app_backend, which points to your backend server.

5. Configure the backend application to trust proxy headers

Most frameworks (Laravel, Symfony, Django, Express, Rails, etc.) need to be told which headers to trust so they can correctly detect the client IP and HTTPS status.

  • For PHP/Laravel, ensure APP_URL uses https:// and set trusted proxies (e.g. in Laravel’s TrustProxies middleware).
  • For Node.js/Express, set app.set('trust proxy', true) so it respects X-Forwarded-Proto and X-Forwarded-For.

Correct proxy configuration avoids issues like infinite redirect loops or all visitors appearing to have the same IP address (the Nginx server’s IP).

Turning the Reverse Proxy into a Simple Load Balancer

Once the reverse proxy works with a single backend, turning it into a basic load balancer is surprisingly easy: you simply add more server lines inside the same upstream block.

1. Add multiple backend servers to the upstream

Edit the upstream definition:

sudo nano /etc/nginx/conf.d/app_upstream.conf

Change:

upstream app_backend {
    server 10.0.0.10:8000;
}

To:

upstream app_backend {
    server 10.0.0.10:8000 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:8000 max_fails=3 fail_timeout=30s;
}

By default, Nginx uses round-robin load balancing: each new request is sent to the next server in the list. The max_fails and fail_timeout parameters provide a simple passive health check: if connections to a server fail max_fails times within the fail_timeout window, Nginx temporarily stops sending traffic to it.
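
If your backends are not identical, you can also weight them so the larger machine receives a bigger share of requests (a sketch, assuming the first server has roughly twice the capacity of the second):

upstream app_backend {
    server 10.0.0.10:8000 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:8000 weight=1 max_fails=3 fail_timeout=30s;
}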

2. Optional: Choose a different load balancing method

Nginx supports several built-in algorithms for distributing requests:

  • Round-robin (default): Evenly rotates through servers. Good default choice.
  • least_conn: Sends new requests to the server with the fewest active connections. Useful for long-running requests (e.g. file uploads).
  • ip_hash: Keeps the same client IP bound to the same backend, which can help with basic session affinity (sticky sessions).

To enable least_conn:

upstream app_backend {
    least_conn;
    server 10.0.0.10:8000 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:8000 max_fails=3 fail_timeout=30s;
}

To enable IP-based sticky sessions with ip_hash:

upstream app_backend {
    ip_hash;
    server 10.0.0.10:8000 max_fails=3 fail_timeout=30s;
    server 10.0.0.11:8000 max_fails=3 fail_timeout=30s;
}

Note that ip_hash has some limitations (e.g. many clients behind the same NAT or corporate proxy all hash to one backend, and adding or removing servers re-shuffles the mapping), but for small projects it is often enough to satisfy basic session stickiness requirements.

3. Reload Nginx and verify

Test and reload:

sudo nginx -t
sudo systemctl reload nginx

You can verify that traffic is hitting both backends by checking application logs on each backend server, or by exposing a simple status endpoint that shows which instance handled the request.
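
One quick way to see the distribution from the outside is to expose which upstream handled each request in a temporary debug header on the front Nginx (X-Upstream is our own name for it, not a standard header):

# Inside the proxied location block; remove once you are done testing
add_header X-Upstream $upstream_addr always;

Then, after a reload, a short loop from your workstation should show the two backend addresses alternating:

for i in $(seq 1 6); do curl -sI https://example.com | grep -i x-upstream; done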

4. Dealing with sticky sessions (login, carts, etc.)

If your application relies heavily on in-memory sessions (for example, PHP sessions stored on local disk or Node.js sessions in memory), round-robin balancing can cause “random logouts” or inconsistent cart behaviour. To avoid this:

  • Use shared session storage (Redis, database) so any backend can handle any user.
  • Or use sticky sessions with ip_hash in Nginx, understanding it’s a basic solution tied to client IP.

For production WooCommerce or complex carts, we strongly recommend shared session storage plus caching. Our post on containerising WordPress with Nginx reverse proxy in front and our various WooCommerce performance guides go deeper into how to design these layers cleanly.

Routing Multiple Apps Behind One Nginx Reverse Proxy

A big advantage of this pattern is that you can host multiple services behind the same front Nginx, each on its own backend or port. Common scenarios:

  • SPA + API: A Vue/React frontend on one backend and an API (Laravel, Node.js) on another.
  • Admin vs public site: Admin panel and public site on separate servers for security or performance reasons.
  • Legacy + new app: Old application and new microservice running side by side.

1. Path-based routing

Example: / goes to the main PHP app, /api/ goes to a Node.js API.

upstream php_app {
    # HTTP port of the backend web server (e.g. an Nginx vhost in front of PHP-FPM)
    server 10.0.0.10:8080;
}

upstream node_api {
    server 10.0.0.20:3000;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    # SSL config omitted for brevity

    location /api/ {
        proxy_pass http://node_api/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location / {
        proxy_pass http://php_app/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Note that the trailing slash in proxy_pass http://node_api/; makes Nginx strip the /api/ prefix before forwarding, so the Node.js service sees /users rather than /api/users; remove the trailing slash if your API routes expect the full path. This pattern is very common for SPAs calling an internal API. If you are curious about the benefits of hosting the SPA and API on the same domain name, we explored this in detail in our article on hosting single-page applications and APIs under one domain with Nginx routing.

2. Hostname-based routing

You can also route based on hostnames instead of paths. For example, app.example.com and api.example.com can each have their own server block with different proxy_pass targets, while still using the same front Nginx instance and IP address.
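
A minimal sketch reusing the upstreams defined above (certificate paths and most proxy headers are omitted for brevity):

server {
    listen 443 ssl http2;
    server_name app.example.com;
    # ssl_certificate / ssl_certificate_key for app.example.com

    location / {
        proxy_pass http://php_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 443 ssl http2;
    server_name api.example.com;
    # ssl_certificate / ssl_certificate_key for api.example.com

    location / {
        proxy_pass http://node_api;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}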

Performance Tuning: Timeouts, Caching and Logs

Even in a simple architecture, a bit of tuning goes a long way. Here are some practical settings we apply frequently for small projects.

1. Reasonable timeouts

Default Nginx timeouts are often too high or too low for real workloads. Some useful directives in your location block:

proxy_connect_timeout   5s;
proxy_send_timeout      60s;
proxy_read_timeout      60s;
send_timeout            60s;

If your app regularly needs longer than 60 seconds to respond, it’s usually better to move heavy work to background jobs rather than just increasing timeouts. Our write-up on why background jobs matter so much on a VPS gives practical patterns for moving slow tasks out of the request/response path.

2. Microcaching for dynamic sites

One of the most powerful tricks on small Nginx-based stacks is microcaching: caching dynamic responses for just 1–5 seconds. For many workloads (news pages, product listings, homepages), this can cut backend load by 50–90% with almost no risk of serving stale content for too long.

At a high level, you define a small cache zone and enable it on the relevant locations:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1g inactive=60s use_temp_path=off;

server {
    # ...
    location / {
        proxy_cache microcache;
        proxy_cache_valid 200 1s;
        proxy_cache_valid 301 302 10s;
        proxy_cache_valid any 0s;
        add_header X-Cache-Status $upstream_cache_status;

        proxy_pass http://app_backend;
        # proxy headers
    }
}

We have a full, dedicated guide on this pattern in our article about Nginx microcaching and how 1–5 second caches can make PHP apps feel instant. It includes details on cache bypassing, purging, and avoiding issues with logged-in users.
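
As a small taste of the bypass idea mentioned there, you can skip the microcache for requests that carry a session or login cookie (the cookie names below are examples; adjust them to your application):

# Inside the location block that enables microcaching
set $skip_cache 0;
if ($http_cookie ~* "PHPSESSID|wordpress_logged_in") {
    set $skip_cache 1;
}
proxy_cache_bypass $skip_cache;
proxy_no_cache     $skip_cache;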

3. Logging and observability

Nginx access and error logs are your first line of insight into what’s happening:

  • Monitor /var/log/nginx/access.log for status codes, response times, peaks.
  • Monitor /var/log/nginx/error.log for upstream timeouts, connection errors, and misconfigurations.

Add $upstream_response_time and $request_time to your log format to see how much time is spent in Nginx vs the backend. For larger setups, we often ship these logs into a central system (Loki, ELK, etc.), but for small projects, even basic log review plus tools like goaccess or simple shell filters can reveal a lot.
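
For example, in the http block you can define a timing-aware log format and point your access log at it (the name timed is our own choice):

log_format timed '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
                 'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/access.log timed;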

If you want to understand web server logs more deeply, we covered this step-by-step in our guide to reading web server logs and diagnosing 4xx–5xx errors on Apache and Nginx.

Hosting Different Stacks Behind Nginx

One strength of this architecture is that Nginx doesn’t care what technology your backend uses—as long as it speaks HTTP. A few concrete examples we see often at dchost.com:

  • PHP (Laravel, Symfony, WordPress): The front Nginx proxies HTTP to a lightweight Nginx (or Apache) vhost on the backend VPS, which hands requests to PHP-FPM over FastCGI.
  • Node.js (Express, NestJS, Next.js): Nginx proxies to one or more Node.js processes managed with PM2 or systemd.
  • Python (Django, Flask, FastAPI): Nginx proxies to Gunicorn/Uvicorn workers listening on the backend’s private interface (or bound to localhost behind a small local proxy).

For a detailed, real-world Node.js example behind Nginx, you can check our article on hosting Node.js in production with PM2, Nginx, SSL, and zero‑downtime deploys. The reverse proxy layer is the same pattern we’ve been describing here.

When to Evolve Beyond This Simple Architecture

The Nginx reverse proxy + simple load balancer pattern scales surprisingly far for small and medium projects, but there are clear signs when you should consider the next step:

  • Single front VPS becomes a bottleneck: CPU, RAM, or network usage on the Nginx front server stays high, even after tuning and microcaching.
  • Need for high availability: You cannot accept a single point of failure on the front proxy, and you want automatic failover between multiple front nodes.
  • Complex routing and security rules: Many apps, custom WAF rules, geo‑routing, or advanced rate limiting might require a more specialised setup.
  • Multi‑region or multi‑data center deployments: You want users automatically routed to the closest region with DNS‑level balancing.

At that stage, you might introduce a second Nginx front with anycast or DNS failover, or look at dedicated load balancers or Kubernetes-based approaches. We also help customers move from classic VPS stacks into more advanced clusters when the time is right, without forcing premature complexity on small projects.

Summary and How dchost.com Fits In

A dedicated Nginx reverse proxy and simple load balancer in front of your application servers is one of those architectures that “just works” for a long time. You centralise SSL, control routing in one place, gain the ability to add or remove backend servers, and open the door to microcaching and fine-grained security headers—all while keeping the design understandable for a small team.

The steps we covered—preparing secure VPS servers, defining upstream blocks, configuring reverse proxy server blocks, adding basic load balancing, and adding small performance optimisations—are exactly what we apply day-to-day for small businesses and SaaS projects hosted on dchost.com. You can start with a single VPS, split out Nginx to a front server when traffic warrants it, and then incrementally add more backends or caching as your needs grow.

If you are planning a new project or want to refactor an existing “all-in-one” server into a cleaner architecture, our team can help you choose the right combination of VPS, dedicated servers, or colocation in our data centers and design a practical Nginx-based stack that fits your budget and growth plans. When you are ready, reach out to us at dchost.com and we will be happy to translate this blueprint into a production-ready deployment tailored to your workload.

Frequently Asked Questions

Can I start with a single VPS instead of separate front and backend servers?

You can absolutely start with a single VPS where Nginx, your application, and the database all live together. For very small projects or early prototypes, this is the most cost‑effective and simplest option. The architecture in this article still applies: Nginx can proxy to backends running on different ports on the same machine. As traffic grows or you need cleaner isolation, you can move Nginx to its own front VPS and point the upstreams to separate backend servers instead. The nice part is that the Nginx configuration changes very little when you split things out later, so you are not locking yourself into a different pattern.

How much CPU and RAM do the front proxy and backend servers need?

For most small projects, the Nginx reverse proxy itself is very lightweight. A front VPS with 1–2 vCPU and 1–2 GB RAM is usually sufficient, even at tens of thousands of requests per day, especially if you enable microcaching. The heavier work usually happens on the backend VPS: PHP, Node.js, or database processes consume more CPU and memory. A common starting point we see is 2–4 vCPU and 4–8 GB RAM for the backend, and a smaller front server. You can monitor CPU, RAM, and load on both sides and adjust as needed; upgrading the backend first often brings the biggest benefit.

What is the easiest way to handle SSL certificates in this setup?

The most straightforward way is to use Let’s Encrypt with Certbot directly on the front Nginx VPS. You point your domain’s DNS A/AAAA records to the front server, install Certbot and its Nginx plugin, and run a command like "certbot --nginx -d example.com -d www.example.com". Certbot will obtain a free certificate, configure the SSL directives in your Nginx server block, and set up automatic renewal. Your backend servers then only speak plain HTTP over a private or firewalled network, while Nginx handles all TLS termination and renewals centrally. This keeps certificate management in one place and simplifies backend configuration.

How do I avoid session problems (logins, shopping carts) when load balancing?

The most robust approach is to move session storage out of local memory or disk and into a shared store such as Redis or a database table. That way any backend can serve any request for a logged‑in user or shopping cart without issues. If you cannot change session storage right away, Nginx offers a basic sticky solution via the ip_hash directive in the upstream block, which pins each client IP to the same backend. This works for many small projects but has limitations, especially with users behind large NAT gateways. For serious e‑commerce or SaaS workloads, we strongly recommend shared session storage plus a cache layer.

Can I mix different technology stacks (PHP, Node.js, Python) behind the same Nginx proxy?

Yes. Nginx is agnostic to the backend language; it only cares that the backend speaks HTTP. You can define multiple upstream blocks—one for a PHP backend (Nginx in front of PHP-FPM), one for Node.js (managed by PM2 or systemd), one for a Python stack like Django or FastAPI—and route different paths or hostnames to each. For example, /api could go to a Node.js backend, /admin to a Django service, and the main site to a PHP application. This is a very common pattern we deploy for customers who are gradually migrating from one stack to another, or who need separate services for public sites, internal tools, and APIs behind the same domain.
Yes. Nginx is agnostic to the backend language; it only cares that the backend speaks HTTP. You can define multiple upstream blocks—one for PHP-FPM, one for Node.js (managed by PM2 or systemd), one for a Python stack like Django or FastAPI—and route different paths or hostnames to each. For example, /api could go to a Node.js backend, /admin to a Django service, and the main site to a PHP application. This is a very common pattern we deploy for customers who are gradually migrating from one stack to another, or who need separate services for public sites, internal tools, and APIs behind the same domain.