{"id":3523,"date":"2025-12-27T19:48:08","date_gmt":"2025-12-27T16:48:08","guid":{"rendered":"https:\/\/www.dchost.com\/blog\/nginx-reverse-proxy-and-simple-load-balancer-setup-for-small-projects\/"},"modified":"2025-12-27T19:48:08","modified_gmt":"2025-12-27T16:48:08","slug":"nginx-reverse-proxy-and-simple-load-balancer-setup-for-small-projects","status":"publish","type":"post","link":"https:\/\/www.dchost.com\/blog\/en\/nginx-reverse-proxy-and-simple-load-balancer-setup-for-small-projects\/","title":{"rendered":"Nginx Reverse Proxy and Simple Load Balancer Setup for Small Projects"},"content":{"rendered":"<div class=\"dchost-blog-content-wrapper\"><p>For many small projects, the first deployment runs on a single <a href=\"https:\/\/www.dchost.com\/vps\">VPS<\/a>: web server, application, and database all on one machine. It works well at the start, but as traffic grows or you add more services (API, admin panel, background workers), things become harder to manage. <a href=\"https:\/\/www.dchost.com\/ssl\">SSL certificate<\/a>s are scattered, firewall rules get complex, and you have no easy way to add a second application server when you need more capacity. This is exactly where a lightweight Nginx reverse proxy and simple load balancer architecture solves real, everyday problems without forcing you into heavyweight, enterprise-style setups.<\/p>\n<p>In this guide, we will walk through a practical, step-by-step Nginx configuration that we use frequently for dchost.com customers running small SaaS apps, WooCommerce stores, landing pages, or internal tools. We will keep the design intentionally simple: one front-facing Nginx reverse proxy and one or more backend application servers. 
You will learn why this pattern works so well for small projects, how to configure it on a VPS, and how to extend it into a basic load balancer when you need more performance or redundancy.<\/p>\n<div id=\"toc_container\" class=\"toc_transparent no_bullets\"><p class=\"toc_title\">Contents<\/p><ul class=\"toc_list\"><li><a href=\"#What_a_Nginx_Reverse_Proxy_and_Load_Balancer_Actually_Do_for_You\"><span class=\"toc_number toc_depth_1\">1<\/span> What an Nginx Reverse Proxy and Load Balancer Actually Do for You<\/a><\/li><li><a href=\"#Reference_Architecture_for_Small_Projects\"><span class=\"toc_number toc_depth_1\">2<\/span> Reference Architecture for Small Projects<\/a><\/li><li><a href=\"#Preparing_Your_Servers_OS_Security_and_DNS\"><span class=\"toc_number toc_depth_1\">3<\/span> Preparing Your Servers: OS, Security and DNS<\/a><ul><li><a href=\"#1_Provision_your_servers\"><span class=\"toc_number toc_depth_2\">3.1<\/span> 1. Provision your servers<\/a><\/li><li><a href=\"#2_Secure_the_basics_on_each_VPS\"><span class=\"toc_number toc_depth_2\">3.2<\/span> 2. Secure the basics on each VPS<\/a><\/li><li><a href=\"#3_Configure_DNS\"><span class=\"toc_number toc_depth_2\">3.3<\/span> 3. Configure DNS<\/a><\/li><\/ul><\/li><li><a href=\"#StepbyStep_Nginx_as_a_Reverse_Proxy_to_a_Single_Backend\"><span class=\"toc_number toc_depth_1\">4<\/span> Step\u2011by\u2011Step: Nginx as a Reverse Proxy to a Single Backend<\/a><ul><li><a href=\"#1_Install_Nginx_on_the_front_VPS\"><span class=\"toc_number toc_depth_2\">4.1<\/span> 1. Install Nginx on the front VPS<\/a><\/li><li><a href=\"#2_Define_your_backend_upstream\"><span class=\"toc_number toc_depth_2\">4.2<\/span> 2. Define your backend upstream<\/a><\/li><li><a href=\"#3_Create_the_reverse_proxy_server_block\"><span class=\"toc_number toc_depth_2\">4.3<\/span> 3. 
Create the reverse proxy server block<\/a><\/li><li><a href=\"#4_Obtain_a_free_Lets_Encrypt_SSL_certificate\"><span class=\"toc_number toc_depth_2\">4.4<\/span> 4. Obtain a free Let\u2019s Encrypt SSL certificate<\/a><\/li><li><a href=\"#5_Configure_the_backend_application_to_trust_proxy_headers\"><span class=\"toc_number toc_depth_2\">4.5<\/span> 5. Configure the backend application to trust proxy headers<\/a><\/li><\/ul><\/li><li><a href=\"#Turning_the_Reverse_Proxy_into_a_Simple_Load_Balancer\"><span class=\"toc_number toc_depth_1\">5<\/span> Turning the Reverse Proxy into a Simple Load Balancer<\/a><ul><li><a href=\"#1_Add_multiple_backend_servers_to_the_upstream\"><span class=\"toc_number toc_depth_2\">5.1<\/span> 1. Add multiple backend servers to the upstream<\/a><\/li><li><a href=\"#2_Optional_Choose_a_different_load_balancing_method\"><span class=\"toc_number toc_depth_2\">5.2<\/span> 2. Optional: Choose a different load balancing method<\/a><\/li><li><a href=\"#3_Reload_Nginx_and_verify\"><span class=\"toc_number toc_depth_2\">5.3<\/span> 3. Reload Nginx and verify<\/a><\/li><li><a href=\"#4_Dealing_with_sticky_sessions_login_carts_etc\"><span class=\"toc_number toc_depth_2\">5.4<\/span> 4. Dealing with sticky sessions (login, carts, etc.)<\/a><\/li><\/ul><\/li><li><a href=\"#Routing_Multiple_Apps_Behind_One_Nginx_Reverse_Proxy\"><span class=\"toc_number toc_depth_1\">6<\/span> Routing Multiple Apps Behind One Nginx Reverse Proxy<\/a><ul><li><a href=\"#1_Path-based_routing\"><span class=\"toc_number toc_depth_2\">6.1<\/span> 1. Path-based routing<\/a><\/li><li><a href=\"#2_Hostname-based_routing\"><span class=\"toc_number toc_depth_2\">6.2<\/span> 2. Hostname-based routing<\/a><\/li><\/ul><\/li><li><a href=\"#Performance_Tuning_Timeouts_Caching_and_Logs\"><span class=\"toc_number toc_depth_1\">7<\/span> Performance Tuning: Timeouts, Caching and Logs<\/a><ul><li><a href=\"#1_Reasonable_timeouts\"><span class=\"toc_number toc_depth_2\">7.1<\/span> 1. 
Reasonable timeouts<\/a><\/li><li><a href=\"#2_Microcaching_for_dynamic_sites\"><span class=\"toc_number toc_depth_2\">7.2<\/span> 2. Microcaching for dynamic sites<\/a><\/li><li><a href=\"#3_Logging_and_observability\"><span class=\"toc_number toc_depth_2\">7.3<\/span> 3. Logging and observability<\/a><\/li><\/ul><\/li><li><a href=\"#Hosting_Different_Stacks_Behind_Nginx\"><span class=\"toc_number toc_depth_1\">8<\/span> Hosting Different Stacks Behind Nginx<\/a><\/li><li><a href=\"#When_to_Evolve_Beyond_This_Simple_Architecture\"><span class=\"toc_number toc_depth_1\">9<\/span> When to Evolve Beyond This Simple Architecture<\/a><\/li><li><a href=\"#Summary_and_How_dchostcom_Fits_In\"><span class=\"toc_number toc_depth_1\">10<\/span> Summary and How dchost.com Fits In<\/a><\/li><\/ul><\/div>\n<h2><span id=\"What_a_Nginx_Reverse_Proxy_and_Load_Balancer_Actually_Do_for_You\">What an Nginx Reverse Proxy and Load Balancer Actually Do for You<\/span><\/h2>\n<p>Before touching any configuration files, it helps to be very clear about what role Nginx will play in this architecture. On your front server, Nginx will act as:<\/p>\n<ul>\n<li><strong>Reverse proxy:<\/strong> Accepts HTTP\/HTTPS requests from the internet and forwards them to one or more internal application servers.<\/li>\n<li><strong>SSL terminator:<\/strong> Handles TLS\/SSL certificates, so backends can speak plain HTTP on a private network.<\/li>\n<li><strong>Router:<\/strong> Sends different paths or hostnames to different backends (e.g. 
\/api to one server, \/app to another).<\/li>\n<li><strong>Simple load balancer:<\/strong> Distributes requests across multiple backend servers using built-in algorithms.<\/li>\n<\/ul>\n<p>This architecture gives you several concrete benefits:<\/p>\n<ul>\n<li><strong>Centralised SSL management:<\/strong> You only manage certificates on the front Nginx server, instead of repeating the work on each app box.<\/li>\n<li><strong>Cleaner deployments:<\/strong> You can upgrade or replace backend servers without changing public DNS or touching client-side configs.<\/li>\n<li><strong>Easier scaling:<\/strong> When CPU or RAM becomes tight, you can add a second backend and let Nginx distribute the load.<\/li>\n<li><strong>Better separation of concerns:<\/strong> The front server focuses on HTTP, security headers, and caching; backend servers focus on PHP, Node.js, or other application runtimes.<\/li>\n<\/ul>\n<p>If you are already using Nginx directly on a single VPS for your app, this guide will simply move that Nginx layer to a dedicated front server and wire it up cleanly to your backends.<\/p>\n<h2><span id=\"Reference_Architecture_for_Small_Projects\">Reference Architecture for Small Projects<\/span><\/h2>\n<p>Let\u2019s define a concrete, minimal architecture we\u2019ll build in this article.<\/p>\n<ul>\n<li><strong>Front VPS:<\/strong> Public-facing Nginx reverse proxy and load balancer. Has a public IP and DNS A\/AAAA records for your domain. Handles SSL.<\/li>\n<li><strong>Backend VPS #1:<\/strong> Runs your main application (e.g. PHP-FPM, Node.js, Python, Ruby). 
Accessible from the front VPS over a private network or firewall-restricted public IP.<\/li>\n<li><strong>Optional Backend VPS #2:<\/strong> Identical or similar app server to share load or act as failover.<\/li>\n<li><strong>Database server:<\/strong> May live on Backend #1 for very small setups, or on a separate VPS\/<a href=\"https:\/\/www.dchost.com\/dedicated-server\">dedicated server<\/a> when you grow.<\/li>\n<\/ul>\n<p>At dchost.com we see this pattern a lot for:<\/p>\n<ul>\n<li>New SaaS products that need a clean path to scale without re-architecture.<\/li>\n<li>WooCommerce or custom e\u2011commerce sites preparing for future traffic spikes.<\/li>\n<li>Internal dashboards and APIs where you want to isolate public traffic from the app machines.<\/li>\n<\/ul>\n<p>If you are still at the \u201cone VPS for everything\u201d stage, you can apply the same concepts on a single machine (Nginx reverse proxy in front of multiple local services). Later, migrating to separate backend servers becomes much easier.<\/p>\n<h2><span id=\"Preparing_Your_Servers_OS_Security_and_DNS\">Preparing Your Servers: OS, Security and DNS<\/span><\/h2>\n<p>We will assume Ubuntu 22.04 or Debian 12 on all servers, but the Nginx configuration is almost identical on other modern Linux distributions.<\/p>\n<h3><span id=\"1_Provision_your_servers\">1. 
Provision your servers<\/span><\/h3>\n<p>You will need at least:<\/p>\n<ul>\n<li>1 VPS for the <strong>front Nginx reverse proxy<\/strong><\/li>\n<li>1 VPS for the <strong>backend application<\/strong>, ideally in the same data center or region for low latency<\/li>\n<\/ul>\n<p>For most small projects, we\u2019ve found this to be a good starting point:<\/p>\n<ul>\n<li><strong>Front Nginx VPS:<\/strong> 1\u20132 vCPU, 1\u20132 GB RAM<\/li>\n<li><strong>Backend VPS:<\/strong> 2\u20134 vCPU, 4\u20138 GB RAM (depending on language\/runtime and expected traffic)<\/li>\n<\/ul>\n<p>You can adjust these based on your actual usage; our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/yeni-web-sitesi-icin-cpu-ram-ve-trafik-nasil-hesaplanir\/\">how to estimate CPU, RAM and bandwidth for a new website<\/a> is a helpful reference when sizing.<\/p>\n<h3><span id=\"2_Secure_the_basics_on_each_VPS\">2. Secure the basics on each VPS<\/span><\/h3>\n<p>Before opening ports to the world, make sure you have the fundamentals in place:<\/p>\n<ul>\n<li>System fully updated (<code>apt update &amp;&amp; apt upgrade<\/code>)<\/li>\n<li>Non-root user with sudo access<\/li>\n<li>SSH key authentication and password login disabled (or at least restricted)<\/li>\n<li>Firewall allowing only necessary ports (80\/443 on the front Nginx; app-specific ports between servers only)<\/li>\n<\/ul>\n<p>If you want a concrete checklist, see our detailed post <a href=\"https:\/\/www.dchost.com\/blog\/en\/yeni-vpste-ilk-24-saat-guncelleme-guvenlik-duvari-ve-kullanici-hesaplari\/\">on what to do in the first 24 hours on a new VPS<\/a>. For a deeper dive into hardening, we also maintain a friendly guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-sunucu-guvenligi-nasil-saglanir-kapiyi-acik-birakmadan-yasamanin-sirri\/\">securing a VPS server without leaving doors open<\/a>.<\/p>\n<h3><span id=\"3_Configure_DNS\">3. 
Configure DNS<\/span><\/h3>\n<p>Point your domain (or subdomain) to the front Nginx VPS:<\/p>\n<ul>\n<li>Create an <strong>A<\/strong> record (and <strong>AAAA<\/strong> if you use IPv6) for <code>example.com<\/code> and\/or <code>www.example.com<\/code> to the front VPS IP.<\/li>\n<li>Backend servers do <strong>not<\/strong> need public DNS records; they can remain internal.<\/li>\n<\/ul>\n<p>Once DNS points to the front server, all incoming traffic will be routed via Nginx and then proxied to your backends.<\/p>\n<h2><span id=\"StepbyStep_Nginx_as_a_Reverse_Proxy_to_a_Single_Backend\">Step\u2011by\u2011Step: Nginx as a Reverse Proxy to a Single Backend<\/span><\/h2>\n<p>First, we will configure Nginx to proxy traffic from the public internet to one backend application server. Later, we will turn this into a simple load balancer by adding more servers to the same upstream block.<\/p>\n<h3><span id=\"1_Install_Nginx_on_the_front_VPS\">1. Install Nginx on the front VPS<\/span><\/h3>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo apt update\nsudo apt install nginx -y\n<\/code><\/pre>\n<p>Enable and start Nginx if it\u2019s not already running:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo systemctl enable nginx\nsudo systemctl start nginx\n<\/code><\/pre>\n<p>You should now see the default Nginx welcome page when visiting your server IP in a browser.<\/p>\n<h3><span id=\"2_Define_your_backend_upstream\">2. 
Define your backend upstream<\/span><\/h3>\n<p>Assume your backend application is running on <code>10.0.0.10:8000<\/code> (private network) or <code>192.0.2.11:8000<\/code> (public but firewalled to only accept connections from the front Nginx IP).<\/p>\n<p>Create a new configuration file on the front server, for example:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo nano \/etc\/nginx\/conf.d\/app_upstream.conf\n<\/code><\/pre>\n<p>Add:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">upstream app_backend {\n    server 10.0.0.10:8000;\n}\n<\/code><\/pre>\n<p>This <code>upstream<\/code> block defines a logical name (<code>app_backend<\/code>) that points to your application server. Nginx will use it when proxying requests.<\/p>\n<h3><span id=\"3_Create_the_reverse_proxy_server_block\">3. Create the reverse proxy server block<\/span><\/h3>\n<p>Now create a <code>server<\/code> block that listens on port 80 (HTTP) and proxies traffic to <code>app_backend<\/code>. 
For now, we will focus on HTTP only; we will add HTTPS afterward.<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo nano \/etc\/nginx\/sites-available\/example.com\n<\/code><\/pre>\n<p>Paste:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">server {\n    listen 80;\n    server_name example.com www.example.com;\n\n    # Redirect non-HTTPS to HTTPS (we'll enable SSL later)\n    return 301 https:\/\/$host$request_uri;\n}\n\nserver {\n    listen 443 ssl http2;\n    server_name example.com www.example.com;\n\n    # SSL certificates will be added later\n    ssl_certificate     \/etc\/letsencrypt\/live\/example.com\/fullchain.pem;\n    ssl_certificate_key \/etc\/letsencrypt\/live\/example.com\/privkey.pem;\n\n    # Basic security &amp; proxy headers\n    add_header X-Frame-Options &quot;SAMEORIGIN&quot; always;\n    add_header X-Content-Type-Options &quot;nosniff&quot; always;\n    add_header X-XSS-Protection &quot;1; mode=block&quot; always;\n\n    location \/ {\n        proxy_pass http:\/\/app_backend;\n        proxy_set_header Host $host;\n        proxy_set_header X-Real-IP $remote_addr;\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n        proxy_set_header X-Forwarded-Proto $scheme;\n\n        proxy_http_version 1.1;\n        proxy_set_header Connection &quot;&quot;;\n        proxy_read_timeout 60s;\n        proxy_send_timeout 60s;\n    }\n}\n<\/code><\/pre>\n<p>For now, Nginx will fail to reload because the certificate paths do not exist yet. 
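<\/p>\n<p>While the certificate step is still pending, you can sanity-check that the front VPS can reach the backend over plain HTTP (using the illustrative <code>10.0.0.10:8000<\/code> address from the upstream block):<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\"># Run on the front VPS; expect an HTTP status line from the backend\ncurl -I http:\/\/10.0.0.10:8000\/\n<\/code><\/pre>\n<p>If this returns a response, the proxying path is healthy and only the TLS setup remains.<\/p>\n<p>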
We\u2019ll fix that in the next step.<\/p>\n<p>If you want a deeper dive into HTTP security headers and why settings like <code>X-Frame-Options<\/code> and HSTS matter, you can read our dedicated guide on <a href=\"https:\/\/www.dchost.com\/blog\/en\/http-guvenlik-basliklari-rehberi-hsts-csp-x-frame-options-ve-referrer-policy-dogru-nasil-kurulur\/\">HTTP security headers and how to configure them safely<\/a>.<\/p>\n<h3><span id=\"4_Obtain_a_free_Lets_Encrypt_SSL_certificate\">4. Obtain a free Let\u2019s Encrypt SSL certificate<\/span><\/h3>\n<p>Install Certbot and the Nginx plugin:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo apt install certbot python3-certbot-nginx -y\n<\/code><\/pre>\n<p>Run Certbot to obtain and configure the certificate:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo certbot --nginx -d example.com -d www.example.com\n<\/code><\/pre>\n<p>Certbot will configure SSL directives for you. If you prefer to keep your own server block layout, you can simply point the <code>ssl_certificate<\/code> and <code>ssl_certificate_key<\/code> to the paths Certbot creates under <code>\/etc\/letsencrypt\/live\/<\/code>.<\/p>\n<p>Reload Nginx:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo nginx -t\nsudo systemctl reload nginx\n<\/code><\/pre>\n<p>Your domain should now serve HTTPS traffic, and Nginx will proxy all requests to <code>http:\/\/app_backend<\/code>, which points to your backend server.<\/p>\n<h3><span id=\"5_Configure_the_backend_application_to_trust_proxy_headers\">5. Configure the backend application to trust proxy headers<\/span><\/h3>\n<p>Most frameworks (Laravel, Symfony, Django, Express, Rails, etc.) need to be told which headers to trust so they can correctly detect the client IP and HTTPS status.<\/p>\n<ul>\n<li>For PHP\/Laravel, ensure <code>APP_URL<\/code> uses <code>https:\/\/<\/code> and set trusted proxies (e.g. 
in Laravel\u2019s <code>TrustProxies<\/code> middleware).<\/li>\n<li>For Node.js\/Express, set <code>app.set('trust proxy', true)<\/code> so it respects <code>X-Forwarded-Proto<\/code> and <code>X-Forwarded-For<\/code>.<\/li>\n<\/ul>\n<p>Correct proxy configuration avoids issues like infinite redirect loops or all visitors appearing to have the same IP address (the Nginx server\u2019s IP).<\/p>\n<h2><span id=\"Turning_the_Reverse_Proxy_into_a_Simple_Load_Balancer\">Turning the Reverse Proxy into a Simple Load Balancer<\/span><\/h2>\n<p>Once the reverse proxy works with a single backend, turning it into a basic load balancer is surprisingly easy: you simply add more <code>server<\/code> lines inside the same <code>upstream<\/code> block.<\/p>\n<h3><span id=\"1_Add_multiple_backend_servers_to_the_upstream\">1. Add multiple backend servers to the upstream<\/span><\/h3>\n<p>Edit the upstream definition:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo nano \/etc\/nginx\/conf.d\/app_upstream.conf\n<\/code><\/pre>\n<p>Change:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">upstream app_backend {\n    server 10.0.0.10:8000;\n}\n<\/code><\/pre>\n<p>To:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">upstream app_backend {\n    server 10.0.0.10:8000 max_fails=3 fail_timeout=30s;\n    server 10.0.0.11:8000 max_fails=3 fail_timeout=30s;\n}\n<\/code><\/pre>\n<p>By default, Nginx uses <strong>round-robin<\/strong> load balancing: each new request is sent to the next server in the list. The <code>max_fails<\/code> and <code>fail_timeout<\/code> parameters provide a simple passive health check: if a server fails too many times in a time window, Nginx stops sending traffic to it temporarily.<\/p>\n<h3><span id=\"2_Optional_Choose_a_different_load_balancing_method\">2. 
Optional: Choose a different load balancing method<\/span><\/h3>\n<p>Nginx supports several built-in algorithms for distributing requests:<\/p>\n<ul>\n<li><strong>Round-robin (default):<\/strong> Evenly rotates through servers. Good default choice.<\/li>\n<li><strong>least_conn:<\/strong> Sends new requests to the server with the fewest active connections. Useful for long-running requests (e.g. file uploads).<\/li>\n<li><strong>ip_hash:<\/strong> Keeps the same client IP bound to the same backend, which can help with basic session affinity (sticky sessions).<\/li>\n<\/ul>\n<p>To enable <code>least_conn<\/code>:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">upstream app_backend {\n    least_conn;\n    server 10.0.0.10:8000 max_fails=3 fail_timeout=30s;\n    server 10.0.0.11:8000 max_fails=3 fail_timeout=30s;\n}\n<\/code><\/pre>\n<p>To enable IP-based sticky sessions with <code>ip_hash<\/code>:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">upstream app_backend {\n    ip_hash;\n    server 10.0.0.10:8000 max_fails=3 fail_timeout=30s;\n    server 10.0.0.11:8000 max_fails=3 fail_timeout=30s;\n}\n<\/code><\/pre>\n<p>Note that <code>ip_hash<\/code> has some limitations (e.g. many clients behind the same NAT gateway or corporate proxy all hash to one backend, which can skew the load), but for small projects it is often enough to satisfy basic session stickiness requirements.<\/p>\n<h3><span id=\"3_Reload_Nginx_and_verify\">3. Reload Nginx and verify<\/span><\/h3>\n<p>Test and reload:<\/p>\n<pre class=\"language-bash line-numbers\"><code class=\"language-bash\">sudo nginx -t\nsudo systemctl reload nginx\n<\/code><\/pre>\n<p>You can verify that traffic is hitting both backends by checking application logs on each backend server, or by exposing a simple status endpoint that shows which instance handled the request.<\/p>\n<h3><span id=\"4_Dealing_with_sticky_sessions_login_carts_etc\">4. 
Dealing with sticky sessions (login, carts, etc.)<\/span><\/h3>\n<p>If your application relies heavily on in-memory sessions (for example, PHP sessions stored on local disk or Node.js sessions in memory), round-robin balancing can cause \u201crandom logouts\u201d or inconsistent cart behaviour. To avoid this:<\/p>\n<ul>\n<li>Use <strong>shared session storage<\/strong> (Redis, database) so any backend can handle any user.<\/li>\n<li>Or use <strong>sticky sessions<\/strong> with <code>ip_hash<\/code> in Nginx, understanding it\u2019s a basic solution tied to client IP.<\/li>\n<\/ul>\n<p>For production WooCommerce or complex carts, we strongly recommend shared session storage plus caching. Our post on <a href=\"https:\/\/www.dchost.com\/blog\/en\/wordpressi-docker-ile-konteynerize-etmek-tek-vpste-traefik-nginx-reverse-proxy-ile-uretim-mimarisi-nasil-kurulur\/\">containerising WordPress with Nginx reverse proxy in front<\/a> and our various WooCommerce performance guides go deeper into how to design these layers cleanly.<\/p>\n<h2><span id=\"Routing_Multiple_Apps_Behind_One_Nginx_Reverse_Proxy\">Routing Multiple Apps Behind One Nginx Reverse Proxy<\/span><\/h2>\n<p>A big advantage of this pattern is that you can host multiple services behind the same front Nginx, each on its own backend or port. Common scenarios:<\/p>\n<ul>\n<li><strong>SPA + API:<\/strong> A Vue\/React frontend on one backend and an API (Laravel, Node.js) on another.<\/li>\n<li><strong>Admin vs public site:<\/strong> Admin panel and public site on separate servers for security or performance reasons.<\/li>\n<li><strong>Legacy + new app:<\/strong> Old application and new microservice running side by side.<\/li>\n<\/ul>\n<h3><span id=\"1_Path-based_routing\">1. 
Path-based routing<\/span><\/h3>\n<p>Example: <code>\/<\/code> goes to the main PHP app, <code>\/api\/<\/code> goes to a Node.js API.<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">upstream php_app {\n    server 10.0.0.10:8080;\n}\n\nupstream node_api {\n    server 10.0.0.20:3000;\n}\n\nserver {\n    listen 443 ssl http2;\n    server_name example.com;\n\n    # SSL config omitted for brevity\n\n    location \/api\/ {\n        proxy_pass http:\/\/node_api\/;\n        proxy_set_header Host $host;\n        proxy_set_header X-Real-IP $remote_addr;\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n        proxy_set_header X-Forwarded-Proto $scheme;\n    }\n\n    location \/ {\n        proxy_pass http:\/\/php_app\/;\n        proxy_set_header Host $host;\n        proxy_set_header X-Real-IP $remote_addr;\n        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n        proxy_set_header X-Forwarded-Proto $scheme;\n    }\n}\n<\/code><\/pre>\n<p>This pattern is very common for SPAs calling an internal API. Note that the trailing slash in <code>proxy_pass http:\/\/node_api\/;<\/code> makes Nginx strip the <code>\/api\/<\/code> prefix before forwarding (so <code>\/api\/users<\/code> reaches the backend as <code>\/users<\/code>); drop the trailing slash if your API expects the full path. If you are curious about the benefits of hosting SPA and API on the same domain name, we explored this in detail in our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/react-vue-ve-angular-single-page-applicationlari-ayni-alan-adinda-api-ile-host-etmek-nginx-yonlendirme-ve-ssl-mimarisi\/\">hosting single-page applications and APIs under one domain with Nginx routing<\/a>.<\/p>\n<h3><span id=\"2_Hostname-based_routing\">2. Hostname-based routing<\/span><\/h3>\n<p>You can also route based on <strong>hostnames<\/strong> instead of paths. 
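<\/p>\n<p>A minimal sketch, reusing the <code>php_app<\/code> and <code>node_api<\/code> upstreams from the previous example (the hostnames are illustrative):<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">server {\n    listen 443 ssl http2;\n    server_name app.example.com;\n    # SSL and proxy headers as in the earlier examples\n\n    location \/ {\n        proxy_pass http:\/\/php_app;\n    }\n}\n\nserver {\n    listen 443 ssl http2;\n    server_name api.example.com;\n    # SSL and proxy headers as in the earlier examples\n\n    location \/ {\n        proxy_pass http:\/\/node_api;\n    }\n}\n<\/code><\/pre>\n<p>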
For example, <code>app.example.com<\/code> and <code>api.example.com<\/code> can each have their own <code>server<\/code> block with different <code>proxy_pass<\/code> targets, while still using the same front Nginx instance and IP address.<\/p>\n<h2><span id=\"Performance_Tuning_Timeouts_Caching_and_Logs\">Performance Tuning: Timeouts, Caching and Logs<\/span><\/h2>\n<p>Even in a simple architecture, a bit of tuning goes a long way. Here are some practical settings we apply frequently for small projects.<\/p>\n<h3><span id=\"1_Reasonable_timeouts\">1. Reasonable timeouts<\/span><\/h3>\n<p>Default Nginx timeouts are often too high or too low for real workloads. Some useful directives in your <code>location<\/code> block:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">proxy_connect_timeout   5s;\nproxy_send_timeout      60s;\nproxy_read_timeout      60s;\nsend_timeout            60s;\n<\/code><\/pre>\n<p>If your app regularly needs longer than 60 seconds to respond, it\u2019s usually better to move heavy work to background jobs rather than just increasing timeouts. Our write-up on <a href=\"https:\/\/www.dchost.com\/blog\/en\/vps-uzerinde-arka-plan-isleri-ve-kuyruk-yonetimi-laravel-queue-supervisor-systemd-ve-pm2\/\">why background jobs matter so much on a VPS<\/a> gives practical patterns for moving slow tasks out of the request\/response path.<\/p>\n<h3><span id=\"2_Microcaching_for_dynamic_sites\">2. Microcaching for dynamic sites<\/span><\/h3>\n<p>One of the most powerful tricks on small Nginx-based stacks is <strong>microcaching<\/strong>: caching dynamic responses for just 1\u20135 seconds. 
For many workloads (news pages, product listings, homepages), this can cut backend load by 50\u201390% with almost no risk of serving stale content for too long.<\/p>\n<p>At a high level, you define a small cache zone and enable it on the relevant locations:<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">proxy_cache_path \/var\/cache\/nginx levels=1:2 keys_zone=microcache:10m max_size=1g inactive=60s use_temp_path=off;\n\nserver {\n    # ...\n    location \/ {\n        proxy_cache microcache;\n        proxy_cache_valid 200 1s;\n        proxy_cache_valid 301 302 10s;\n        proxy_cache_valid any 0s;\n        add_header X-Cache-Status $upstream_cache_status;\n\n        proxy_pass http:\/\/app_backend;\n        # proxy headers\n    }\n}\n<\/code><\/pre>\n<p>We have a full, dedicated guide on this pattern in our article <a href=\"https:\/\/www.dchost.com\/blog\/en\/nginx-mikro-onbellekleme-ile-php-uygulamalarini-ucurmak-1-5-sn-cache-bypass-ve-purge-ne-zaman-nasil\/\">about Nginx microcaching and how 1\u20135 second caches can make PHP apps feel instant<\/a>. It includes details on cache bypassing, purging, and avoiding issues with logged-in users.<\/p>\n<h3><span id=\"3_Logging_and_observability\">3. Logging and observability<\/span><\/h3>\n<p>Nginx access and error logs are your first line of insight into what\u2019s happening:<\/p>\n<ul>\n<li>Monitor <code>\/var\/log\/nginx\/access.log<\/code> for status codes, response times, peaks.<\/li>\n<li>Monitor <code>\/var\/log\/nginx\/error.log<\/code> for upstream timeouts, connection errors, and misconfigurations.<\/li>\n<\/ul>\n<p>Add <code>$upstream_response_time<\/code> and <code>$request_time<\/code> to your log format to see how much time is spent in Nginx vs the backend. 
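<\/p>\n<p>As a sketch, a timing-aware log format can be declared in the <code>http<\/code> context (the format name <code>timed<\/code> is arbitrary):<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\"># In \/etc\/nginx\/nginx.conf, inside the http { } block\nlog_format timed '$remote_addr [$time_local] &quot;$request&quot; $status '\n                 'rt=$request_time urt=$upstream_response_time';\n\naccess_log \/var\/log\/nginx\/access.log timed;\n<\/code><\/pre>\n<p>Here <code>rt<\/code> is the total time Nginx spent on the request and <code>urt<\/code> the time the backend took; a large gap between them points at the proxy layer rather than the application.<\/p>\n<p>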
For larger setups, we often ship these logs into a central system (Loki, ELK, etc.), but for small projects, even basic log review plus tools like <code>goaccess<\/code> or simple shell filters can reveal a lot.<\/p>\n<p>If you want to understand web server logs more deeply, we covered this step-by-step in <a href=\"https:\/\/www.dchost.com\/blog\/en\/hosting-sunucu-loglarini-okumayi-ogrenin-apache-ve-nginx-ile-4xx-5xx-hatalarini-teshis-rehberi\/\">our guide to reading web server logs and diagnosing 4xx\u20135xx errors on Apache and Nginx<\/a>.<\/p>\n<h2><span id=\"Hosting_Different_Stacks_Behind_Nginx\">Hosting Different Stacks Behind Nginx<\/span><\/h2>\n<p>One strength of this architecture is that Nginx doesn\u2019t care what technology your backend uses\u2014as long as it speaks HTTP. A few concrete examples we see often at dchost.com:<\/p>\n<ul>\n<li><strong>PHP (Laravel, Symfony, WordPress):<\/strong> Nginx proxies to PHP-FPM running locally on a backend VPS.<\/li>\n<li><strong>Node.js (Express, NestJS, Next.js):<\/strong> Nginx proxies to one or more Node.js processes managed with PM2 or systemd.<\/li>\n<li><strong>Python (Django, Flask, FastAPI):<\/strong> Nginx proxies to Gunicorn\/Uvicorn workers bound to localhost.<\/li>\n<\/ul>\n<p>For a detailed, real-world Node.js example behind Nginx, you can check our article on <a href=\"https:\/\/www.dchost.com\/blog\/en\/node-jsi-canliya-alirken-panik-yapma-pm2-systemd-nginx-ssl-ve-sifir-kesinti-deploy-nasil-kurulur\/\">hosting Node.js in production with PM2, Nginx, SSL, and zero\u2011downtime deploys<\/a>. 
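<\/p>\n<p>One wrinkle worth noting for Node.js and other apps that use WebSockets: the proxy must forward the protocol upgrade explicitly. A minimal sketch, reusing the <code>app_backend<\/code> upstream from earlier (the <code>\/ws\/<\/code> path is illustrative):<\/p>\n<pre class=\"language-nginx line-numbers\"><code class=\"language-nginx\">location \/ws\/ {\n    proxy_pass http:\/\/app_backend;\n    proxy_http_version 1.1;\n    # Pass the WebSocket handshake headers through\n    proxy_set_header Upgrade $http_upgrade;\n    proxy_set_header Connection &quot;upgrade&quot;;\n    proxy_set_header Host $host;\n    # Long-lived connections need a generous read timeout\n    proxy_read_timeout 300s;\n}\n<\/code><\/pre>\n<p>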
The reverse proxy layer is the same pattern we\u2019ve been describing here.<\/p>\n<h2><span id=\"When_to_Evolve_Beyond_This_Simple_Architecture\">When to Evolve Beyond This Simple Architecture<\/span><\/h2>\n<p>The Nginx reverse proxy + simple load balancer pattern scales surprisingly far for small and medium projects, but there are clear signs when you should consider the next step:<\/p>\n<ul>\n<li><strong>Single front VPS becomes a bottleneck:<\/strong> CPU, RAM, or network usage on the Nginx front server stays high, even after tuning and microcaching.<\/li>\n<li><strong>Need for high availability:<\/strong> You cannot accept a single point of failure on the front proxy, and you want automatic failover between multiple front nodes.<\/li>\n<li><strong>Complex routing and security rules:<\/strong> Many apps, custom WAF rules, geo\u2011routing, or advanced rate limiting might require a more specialised setup.<\/li>\n<li><strong>Multi\u2011region or multi\u2011data center deployments:<\/strong> You want users automatically routed to the closest region with DNS\u2011level balancing.<\/li>\n<\/ul>\n<p>At that stage, you might introduce a second Nginx front with anycast or DNS failover, or look at dedicated load balancers or Kubernetes-based approaches. We also help customers move from classic VPS stacks into more advanced clusters when the time is right, without forcing premature complexity on small projects.<\/p>\n<h2><span id=\"Summary_and_How_dchostcom_Fits_In\">Summary and How dchost.com Fits In<\/span><\/h2>\n<p>A dedicated Nginx reverse proxy and simple load balancer in front of your application servers is one of those architectures that \u201cjust works\u201d for a long time. 
You centralise SSL, control routing in one place, gain the ability to add or remove backend servers, and open the door to microcaching and fine-grained security headers\u2014all while keeping the design understandable for a small team.<\/p>\n<p>The steps we covered\u2014preparing secure VPS servers, defining upstream blocks, configuring reverse proxy server blocks, adding basic load balancing, and adding small performance optimisations\u2014are exactly what we apply day-to-day for small businesses and SaaS projects hosted on dchost.com. You can start with a single VPS, split out Nginx to a front server when traffic warrants it, and then incrementally add more backends or caching as your needs grow.<\/p>\n<p>If you are planning a new project or want to refactor an existing \u201call-in-one\u201d server into a cleaner architecture, our team can help you choose the right combination of <strong>VPS, dedicated servers, or colocation<\/strong> in our data centers and design a practical Nginx-based stack that fits your budget and growth plans. When you are ready, reach out to us at dchost.com and we will be happy to translate this blueprint into a production-ready deployment tailored to your workload.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>For many small projects, the first deployment runs on a single VPS: web server, application, and database all on one machine. It works well at the start, but as traffic grows or you add more services (API, admin panel, background workers), things become harder to manage. 
SSL certificates are scattered, firewall rules get complex, and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3524,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[26],"tags":[],"class_list":["post-3523","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-teknoloji"],"_links":{"self":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3523","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/comments?post=3523"}],"version-history":[{"count":0,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/posts\/3523\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media\/3524"}],"wp:attachment":[{"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/media?parent=3523"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/categories?post=3523"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dchost.com\/blog\/en\/wp-json\/wp\/v2\/tags?post=3523"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}