The Calm Guide to Linux TCP Tuning for High‑Traffic WordPress & Laravel (Without the Drama)

So there I was, staring at a graph that looked like a heartbeat monitor during a sprint. A client had just pushed a flash sale on their WooCommerce store, and traffic surged like a wave. Pages started to crawl. Checkout lagged. Everyone blamed the database. But the database was fine. Nginx was fine. PHP‑FPM was breathing hard, sure, but not out of air. The quiet culprit? The Linux network stack—the pipes carrying all the tiny packets that make the web dance. The defaults weren’t built for a stampede.

Ever had that moment when everything “looks normal” but your site still feels slow under load? That’s often the TCP/IP layer raising a hand and whispering, “Hey, I need bigger buffers, a wider door, and a line system that actually moves.” In this post, I’ll walk you through practical Linux TCP tuning for high‑traffic WordPress and Laravel apps: sysctl settings that actually matter, sensible UDP buffer sizing (especially if you’re playing with HTTP/3), and the calm way to defend against SYN floods without kneecapping real users. All in a friendly, human way—no wall of jargon, no magic numbers with zero context.

Why Your WordPress/Laravel Site Feels Slow When The Network Stack Can’t Breathe

Here’s the thing about PHP apps: they’re usually blamed last for network hiccups and first for everything else. But WordPress and Laravel don’t live in a vacuum; they sit behind Nginx or Apache, often behind a load balancer, and every single dynamic request is a little conversation—TCP handshake, TLS handshake, request, response, maybe a few keep‑alive hops, and then the connection is parked or retired. When your kernel’s defaults are tuned for a quiet Sunday, a Friday night campaign feels like stuffing a stadium through a single turnstile.

Think of your server like a popular coffee shop. The baristas (PHP‑FPM/your app) can crank out drinks fast, but if the front door is too narrow (accept queue too small), the hallway is clogged (backlog too short), and the cups are tiny (socket buffers too small), the line spills into the street and everyone blames the coffee. I’ve seen this dozens of times: CPU looks fine, RAM fine, even disk okay. But the TCP queues are screaming, SYN packets are retried, and users start hitting refresh like it’s a game.

In my experience, the biggest wins come from widening that front door, giving each customer a bigger cup, and teaching staff to move with the flow. In kernel terms: adjust somaxconn and backlog, increase buffer ceilings, pick a modern congestion control, and give UDP some love if you’ve added HTTP/3 or chatty services. And when traffic turns hostile—accidental thundering herds or real SYN floods—you need controls that protect capacity without tripping real buyers.

A Calm Rollout Plan: Measure, Change, Verify, Repeat

Before we touch sysctl, I always take a snapshot of reality. It doesn’t have to be fancy. I’ll run a quick set of commands while the app is under normal traffic (and again under load if I can simulate safely):

ss -s
ss -ltnp | head -n 20
ss -n state syn-recv '( sport = :80 or sport = :443 )'
netstat -s | egrep 'listen|SYN|retransmit|pruned'
cat /proc/sys/net/core/somaxconn
cat /proc/sys/net/core/netdev_max_backlog
cat /proc/sys/net/ipv4/tcp_max_syn_backlog

This quick peek tells me if the listen queues are overflowing, whether SYN‑RECV is piling up, and whether the server is shedding packets before they even reach Nginx. I also like to keep an eye on dmesg during tests—if the kernel’s complaining about backlog drops or memory pressure, it’s rarely shy.

Now for the “I learned this the hard way” part: never paste a giant sysctl block into production and call it a day. Make incremental changes, apply them with sysctl -p or by dropping a file into /etc/sysctl.d/, and verify. Propagate slowly across a cluster. And keep a rollback file handy. If you’ve got automation, even better—I often bake these settings into first‑boot playbooks. If you like clean, repeatable setups, I’ve shared how I scaffold fresh VPSes with cloud‑init + Ansible for users, security, and services on first boot. It’s a great home for sysctl too.
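
One habit that has saved me more than once: capture the current values of every key you plan to touch before applying anything, so rollback is a one‑liner. Here’s a minimal sketch; the key list and file path are just examples, swap in your own:

# Snapshot current values of the keys you're about to change (example list)
for key in net.core.somaxconn net.ipv4.tcp_max_syn_backlog net.core.rmem_max net.core.wmem_max; do
  echo "$key = $(sysctl -n "$key")"
done | sudo tee /root/sysctl-rollback-$(date +%F).conf

# Roll back later with:
# sudo sysctl -p /root/sysctl-rollback-YYYY-MM-DD.conf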

The Core sysctl Moves That Make High‑Traffic HTTP Feel Easy

Let’s talk about the heartbeat of it all. The goal is simple: accept connections quickly, give each socket enough room to breathe, and keep the kernel from tripping over its own shoelaces under bursty traffic. These are the settings I reach for most often, with a few notes on why they matter.

Front door wider: listen backlog and accept queues

Two settings almost always help right away: net.core.somaxconn and net.ipv4.tcp_max_syn_backlog. The first caps the maximum length of the socket listen queue; the second controls how many connections in SYN‑RECV can be queued before the kernel starts dropping. When traffic spikes, a bigger queue buys your app time to accept connections. Pair this with your app server’s own backlog settings. For Nginx, the listen directive has a backlog parameter you can tweak; their docs on the listen directive and backlog are worth a quick skim.

Buffers that fit modern bandwidth

You’ll see four important knobs: net.core.rmem_max, net.core.wmem_max, and per‑protocol triplets net.ipv4.tcp_rmem and net.ipv4.tcp_wmem. The triplets set minimum, default, and maximum auto‑tuned buffer sizes. Bigger ceilings help long‑fat pipes (high bandwidth, higher latency) and bursty workloads under TLS. Don’t set them to cartoonish values, but do lift them beyond decade‑old defaults.

Modern congestion control

CUBIC is a solid default. BBR can be a real win for web delivery by keeping queues shallow while pushing throughput. If your kernel supports it, try BBR in a controlled rollout. The official BBR repository has background and links, but what matters in practice is watching your retransmits, tail latency, and 95th percentile request times before and after.
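
If you want to try BBR on a single node first, a quick runtime‑only check‑and‑flip looks something like this (persist it in /etc/sysctl.d/ only once you’re happy with the numbers):

# Is BBR available on this kernel?
sysctl net.ipv4.tcp_available_congestion_control
sudo modprobe tcp_bbr   # load the module if it isn't compiled in

# Flip it on at runtime, paired with the fq qdisc
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
sysctl net.ipv4.tcp_congestion_control   # confirm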

Graceful timewait and port range

Under load testing, you’ll often run out of ephemeral ports on the client side, but servers sitting behind reverse proxies also benefit from a healthy ip_local_port_range, so I usually open it up. I also set a reasonable tcp_fin_timeout so sockets stuck in FIN‑WAIT‑2 don’t linger forever. tcp_tw_reuse can help on boxes that initiate lots of outbound connections, but don’t bring back the ancient tcp_tw_recycle; it was removed from the kernel for good reason.
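
When I suspect port pressure, two quick checks tell me most of what I need (the numbers vary wildly by workload, so treat them as a trend rather than a threshold):

# How many sockets are parked in TIME-WAIT right now?
ss -tan state time-wait | wc -l

# What range of ephemeral ports is available?
cat /proc/sys/net/ipv4/ip_local_port_range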

Keepalives and probing

Long‑lived connections are great for HTTP/2 and gRPC. Lowering tcp_keepalive_time and enabling tcp_mtu_probing can reduce stalls when paths are weird. For most web stacks, I leave SACK and timestamps on.

A sensible starting block

Here’s a sample sysctl file I’ve used as a base on busy web nodes. It’s not a magic recipe—tune to your capacity, kernel version, and traffic pattern—but it’s a friendly place to start.

# /etc/sysctl.d/99-web-tuning.conf

# 1) Bigger door for bursts
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 8192
net.core.netdev_max_backlog = 16384

# 2) Buffer ceilings (TCP)
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 1048576 8388608
net.ipv4.tcp_wmem = 4096 1048576 8388608

# 3) Keep the flow modern
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# 4) Handshake sanity and timeouts
net.ipv4.tcp_synack_retries = 4
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5

# 5) Ephemeral ports & TIME-WAIT behavior
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1

# 6) Don’t break modern TCP features
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_mtu_probing = 1

# 7) SYN flood baseline defense (more below)
net.ipv4.tcp_syncookies = 1

Apply with:

sudo sysctl --system
# or
sudo sysctl -p /etc/sysctl.d/99-web-tuning.conf

Then verify:

ss -s
cat /proc/sys/net/ipv4/tcp_congestion_control
cat /proc/sys/net/core/somaxconn

And don’t forget your app layer. For Nginx, bump the backlog on your HTTPS listeners and enable reuseport on multi‑core boxes. On the load balancer layer, I’ve shared ways to keep traffic flowing cleanly in my zero‑downtime HAProxy guide for L4/7 load balancing. It ties in beautifully with these kernel settings.

UDP Buffers That Don’t Panic: DNS, Logs, and HTTP/3/QUIC

Once upon a time, I barely cared about UDP buffers on web nodes. DNS queries were tiny, syslog was tame, and the heavy lifting was TCP. Then QUIC/HTTP/3 rolled in, CDNs started speaking UDP in earnest, and suddenly that part of the stack mattered again. If your edge or reverse proxy handles HTTP/3, or you’re pushing logs/metrics over UDP, you want to make sure those packets aren’t fighting for crumbs.

UDP is not TCP. There’s no handshake, no retransmit logic in the kernel. That means buffers are your shock absorbers. If they’re too small, bursts lead to drops—fast. The good news is the fixes are straightforward: raise the global rmem/wmem ceilings and set sensible UDP‑specific minimums. Paired with a reasonable netdev_max_backlog, this helps your NIC and kernel move bursting flows to user space without losing their lunch.

# UDP buffer tuning (add alongside the TCP block; these core ceilings are
# shared with TCP and supersede the smaller 8M values above)
net.core.rmem_max = 134217728   # 128M ceiling for receive
net.core.wmem_max = 134217728   # 128M ceiling for send

# System-wide defaults (not too big; apps can request more, up to the ceiling)
net.core.rmem_default = 1048576  # 1M
net.core.wmem_default = 1048576  # 1M

# UDP specific mins and memory pressure triplets
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
# udp_mem is measured in memory pages (4 KB on most systems)
# The three values are: no pressure below this, pressure, and hard limit
net.ipv4.udp_mem = 98304 262144 393216

I keep defaults conservative and let applications (like a QUIC‑enabled proxy) request larger buffers with SO_RCVBUF and SO_SNDBUF. You can monitor UDP drops with:

netstat -su | egrep 'packet receive errors|receive buffer errors|send buffer errors'
ss -u -a | head -n 20
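
During a spike I’ll often just leave a watch running on the raw counters; the Udp: lines in /proc/net/snmp include RcvbufErrors and SndbufErrors, so a number that climbs while traffic is hot points straight at undersized buffers:

# Watch UDP error counters refresh every couple of seconds
watch -n 2 "grep -E '^Udp:' /proc/net/snmp"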

If you’re experimenting with HTTP/3 on Nginx, HAProxy, or Caddy, track UDP receive errors during traffic spikes. A small bump in defaults can make a big difference without wasting RAM. And don’t forget that DNS and NTP live here too—if your instance is both resolver and web node, give those daemons a little headroom. For deeper dives into how to keep persistent connections happy at the edge, I shared some timeout and keep‑alive notes in my guide to WebSockets and gRPC behind Cloudflare—the mindset is similar for QUIC.

SYN Flood Defense That Doesn’t Punish Real Users

A SYN flood is that annoying crowd that pretends to line up for coffee but never actually orders. Your server spends time and memory tracking the half‑open handshakes until it can’t accept real customers. The trick is to make these fakes as cheap as possible while keeping legitimate users moving.

Start with kernel defenses

Turn on syncookies, increase the max SYN backlog, and keep retries sane. It’s not heroic, but it works surprisingly well during short spikes and accidental thundering herds.

net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_synack_retries = 4
net.core.somaxconn = 65535

Then make sure your app’s backlog isn’t the weak link. On Nginx, raise the listener backlog and consider enabling reuseport on multi‑core machines so each worker has its own accept queue. The Nginx docs on the listen directive cover both backlog and reuseport parameters.

SYNPROXY when things get serious

When you’re under sustained SYN flood from spoofed addresses, or you need to protect a thin app tier, a kernel‑level SYNPROXY can be a lifesaver. The idea is simple: your firewall completes the SYN/SYN‑ACK/ACK handshake on behalf of your app. Only once the client proves it’s real does the proxy pass the connection upstream. You can do this with iptables or nftables. The nftables docs have a good primer on SYN proxy.

# Example nftables snippet (simplified; adjust to your setup)

table inet filter {
  chain prerouting {
    # SYNs to the web ports skip conntrack so SYNPROXY can answer them
    type filter hook prerouting priority raw;
    tcp dport { 80, 443 } tcp flags syn notrack
  }

  chain input {
    type filter hook input priority 0;
    ct state established,related accept
    iif lo accept

    # Drop SYNs advertising a bogus MSS
    tcp flags syn tcp option maxseg size 0 counter drop

    # Protect ports 80 and 443 with SYNPROXY
    tcp dport { 80, 443 } ct state invalid,untracked synproxy mss 1460 wscale 7 timestamp sack-perm
    ct state invalid drop

    # Allow the validated handshake through to the app
    tcp dport { 80, 443 } accept
  }
}

You can also build this with iptables and the SYNPROXY target. The key is placement and not breaking established flows. Test on a canary, watch your error rates, and keep an eye on CPU—SYNPROXY is efficient, but it’s not free.
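
For reference, the iptables version follows the same shape: the raw table skips conntrack for incoming SYNs and the SYNPROXY target validates them. This is a sketch of the commonly documented pattern; double‑check option names against your iptables-extensions man page:

# SYNs to web ports bypass conntrack
iptables -t raw -A PREROUTING -p tcp --syn -m multiport --dports 80,443 -j CT --notrack

# SYNPROXY answers the handshake on the app's behalf
iptables -A INPUT -p tcp -m multiport --dports 80,443 \
  -m conntrack --ctstate INVALID,UNTRACKED \
  -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460

# Anything still invalid after that gets dropped
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP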

If you’re fronted by a load balancer or DDoS‑capable CDN, let them take the first punch. I’ve had great results pairing kernel hardening with a smart L4/L7 tier; for tips on balancing, health checks, and graceful restarts, here’s how I run zero‑downtime HAProxy in front of busy PHP stacks.

App‑Layer Settings That Play Nice With the Kernel

Kernel tuning makes room. Your app needs to use it. Here’s how I connect the dots for WordPress and Laravel without turning it into a 500‑line checklist.

Nginx: the great traffic translator

I set worker_processes auto; so it matches CPU cores and pair it with worker_connections high enough to cover peak concurrency. On busy sites, I raise the listen … backlog=65535 reuseport; on ports 80/443 to match kernel somaxconn. For TLS handshakes, session reuse and HTTP/2 help a lot—and if you’re curious about balancing speed and compatibility on certificates, I’ve shared an approach for serving dual ECDSA + RSA certificates that keeps old clients happy without slowing down the new ones.

Keep‑alive is a friend. I like a keepalive_requests limit that prevents one connection from living forever, with a keep‑alive timeout that fits your CDN and app patterns. For HTTP/3, I monitor UDP errors while rolling out and bump buffers in small steps.
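
Put together, a busy listener ends up looking roughly like this. It’s a sketch, not a drop‑in config; the numbers are placeholders you’d adapt, and your TLS and vhost settings go where the comment sits:

# nginx.conf (abridged sketch)
worker_processes auto;

events {
    worker_connections 8192;
}

http {
    server {
        # backlog is still capped by net.core.somaxconn; reuseport gives each
        # worker its own accept queue on multi-core boxes
        listen 443 ssl backlog=65535 reuseport;
        listen 80 backlog=65535 reuseport;

        keepalive_timeout  30s;
        keepalive_requests 1000;

        # ssl_certificate / ssl_certificate_key and the rest of your vhost here
    }
}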

PHP‑FPM: don’t starve the kitchen

Most of the time, I run pm = dynamic or pm = static with counts based on CPU and memory, and I make sure the listen.backlog in the pool config isn’t the bottleneck. If you’ve got a reverse proxy tier, give FPM a backlog that matches Nginx and kernel expectations, and make sure your rlimit_files and system nofile ulimit are high enough. Nothing is sadder than a server with room to run that can’t open more sockets.
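
The pool‑level pieces I check are small but easy to forget. A hypothetical www.conf excerpt (socket path and counts are placeholders):

; /etc/php/8.2/fpm/pool.d/www.conf (excerpt)
pm = dynamic
pm.max_children = 60            ; size from real memory-per-worker, not guesswork
listen = /run/php/php-fpm.sock
listen.backlog = 65535          ; keep in line with the nginx backlog and somaxconn
rlimit_files = 65535            ; per-worker fd limit; the system nofile must allow it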

Database: use a pooler and stop playing whack‑a‑mole

WordPress and Laravel apps love to open many short‑lived connections. On MySQL, connection storms can be brutal under flash sales. Using a pooler or proxy that keeps backend connections stable makes a world of difference. I wrote down my favorite patterns for WooCommerce and Laravel in ProxySQL with read/write split and pooling. Kernel tuning keeps the highway clear, but the pooler makes sure you don’t park a truck in every lane.

Load balancer tier: health checks that matter

If you’re running your own L4/L7 tier, keep health checks lightweight but meaningful, and make failover polite. The right checks reduce flapping and save you from false positives when the kernel is working hard. My playbook for HAProxy zero‑downtime upgrades and sticky sessions pairs nicely with the TCP settings here.

Real‑World Debug Stories: What I Watch Under Load

On a busy WooCommerce sale, my checklist is simple and fast. If checkout feels sticky, I look at SYN‑RECV counts and listen queue overflows. If those climb, I bump backlog and confirm Nginx is allowed to use it. Then I watch retransmits. If they spike, I’ll test BBR on one node, compare tail latencies, and watch error budgets for a few hours.
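
The two numbers I pull up first are the half‑open count and the kernel’s own overflow counters (nstat ships with iproute2; netstat -s works too if you prefer):

# Half-open handshakes right now
ss -n state syn-recv | wc -l

# Cumulative accept-queue overflows and drops since boot
nstat -az TcpExtListenOverflows TcpExtListenDrops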

If HTTP/3 is enabled and errors creep up only when traffic goes big, I inspect UDP receive errors and server CPU in softirq. Sometimes the fix is as simple as lifting net.core.rmem_max from 8M to 32M, raising rmem_default a notch, and nudging net.core.netdev_max_backlog. Other times, moving QUIC to a different node helps spread the interrupt load.
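
To see whether one core is doing all the packet work, I glance at the %soft column per CPU and at the per‑CPU backlog drop counter (mpstat comes from the sysstat package; the second column of /proc/net/softnet_stat is drops):

mpstat -P ALL 2 3            # watch %soft; one hot core while others idle is the tell
cat /proc/net/softnet_stat   # hex counters per CPU; column 2 = packets dropped at the backlog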

And when traffic smells malicious? I turn on verbose logging for a few minutes, sample packet captures, and check the geographic fingerprint. If it’s a classic spoofed SYN flood, I’ll enable SYNPROXY on the edge and relax. It’s oddly satisfying to watch a CPU graph go from panic to quiet in seconds.

A Thought on Certificates, Ports, and the “Everything Is Fine” Lie

Once, the team kept insisting, “Everything is fine, we just need more PHP workers.” The truth? TLS handshakes were piling up and the kernel was politely dropping new connections when the listen queue overflowed. Users hammered refresh, creating even more SYNs. We increased somaxconn, matched Nginx backlog, enabled session resumption, and moved to a faster cert combo. Request latency fell without changing a single line of PHP. If you want to squeeze every millisecond out of TLS without breaking old devices, take a look at that piece on serving dual ECDSA and RSA certificates. It’s one of those underappreciated tweaks that makes your users feel the difference.

Automation and Safe Defaults: Make It Boring

The best compliment for a production network stack is “boring.” I aim for configurations that survive Friday night traffic and Sunday maintenance windows without surprises. The way to get there is to capture your tuned sysctl, app backlogs, and ulimits in code. I like to bake them into first‑boot routines and version them right next to app config. If you want a comfortable starting point, I shared how I standardize a new host with cloud‑init + Ansible on first boot. It keeps drift low and rollbacks easy.

For teams deploying multi‑node stacks, putting the load balancer in a controlled pipeline helps you scale changes gradually and avoid brownouts. It pairs nicely with the kernel tuning we’ve covered, and honestly, it saves arguments during incident calls.

Extra Knobs (Use Gently): Fast Open, GRO, and NIC Queues

There are a few more dials you can explore as your traffic grows.

TCP Fast Open can shave a bit off the first request if your reverse proxy supports it. On Linux, enable server/client bits via net.ipv4.tcp_fastopen and turn it on in your edge proxy. Measure carefully; gains vary, and middleboxes can get twitchy.

# Enable server and client Fast Open
net.ipv4.tcp_fastopen = 3
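
On the Nginx side, Fast Open is a per‑listener switch, and the kernel keeps counters you can compare before and after (the queue length of 256 is just an example):

# nginx listener with TFO enabled (config line, shown here for reference):
#   listen 443 ssl fastopen=256 backlog=65535 reuseport;

# Did anyone actually use it?
nstat -az TcpExtTCPFastOpenPassive TcpExtTCPFastOpenActive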

GRO/LRO and IRQ affinity are more about NIC and driver tuning than sysctl, but I’ve seen them matter at very high pps. Pinning interrupts, spreading queues, and making sure RSS is doing its job keep softirq from spiking one core while others nap. If you reach that stage, you’re well into “fun with drivers” territory—do it on a staging node first.
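
Before touching any of that, it’s worth a quick read‑only look at what the NIC is already doing (eth0 is a placeholder for your interface name):

ethtool -k eth0 | grep -E 'offload'    # which offloads (GRO/LRO/GSO/TSO) are on
ethtool -l eth0                        # how many RX/TX queues the NIC exposes
grep eth0 /proc/interrupts             # are those queues' interrupts spread across cores?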

Rollback, Observability, and The Human Bits

Big lesson from the trenches: a good rollback plan is as important as a good tuning plan. Keep a copy of your old sysctl file, push changes to a single node, and roll forward only after watching request latency, error rates, and queue lengths for a while. On peak traffic days, resist the temptation to “just tweak one more thing.” It’s easy to confuse correlation with causation when graphs are dancing.

Make observability your quiet partner. Even simple metrics like accept queue size, SYN‑RECV counts, retransmits, and UDP drops tell a clear story. When something looks off, take a breath, grab a tiny packet capture, and validate your assumptions. The network stack is honest if you ask the right questions.

Wrap‑Up: A Calm, Fast, and Resilient Stack

Alright—let’s land the plane. High‑traffic WordPress and Laravel sites don’t have to feel fragile. With a few focused kernel tweaks, you widen the front door, give sockets room to breathe, and make handshakes cheaper. On the UDP side, you cushion bursts so QUIC, DNS, and logs don’t spill packets at the worst moments. And when the crowd turns rowdy, syncookies and SYNPROXY stand quietly at the door, letting the real customers in and showing the fakes out.

Start small: snapshot your current state, apply a tight sysctl block, match app backlogs, and verify. If you’re curious, test BBR on one node and watch tail latencies. If you’re moving into HTTP/3 country, lift UDP buffers sensibly and keep an eye on receive errors. And please—put it all into automation so the next server gets the same calm defaults. If your stack includes a pooler, a smart load balancer, and sane TLS, you’ll feel the difference on a real sale day. Hope this was helpful! If you try these settings, I’d love to hear how it goes. See you in the next post.

Frequently Asked Questions

Should I switch from CUBIC to BBR for a busy WordPress or Laravel site?

Great question! CUBIC is a solid default and works well for most web traffic. BBR can reduce queueing and improve tail latency on busy or high‑latency paths. I like to enable BBR on one node, compare error rates and 95th percentile latency during real traffic, and then decide. If you don’t see a clear win, staying on CUBIC is perfectly fine.

How high should I set somaxconn and tcp_max_syn_backlog?

Here’s the deal: set them high enough that the listen queue doesn’t overflow during spikes, but pair them with your app’s own backlog. A common pattern is somaxconn at 65535 and tcp_max_syn_backlog around 8192, with Nginx listeners using backlog=65535. Watch SYN‑RECV counts and accept queue overflows under load; if they persist, increase gradually and verify.

SYNPROXY is designed to filter spoofed handshakes and pass real clients through after they complete the initial exchange. When configured correctly at the edge, legit users won’t notice. There’s a small CPU cost, so I turn it on during attacks or keep it on only at the perimeter. Test on a canary and watch handshake error rates to be safe.