Distributed denial-of-service (DDoS) attacks are no longer a problem only for banks and global platforms. In capacity planning calls with our own customers at dchost.com, we routinely see small WordPress sites, local e‑commerce stores and SaaS side‑projects hit by automated floods of traffic. The goal is simple: exhaust your bandwidth, CPU, RAM or connection limits so real visitors cannot load your site. The good news is that you do not need an enterprise security budget to defend yourself. With a well‑designed mix of Cloudflare, smart rate limiting and sensible server tuning, small and medium websites can withstand a surprising amount of malicious traffic.
In this article, we’ll walk through a practical, layered DDoS protection strategy you can actually deploy. We’ll focus on three main pillars: using Cloudflare intelligently, designing effective rate limiting at the edge and on your origin, and hardening your server so it behaves predictably under stress. All examples and recommendations are written from the perspective of how we protect real customer workloads on shared hosting, VPS, dedicated servers and colocation at dchost.com.
Table of Contents
- 1. What DDoS Really Looks Like for Small and Medium Sites
- 2. A Layered DDoS Defense Model for Real‑World Sites
- 3. Using Cloudflare Effectively for DDoS Protection
- 4. Designing Smart Rate Limiting at Edge and Origin
- 5. Server-Side Hardening and Tuning Against DDoS
- 6. Monitoring, Detection and an Incident Playbook
- 7. Hosting Architecture Choices and How dchost.com Can Help
1. What DDoS Really Looks Like for Small and Medium Sites
1.1 Types of DDoS attacks you’re likely to see
DDoS is a broad term. In practice, small and medium websites usually face three main categories:
- Volumetric attacks: The attacker sends huge amounts of traffic (Gbps or more) to saturate your network connection or your provider’s upstream capacity.
- Protocol/transport attacks: SYN floods, UDP floods or malformed packets that consume resources in your TCP/IP stack, firewall or load balancer.
- Application‑layer attacks (Layer 7): Seemingly legitimate HTTP(S) requests to expensive endpoints (search, cart, login, API) designed to exhaust PHP, database and disk I/O.
Most smaller sites are hit by a mix of low‑to‑medium volumetric traffic plus very targeted Layer 7 requests. A few thousand requests per second to a slow PHP page can bring down an under‑tuned VPS as effectively as a giant network flood.
If you want a conceptual refresher on attack types, you can also read our dedicated article explaining what DDoS is and how it works at the network and application layers.
1.2 Constraints specific to small and medium websites
Smaller sites face distinct limitations compared to huge platforms:
- Limited bandwidth and connection limits on shared hosting or entry‑level VPS plans.
- Constrained CPU/RAM, especially if you run a CMS like WordPress, WooCommerce or a heavy PHP framework.
- Single-region hosting without globally distributed infrastructure.
- Minimal on‑call capacity: no 24/7 security team watching graphs.
Because of these constraints, your strategy cannot simply be "add more hardware". Instead, you need to filter and shape malicious traffic as far away from your origin as possible, then tune the origin to fail gracefully if something slips through.
2. A Layered DDoS Defense Model for Real‑World Sites
At dchost.com, we design DDoS protection as a stack of layers that complement each other:
- DNS and edge layer (Cloudflare): hides your origin IP, absorbs large floods, blocks obvious attacks and bots before traffic even touches your server.
- Edge rate limiting: limits how many requests per IP (or per token) can reach your origin for specific endpoints.
- Origin firewall and kernel: iptables/nftables/ufw, connection tracking and TCP tuning to handle surges without collapsing.
- Web server and application tuning: Nginx/Apache/LiteSpeed, PHP‑FPM and database settings that prevent a few heavy requests from starving everything else.
- Monitoring and incident playbook: metrics, logs and predefined actions so your response during an attack is calm and repeatable.
This layered approach is crucial. Cloudflare alone cannot protect a badly tuned origin from every Layer 7 pattern. Conversely, a perfectly tuned server can still be overwhelmed if you let an attacker throw millions of requests per second directly at it. You need both.
3. Using Cloudflare Effectively for DDoS Protection
Cloudflare is one of the most accessible DDoS mitigation tools for small and medium websites, because it sits in front of your existing hosting without any code changes. But configuration matters a lot: an un‑tuned Cloudflare setup can still leak large amounts of malicious traffic to your origin.
For a more detailed configuration walkthrough, you can also check our dedicated guide on Cloudflare security settings, WAF, rate limiting and bot protection for small business sites. Below we’ll focus on the DDoS‑specific pieces.
3.1 Put your site fully behind Cloudflare
- Use proxied DNS (orange cloud): Make sure A/AAAA records for your web host are proxied, not DNS‑only. Otherwise Cloudflare cannot filter traffic.
- Do not leak your origin IP: Avoid publishing your server IP in additional DNS records, test subdomains, or email banners. Once the origin IP is known, attackers can bypass Cloudflare entirely.
- Use Cloudflare-friendly DNS architecture: Our guide on choosing between Cloudflare DNS and hosting DNS explains how to structure nameservers and records safely.
3.2 Enable and tune Cloudflare WAF
The Web Application Firewall (WAF) is your first defense against Layer 7 attacks:
- Turn on Managed Rules for common attack categories (SQL injection, XSS, common CMS exploits).
- Enable CMS‑specific rulesets (e.g. WordPress, Magento) if you run those platforms.
- Add custom rules for sensitive endpoints such as /wp-login.php, /xmlrpc.php, /cart, /checkout and /api/, using conditions such as URI, request method and user agent.
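To make the custom-rule idea concrete, here is a sketch of a rule written in Cloudflare's rules expression language. The paths are illustrative placeholders; adapt them to your own sensitive endpoints, and prefer a challenge action over an outright block while you are still tuning:

```
# Custom WAF rule (sketch), Cloudflare rules expression language.
# Suggested action: Managed Challenge.
(http.request.uri.path in {"/wp-login.php" "/xmlrpc.php"})
or (http.request.uri.path contains "/checkout"
    and http.request.method eq "POST")
```

A challenge here stops unsophisticated bots cold while real customers pass through after a brief, usually invisible, check.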
If you are primarily fighting WordPress bots and credential stuffing, our article on using Cloudflare WAF rules and rate limiting specifically to stop WordPress bots walks through real rule examples.
3.3 Cloudflare rate limiting for expensive endpoints
Cloudflare’s rate limiting should be applied surgically, not globally. Focus on high‑cost actions:
- Login endpoints (/wp-login.php, /user/login, /signin)
- Search and filtering (/search, ?s=, filter parameters)
- Checkout, cart and payment pages
- Public APIs (e.g. /api/v1/)
Good starting points per IP (you can adjust later):
- Login pages: 5–10 requests per minute; block or challenge after that.
- Search endpoints: 20–60 requests per minute; throttle (rate‑limit) further bursts.
- APIs: depends heavily on your app, but often 60–120 requests per minute per IP is enough for human‑driven traffic.
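Put together, a login-protection rule built from the starting points above might look like the following sketch. The field labels paraphrase Cloudflare's rate-limiting dashboard rather than quoting it exactly, and the numbers are the conservative defaults suggested above:

```
# Cloudflare rate limiting rule (sketch of the dashboard fields).
Expression:      (http.request.uri.path eq "/wp-login.php"
                  and http.request.method eq "POST")
Characteristic:  IP
Threshold:       10 requests per 60 seconds
Action:          Block (or Managed Challenge) for 10 minutes
```

Counting only POST requests keeps the rule from penalizing users who merely load the login page, while still throttling credential-stuffing attempts.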
When tuning these rules, watch Cloudflare analytics to spot false positives. Be especially careful with shared office IPs, VPNs and mobile carrier NATs where many legitimate users share one public IP.
3.4 Bot management and “challenge” modes
Cloudflare’s bot scoring is useful against low‑effort DDoS bots:
- Challenge high‑risk bots (JS challenge/captcha) instead of immediately blocking. Challenges are lighter than serving a full PHP page but still filter out unsophisticated tools.
- Raise security level temporarily during an active attack. This increases the likelihood that suspicious visitors are challenged.
- Use country‑based rules if your business is very local and you consistently see attacks from regions you never sell to.
3.5 Protect the origin with Authenticated Origin Pulls and mTLS
Another important step is ensuring only Cloudflare can speak to your origin:
- Restrict your server firewall to accept HTTP/HTTPS only from Cloudflare’s IP ranges.
- Use Authenticated Origin Pulls or mutual TLS (mTLS) so the origin verifies that incoming connections actually belong to Cloudflare.
We explained this model in detail in our article about protecting your origin with Cloudflare Authenticated Origin Pulls and mTLS. This is especially powerful against attackers who somehow discover your raw server IP.
4. Designing Smart Rate Limiting at Edge and Origin
Rate limiting is about fairness: preventing a single client (or small group of bots) from consuming your entire capacity. The trick is to apply limits as close to the attacker as possible while being gentle with legitimate bursty traffic.
4.1 Edge rate limiting vs origin rate limiting
You can (and usually should) combine:
- Edge rate limiting (Cloudflare): stops floods out on the internet, saves your bandwidth and keeps origin connection counts low.
- Origin rate limiting (web server/firewall/application): provides a second line of defense if malicious traffic gets through, and gives you more granular control using internal signals.
4.2 Web server level rate limiting (Nginx, Apache, LiteSpeed)
At the web server layer you can rate limit per IP, per URL or even per cookie or header. Common strategies:
- Connection limiting: cap the number of simultaneous connections per IP.
- Request rate limiting: restrict how many requests an IP can make per second/minute.
- Burst handling: allow short spikes beyond the limit, but queue or drop if they continue.
Examples of policies that work well for small sites:
- Maximum 10–20 concurrent connections per IP.
- Average 5–10 requests per second per IP on dynamic endpoints.
- More relaxed limits, or none at all, on static assets (CSS, JS, images), because they are cheap to serve and usually aggressively cached.
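The three policies above translate directly into Nginx's limit_req and limit_conn modules. The zone names, rates and paths below are illustrative starting points for a small site, not tested production values:

```nginx
# Nginx rate and connection limiting (sketch).
http {
    # 10 MB zone tracks roughly 160k client IPs; 10 req/s average per IP.
    limit_req_zone  $binary_remote_addr zone=dynamic:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        # Cap simultaneous connections per client IP.
        limit_conn perip 20;

        location ~ \.php$ {
            # Allow short bursts of up to 20 extra requests, then 429.
            limit_req zone=dynamic burst=20 nodelay;
            limit_req_status 429;
            # ... fastcgi_pass and the rest of your PHP config ...
        }

        location /assets/ {
            # Static files are cheap: no limits, long cache lifetime.
            expires 7d;
        }
    }
}
```

Returning 429 (rather than the default 503) makes rate-limited requests easy to distinguish from genuine server errors in your logs and monitoring.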
If you are comfortable at the firewall level, you can implement advanced patterns using nftables. Our cookbook on nftables‑based rate limiting and IPv6 rules for VPS servers shows how to combine connection caps and rate filters directly in the kernel.
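As a taste of that approach, the fragment below uses an nftables dynamic set to drop sources that open new HTTP(S) connections too quickly. The threshold is an illustrative starting point, the set only covers IPv4, and the chain policy is deliberately left at accept so an experiment cannot lock you out:

```
# nftables sketch: per-source new-connection rate limiting in the kernel.
table inet filter {
    set flood {
        type ipv4_addr
        flags dynamic
    }
    chain input {
        type filter hook input priority 0; policy accept;
        ct state established,related accept
        # Track each source's new-connection rate; drop above 30/second.
        tcp dport { 80, 443 } ct state new \
            add @flood { ip saddr limit rate over 30/second } drop
    }
}
```

Because this runs in the kernel before Nginx or PHP ever see a packet, it is far cheaper per dropped connection than any application-level check.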
4.3 Application-level rate limiting
Some attacks are best handled inside the application itself:
- Login attempts per username or email, not just per IP.
- API tokens or user IDs with their own per‑minute limits.
- Per‑session constraints so one compromised session cannot hammer your backend indefinitely.
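The first bullet above — limiting login attempts per username rather than per IP — can be sketched in a few lines of Python. This is a minimal in-memory sliding-window limiter for illustration; in production you would back the counters with Redis or your framework's cache so they survive restarts and work across multiple workers:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Sliding-window login limiter keyed by username, not client IP."""

    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # username -> attempt timestamps

    def allow(self, username, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[username]
        # Discard attempts that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # caller should respond with HTTP 429
        q.append(now)
        return True

limiter = LoginRateLimiter(max_attempts=5, window_seconds=60)
# Seven attempts against the same account, one second apart.
results = [limiter.allow("alice", now=i) for i in range(7)]
print(results)  # first 5 allowed, attempts 6 and 7 rejected
```

Keying on the username means a botnet rotating through thousands of IPs still cannot brute-force a single account faster than the limit allows.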
For APIs and microservices, we have a separate article on rate limiting strategies with Nginx, Cloudflare and Redis. Many of those ideas can be adapted to smaller monolithic web apps as well.
4.4 Avoiding false positives when limiting rates
The biggest fear with rate limiting is blocking real customers. Some practical safeguards:
- Start in log/monitor mode if your platform supports it, so you can see who would be blocked before enforcing.
- Use higher limits for HTML pages than for API calls, because browsers will naturally open parallel connections and request multiple assets.
- Whitelist known monitoring IPs (uptime checkers, payment gateways) so health checks do not trigger rules.
- Implement separate rules for authenticated users, who may legitimately make more requests than anonymous visitors.
5. Server-Side Hardening and Tuning Against DDoS
Even with Cloudflare in front and good rate limiting, some abnormal traffic will still reach your origin. Your operating system, firewall and web stack should be tuned so they degrade gracefully instead of collapsing.
5.1 Kernel and network stack tuning
Linux has many sysctl parameters that influence how it handles large numbers of connections, SYN packets and buffer usage. Key areas to review:
- Connection tracking limits: ensure net.netfilter.nf_conntrack_max is set high enough for expected traffic, but not so high that you run out of RAM during a flood.
- SYN flood protection: tune net.ipv4.tcp_syncookies and related settings to mitigate SYN floods without excessive resource use.
- Backlog queues: parameters like net.core.somaxconn and net.ipv4.tcp_max_syn_backlog control how many pending connections your server can hold.
- Buffer sizes: adjust TCP buffer settings so long‑lived connections do not exhaust memory.
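As a reference point, the parameters above might be collected into a sysctl drop-in like the one below. These are hedged starting values for a mid-sized VPS, not universal recommendations; verify them against your RAM and traffic profile, then apply with sysctl --system:

```
# /etc/sysctl.d/90-ddos.conf (sketch) - starting points, tune per server.
# Enable SYN cookies so SYN floods cannot exhaust the half-open queue.
net.ipv4.tcp_syncookies = 1
# Larger pending-connection queues to ride out connection bursts.
net.ipv4.tcp_max_syn_backlog = 8192
net.core.somaxconn = 4096
# Conntrack table: each entry costs RAM, so size to your memory budget.
net.netfilter.nf_conntrack_max = 262144
```

Note that nf_conntrack_max only exists once the conntrack module is loaded, which is normally the case on any server running a stateful firewall.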
We have a detailed, real‑world walkthrough of these parameters in our article on Linux TCP tuning for high‑traffic WordPress and Laravel. Even if your site is smaller, applying the same principles at a smaller scale will help during bursts.
5.2 Firewall configuration and basic DDoS protections
Your firewall is the last network gate before the web server. On a VPS or dedicated server, you should at minimum:
- Allow only required ports (80, 443, SSH on a non‑standard port, plus any needed service ports).
- Limit new connection rates per IP at the firewall level for HTTP and HTTPS.
- Drop malformed packets and obvious scans (e.g. invalid TCP flag combinations).
- Consider Fail2ban or similar tools to temporarily ban IPs that trigger many errors or authentication failures.
If you prefer a checklist, our guides on VPS security hardening (including Fail2ban and SSH best practices) and firewall configuration with ufw, firewalld and iptables give you step‑by‑step starting points.
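Fail2ban pairs nicely with web-server rate limiting: when Nginx's limit_req starts rejecting an IP, Fail2ban can escalate that signal into a temporary firewall ban. The sketch below uses the nginx-limit-req filter that ships with Fail2ban; the log path and thresholds are illustrative and should match your own setup:

```ini
; /etc/fail2ban/jail.local (sketch) - ban IPs that keep hitting limit_req.
[nginx-limit-req]
enabled  = true
filter   = nginx-limit-req
logpath  = /var/log/nginx/error.log
; Ban after 30 rate-limit hits within 10 minutes, for 1 hour.
findtime = 600
maxretry = 30
bantime  = 3600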
5.3 Web server and PHP-FPM tuning
DDoS attackers love to exploit weak spots in your PHP and database stack. Some tuning basics:
- Limit PHP‑FPM workers (
pm.max_children) so they fit in RAM; otherwise, a spike will cause swapping and the entire server slows down. - Set sane timeouts for fastcgi/proxy connections. You do not want stuck requests tying up resources for minutes.
- Enable caching (FastCGI cache, LiteSpeed cache, plugin‑level cache) so many requests are served without hitting PHP at all.
- Graceful resource limits (max execution time, memory limits) to prevent a single heavy request from consuming everything.
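In PHP-FPM pool terms, the bullets above look roughly like the fragment below. The worker count assumes a server where each PHP worker uses around 60–80 MB; measure your own workers (for example with ps) before copying any of these numbers:

```ini
; PHP-FPM pool sketch (e.g. www.conf) - sizes are assumptions, measure first.
pm = dynamic
; max_children * per-worker RAM must fit comfortably in available memory.
pm.max_children = 25
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 8
; Recycle workers periodically to contain slow memory leaks under load.
pm.max_requests = 500
; Kill requests stuck longer than 30s instead of holding a worker hostage.
request_terminate_timeout = 30s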
We have covered PHP‑FPM and OPcache tuning in several articles, for example our deep dives on PHP‑FPM settings for WordPress and WooCommerce and on OPcache configuration. Applying those same optimizations makes DDoS traffic much less effective at exhausting your CPU and RAM.
5.4 Logging and retention during attacks
Heavy DDoS traffic can also fill your disk via logs. Basic precautions:
- Use logrotate aggressively with compression and size‑based rotation for access and error logs.
- Consider sampling logs during extreme attacks if disk I/O becomes a bottleneck.
- Forward critical logs (e.g. firewall drops, HTTP 5xx) to external log storage if you need long‑term retention.
This ensures your server does not die with a “no space left on device” error in the middle of an attack.
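A logrotate policy implementing the first bullet could look like this sketch. The maxsize directive makes rotation kick in between the daily runs when an attack inflates the logs; the path and numbers are illustrative:

```
# /etc/logrotate.d/nginx-ddos (sketch) - size-triggered rotation for floods.
/var/log/nginx/*.log {
    daily
    maxsize 200M
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # Ask Nginx to reopen its log files after rotation.
        [ -f /run/nginx.pid ] && kill -USR1 $(cat /run/nginx.pid)
    endscript
}
```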
6. Monitoring, Detection and an Incident Playbook
Protection is only half the story; you also need to notice attacks quickly and know what to do when they happen. Even small teams can set up lightweight monitoring that makes a big difference.
6.1 Metrics to watch
At minimum, keep an eye on:
- Requests per second at the edge (Cloudflare analytics) and at the origin (web server metrics).
- CPU, RAM and load average on the server.
- Network bandwidth in/out on the interface.
- HTTP status code distribution: spikes in 5xx or 429 often accompany attacks or misconfigured rate limits.
Tools like Netdata, Prometheus, Grafana or even simpler dashboards can help. We have a step‑by‑step guide on setting up VPS monitoring and alerts with Prometheus, Grafana and Uptime Kuma if you want a repeatable setup.
6.2 Basic DDoS incident playbook
During an attack, you do not want to improvise. A simple, written checklist already helps a lot:
- Confirm it is a DDoS: check Cloudflare and origin metrics for sudden RPS/bandwidth spikes or abnormal patterns.
- Identify target endpoints: is traffic hitting /wp-login.php, /search, /api/, or random URLs?
- Tighten Cloudflare rules: raise the security level, enable more aggressive WAF rules, and add or strengthen rate limits and country/IP rules if appropriate.
- Scale down non‑essential workloads on the same server (background jobs, heavy cron tasks) to free CPU and RAM.
- Adjust origin rate limits and timeouts to protect the database and PHP‑FPM.
- Log key details (time, IP ranges, user agents, URIs) for later analysis and longer‑term rule improvements.
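For the "identify target endpoints" step, a couple of shell one-liners over the access log go a long way. The example below builds a tiny synthetic log so it is runnable as-is; during a real incident, point the same pipelines at your actual access log (for example /var/log/nginx/access.log):

```shell
# Synthetic combined-format-ish log so the pipelines below are runnable.
printf '%s\n' \
  '203.0.113.7 - - [01/Jan/2025:00:00:01] "POST /wp-login.php HTTP/1.1" 200' \
  '203.0.113.7 - - [01/Jan/2025:00:00:02] "POST /wp-login.php HTTP/1.1" 200' \
  '198.51.100.9 - - [01/Jan/2025:00:00:03] "GET / HTTP/1.1" 200' \
  > /tmp/access.sample.log

# Top client IPs by request count.
awk '{print $1}' /tmp/access.sample.log | sort | uniq -c | sort -rn | head -5

# Top requested URIs (second space-separated token inside the quoted request).
awk -F'"' '{split($2, r, " "); print r[2]}' /tmp/access.sample.log \
  | sort | uniq -c | sort -rn | head -5
```

The output immediately tells you whether you are facing a focused Layer 7 attack on one endpoint (tighten a targeted rule) or a spray across random URLs (raise the global security level instead).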
After the incident, review what worked, what caused false positives and what can be automated for next time. Over a few iterations, your incident playbook becomes much smoother.
7. Hosting Architecture Choices and How dchost.com Can Help
No DDoS strategy is complete without a realistic look at your hosting architecture and capacity. You do not need a giant cluster for every project, but some patterns significantly improve resilience:
- Right‑sized VPS or dedicated server: enough CPU/RAM to handle your normal peaks with margin, so small attacks do not instantly push you to 100% usage.
- Separation of concerns: for higher‑traffic sites, consider splitting web and database servers or at least isolating background workers.
- Global DNS/CDN layer in front of your origin (e.g. Cloudflare) so large volumetric attacks are absorbed away from your server.
- Backup and disaster recovery: regular backups, tested restores and clear RPO/RTO targets so even in the worst case, you can rebuild.
At dchost.com, we support these patterns across shared hosting, VPS, dedicated servers and colocation. Our team can help you choose a plan with sufficient bandwidth and resources, configure Cloudflare DNS/proxy on top, and apply the server‑side tuning we use internally for high‑traffic customers.
If you already host with us and want to harden an existing site against DDoS, a practical sequence is:
- Enable Cloudflare in front of your domain and proxy all public web records.
- Apply our Cloudflare WAF and rate limiting recommendations from the articles linked above.
- Implement kernel, firewall and web server tuning based on our Linux TCP and firewall guides.
- Set up basic monitoring and an incident checklist so you are not surprised by the next traffic spike.
You do not have to do everything at once. Each step you implement – edge protection, smarter rate limits, better server tuning – makes DDoS traffic less effective and keeps your real customers online. If you need help choosing the right hosting plan or reviewing your current stack from a DDoS resilience perspective, our team at dchost.com is ready to work through it with you.
