Technology

How to Load Test Your Hosting Before Traffic Spikes with k6, JMeter and Locust

Teams usually start thinking about performance when a big launch, campaign or seasonal peak appears on the roadmap. At that point, the key question is simple: can our current hosting handle the expected traffic, and what breaks first if it cannot? The most reliable way to answer this is to run structured load tests against your site or API before the traffic spike happens. In this article, we will walk through a practical process we use at dchost.com to test real-world workloads using three popular open-source tools: k6, Apache JMeter and Locust.

We will focus on how to design realistic scenarios, how to prepare your hosting environment, and how to interpret the results so you can make concrete decisions: scale up your VPS, introduce a cache layer, change PHP-FPM settings, or move to a dedicated server. Whether you run WordPress, Laravel, Node.js or a custom stack, the same principles apply. By the end, you will have a reusable blueprint you can apply on any dchost.com hosting plan, from shared to VPS, dedicated or colocation.

Why Load Testing Your Hosting Before Traffic Spikes Matters

Load testing is not about achieving a perfect benchmark score; it is about reducing uncertainty. Before a campaign goes live, you want to know:

  • How many concurrent users or requests per second your hosting can serve with acceptable response times
  • Where the first bottleneck appears: CPU, RAM, disk I/O, database, PHP workers, or external API calls
  • How your application behaves when limits are reached: graceful degradation vs. 500 errors and timeouts

We covered capacity planning from a hosting perspective in detail in our hosting scaling checklist for traffic spikes and big campaigns. Load testing is the hands-on counterpart of that planning work: instead of only estimating, you simulate the coming traffic as closely as possible.

It helps to distinguish a few test types:

  • Load test: Gradually increase traffic up to an expected peak (for example, 200 logged-in users) and observe metrics.
  • Stress test: Push the system beyond its expected peak to see how it fails and how it recovers.
  • Soak (endurance) test: Keep a realistic constant load for hours to uncover memory leaks, connection leaks or gradual resource bloat.

For most websites and SaaS apps preparing for a spike, you will primarily run a load test plus a short stress test, and optionally a soak test if you suspect memory issues.

Planning a Realistic Load Test Scenario

The quality of your load test depends more on your scenario design than on the tool. Before opening k6, JMeter or Locust, answer these questions.

1. Define clear performance goals

Start with a small set of measurable targets such as:

  • Target concurrency: e.g. 150 concurrent users on the store during campaign peak
  • Latency budget: e.g. 95% of requests < 800 ms; 99% < 1.5 s
  • Error budget: e.g. HTTP 5xx < 0.5% and timeouts < 1%

Align these goals with your business and SEO expectations. For example, our article on how Core Web Vitals relate to hosting explains why keeping server response times under control is critical for LCP and ranking.

2. Estimate traffic and concurrency

If you do not have historical data, you can still create reasonable estimates. We recommend using the approach described in our guide to estimating traffic and bandwidth on shared hosting and VPS. In simplified form:

  • Estimate total daily visitors during the spike (for example, campaign forecast)
  • Identify the busiest hour (often 15–25% of daily visits)
  • Convert that into requests per second (RPS) and concurrent sessions using your average pageviews per session

For instance, if you expect 10,000 visitors on the busiest day, 25% of them in the busiest hour (2,500), and about 4 pageviews per visitor, you get roughly 10,000 pageviews in that hour, which is about 2.8 pageviews per second. Add API requests, static assets and cache misses to build a more complete picture.
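The arithmetic above can be sketched in a few lines of Python. The 30-second average time-on-page used for the rough concurrency estimate is an illustrative assumption; replace all inputs with your own forecast and analytics:

```python
# Rough traffic estimation sketch. All inputs are illustrative
# assumptions; substitute your own campaign forecast.

def estimate_load(daily_visitors, busiest_hour_share, pageviews_per_visit,
                  avg_page_time_s):
    """Convert a visitor forecast into pageviews/s and rough concurrency."""
    busiest_hour_visitors = daily_visitors * busiest_hour_share
    pageviews_per_hour = busiest_hour_visitors * pageviews_per_visit
    pageviews_per_second = pageviews_per_hour / 3600
    # Little's-law-style estimate: concurrency ~ arrival rate x time on page
    concurrent_users = pageviews_per_second * avg_page_time_s
    return pageviews_per_second, concurrent_users

pps, conc = estimate_load(10_000, 0.25, 4, avg_page_time_s=30)
# With the example numbers from the text: ~2.8 pageviews/s, ~83 concurrent users
```

Remember that each pageview usually fans out into further asset and API requests, so treat the result as a floor, not a ceiling.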

3. Model user behaviour, not just single URLs

Load tests that hit a single URL (like the home page) are easy to run but often misleading. Real users:

  • Navigate between multiple pages
  • Perform actions like search, add-to-cart, login, or checkout
  • Sometimes make invalid requests or trigger edge cases

Try to model a few key user journeys with approximate probabilities, such as:

  • 50% browse catalog only
  • 30% search + view product detail
  • 15% add to cart but do not complete checkout
  • 5% complete checkout

Tools like JMeter and Locust are particularly good at modelling such flows; k6 can also express them through JavaScript functions and scenarios.
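To illustrate how a weighted journey mix works, here is a small Python sketch using the example percentages above (the journey names are illustrative labels, not part of any tool). Locust's @task weights and k6's scenario options apply the same weighted-selection idea for you:

```python
import random

# Illustrative journey mix from the text; weights sum to 100.
JOURNEYS = [
    ("browse_catalog", 50),
    ("search_and_view", 30),
    ("add_to_cart", 15),
    ("checkout", 5),
]

def pick_journey(rng=random):
    """Pick one user journey according to its weight."""
    names = [name for name, _ in JOURNEYS]
    weights = [weight for _, weight in JOURNEYS]
    return rng.choices(names, weights=weights, k=1)[0]

# Over many simulated users the mix approximates the weights.
counts = {name: 0 for name, _ in JOURNEYS}
for _ in range(10_000):
    counts[pick_journey()] += 1
```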

4. Choose the right environment

Whenever possible, run heavy tests against a staging or pre-production environment that is:

  • On the same hosting type and similar specs as production (same dchost.com VPS size, same PHP/MySQL versions)
  • Using a cloned database (with anonymised user data if needed)
  • Behind the same proxies, WAF, CDN and caching rules as production

If you must test against production, do it in off-peak hours, with lower intensity, and communicate with stakeholders in advance. Also check your CDN and caching strategy first, so you do not unintentionally DoS your own origin with fully uncached traffic.

Preparing Your Hosting and Observability Stack

Load test results are only useful if you can see what is happening inside the server. Before starting k6, JMeter or Locust, prepare your monitoring and logging.

1. Baseline monitoring on the VPS or server

At a minimum, you should watch:

  • CPU usage and steal time
  • RAM usage and swap
  • Disk I/O wait and throughput
  • Network throughput and errors
  • Database CPU, slow queries and locks

We showed how to set this up in detail in our guide to monitoring VPS resources with htop, iotop, Netdata and Prometheus. Even a quick Netdata dashboard or a Prometheus + Grafana setup will make your load tests much more informative.

2. Application and web server logs

Enable and tail:

  • Web server access/error logs (Nginx or Apache)
  • Application logs (Laravel, WordPress debug logs, Node.js logs, etc.)
  • Database slow query logs (MySQL, MariaDB or PostgreSQL)

During the test, watch for spikes in 4xx/5xx, upstream timeouts, or slow queries. Our article on reading web server logs to diagnose 4xx–5xx errors is a useful companion while you analyse your test runs.
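As a rough illustration of watching for error spikes, a few lines of Python can count status classes in an access log excerpt. The regex assumes a combined log format and only pulls out the status code; adjust it to your own log format:

```python
import re
from collections import Counter

# Minimal sketch: count status classes (2xx/4xx/5xx) in a combined-format
# access log. The regex grabs the 3-digit status after the quoted request.
STATUS_RE = re.compile(r'"\s(\d{3})\s')

def status_classes(lines):
    counts = Counter()
    for line in lines:
        match = STATUS_RE.search(line)
        if match:
            counts[match.group(1)[0] + "xx"] += 1
    return counts

sample = [
    '1.2.3.4 - - [01/Jan/2025:10:00:00 +0000] "GET / HTTP/1.1" 200 512 "-" "k6"',
    '1.2.3.4 - - [01/Jan/2025:10:00:01 +0000] "GET /cart HTTP/1.1" 502 0 "-" "k6"',
]
status_classes(sample)  # counts one 2xx and one 5xx response
```

Running a counter like this on a window of log lines during the test makes it easy to spot the moment 5xx responses start climbing.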

3. Align infrastructure with realistic settings

Before you test, configure your stack to match how you intend to run in production:

  • Set reasonable pm.max_children and related PHP-FPM settings (we have a separate guide for PHP-FPM tuning for WordPress and WooCommerce)
  • Ensure your object cache (Redis/Memcached) is enabled if you plan to use it
  • Enable CDN or reverse proxy caching if it will be active in the real event
  • Disable heavy background jobs or backups that might distort test results

The goal is not to cheat, but to reflect the realistic production architecture you will rely on during the spike.
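For orientation, a PHP-FPM pool fragment might look like the following. The numbers are placeholders, not recommendations: the right pm.max_children value depends on your available RAM divided by your measured PHP process size, as covered in our PHP-FPM tuning guide.

```ini
; Illustrative PHP-FPM pool settings (e.g. www.conf).
; Values are placeholders; size pm.max_children from
; available RAM / measured PHP process size.
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
pm.max_requests = 500
```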

Load Testing with k6: Modern, Scriptable and CI-Friendly

k6 is a modern load testing tool that uses JavaScript for scripting and is very comfortable for developers. It is ideal for HTTP APIs, microservices and web apps where you want tests that fit nicely into your CI/CD pipeline.

1. Installing k6

On a Linux-based test runner (for example, a dedicated VPS used as a load generator), you can usually install k6 from your package manager or via a binary download. Refer to the official documentation for your distribution. We recommend running k6 from a separate server, not from the VPS that hosts your site, so load generation does not compete with your application for CPU, RAM and bandwidth.

2. Writing a basic k6 script

Here is a minimal k6 script that hits your home page and checks that it returns an HTTP 200 status:

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  vus: 50,           // virtual users
  duration: '2m',    // test duration
  thresholds: {
    http_req_duration: ['p(95)<800'],   // 95% of requests < 800 ms
    http_req_failed: ['rate<0.01'],     // < 1% failed requests
  },
};

export default function () {
  let res = http.get('https://example.yourdomain.com/');
  check(res, {
    'status is 200': (r) => r.status === 200,
  });
  sleep(1);
}

This covers a simple smoke test; save it as, say, script.js and execute it with k6 run script.js. To simulate a ramp-up closer to a real campaign, you can use the stages option:

export let options = {
  stages: [
    { duration: '2m', target: 50 },   // ramp up to 50 VUs
    { duration: '5m', target: 50 },   // stay at 50 VUs
    { duration: '2m', target: 100 },  // ramp up to 100 VUs
    { duration: '5m', target: 100 },  // stay at 100 VUs
    { duration: '2m', target: 0 },    // ramp down
  ],
};

3. Modelling user journeys in k6

You can build more realistic flows with functions and randomisation:

import http from 'k6/http';
import { sleep } from 'k6';

export default function () {
  // Visit home page
  http.get('https://example.yourdomain.com/');
  sleep(1);

  // Browse a category
  http.get('https://example.yourdomain.com/category/shoes');
  sleep(1);

  // View a product
  http.get('https://example.yourdomain.com/product/sneaker-123');
  sleep(1);
}

You can also parameterise URLs, logins and payloads from CSV/JSON files and add checks for specific HTML elements or JSON fields. k6’s thresholds feature is particularly useful for enforcing your performance budgets during CI: a merge can fail if latency exceeds your targets.

Load Testing with Apache JMeter: GUI and Protocol Flexibility

Apache JMeter is one of the oldest and most flexible load testing tools. It supports many protocols in addition to HTTP: JDBC, FTP, SMTP and more. For hosting-related scenarios, it is especially useful when you need:

  • Complex multi-step web flows with logins, cookies and CSRF tokens
  • Testing backend services like database queries or message queues (with care)
  • Reusable test plans maintained by QA engineers through a GUI

1. Creating a basic HTTP test plan

The typical JMeter workflow for a website or API:

  1. Create a Test Plan and add a Thread Group (this defines concurrent users and ramp-up time).
  2. Add one or more HTTP Request samplers for each step in your user journey.
  3. Add Config Elements such as HTTP Header Manager (for user agents, auth tokens) and Cookie Manager.
  4. Add Listeners like Summary Report, Aggregate Report and Graph Results to collect metrics.
  5. Set the number of threads (users), ramp-up period and loop count.

JMeter’s GUI is helpful for designing the scenario. For actual high-load runs, save the plan and run it in non-GUI (CLI) mode from a separate VPS to avoid overloading your workstation.
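A non-GUI run might look like this; the file names are placeholders, and the -e -o flags additionally generate an HTML report after the run:

```shell
# Run a saved test plan in non-GUI (CLI) mode from your load generator.
# -n: non-GUI mode, -t: test plan, -l: results file,
# -e -o: generate an HTML dashboard report into the given folder.
jmeter -n -t campaign-plan.jmx -l results.jtl -e -o report/
```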

2. Correlation and parameterisation

Realistic load tests often need to capture tokens from responses (like CSRF or session IDs) and reuse them in subsequent requests. JMeter supports this through Post-Processors (like Regular Expression Extractor or JSON Extractor) and Variables. For example, you can:

  • Send a login request and extract a token field from JSON
  • Store it in a JMeter variable
  • Use that variable in the Authorization header for the next requests

This makes JMeter powerful for simulating authenticated user flows, admin panels or API clients.

Load Testing with Locust: Pythonic User Behaviour

Locust is a Python-based load testing framework that describes user behaviour as Python classes. Many teams like it because it feels like writing regular application code instead of configuration-heavy test plans.

1. Basic Locustfile example

A minimal locustfile.py for a browsing scenario could look like this:

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 3)

    @task(3)
    def browse_home(self):
        self.client.get('/')

    @task(2)
    def browse_category(self):
        self.client.get('/category/shoes')

    @task(1)
    def view_product(self):
        self.client.get('/product/sneaker-123')

Here, tasks have different weights (3:2:1), modelling probabilities. The wait_time function defines how long a simulated user waits between actions, which affects concurrency and realism.

2. Running Locust

After installing Locust (typically with pip install locust), you run:

locust -f locustfile.py

Locust starts a web UI (by default on port 8089) where you can configure:

  • Number of users to simulate
  • Spawn rate (users per second)
  • Target host (for example, your staging URL)

You can also run Locust in headless mode for automated runs and in a distributed mode with multiple worker processes across several VPS instances when you need to generate very high loads.
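A headless run with explicit parameters might look like this; the user counts, duration and host URL are placeholders to adapt to your own targets:

```shell
# Headless Locust run: 100 simulated users, spawned at 10/s, for 10 minutes.
locust -f locustfile.py --headless \
  --users 100 --spawn-rate 10 --run-time 10m \
  --host https://staging.yourdomain.com
```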

Interpreting Results and Turning Them Into Hosting Actions

After running your tests, you end up with a lot of numbers: response times, percentiles, error rates, throughput. The key is to connect these metrics with what you saw on the server side and then adjust your hosting or configuration accordingly.

1. Key metrics from k6, JMeter and Locust

Across all three tools, watch for:

  • Throughput: requests per second (RPS) or transactions per second
  • Latency: average, median, p90, p95, p99 response times
  • Error rate: HTTP 4xx/5xx, timeouts, connection errors
  • Concurrency: number of active users or requests in flight

Overlay these with your server metrics (CPU, RAM, I/O, DB load). For example:

  • If CPU hits 100% while RPS stalls and latency spikes, you are CPU-bound.
  • If CPU is moderate but I/O wait is high, especially on HDD or slow SSD, the disk is the bottleneck.
  • If MySQL slow queries appear, your database or queries are the limit.
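If you export raw latency samples from any of the three tools, a short Python sketch can compute the percentiles and error rate discussed above (the function and field names here are our own, not part of any tool's API):

```python
import statistics

def summarize(latencies_ms, errors, total):
    """Summarise a run: latency percentiles plus error rate.

    latencies_ms: raw response times in milliseconds.
    errors/total: failed request count and total request count.
    """
    latencies = sorted(latencies_ms)
    # quantiles with n=100 yields 99 cut points; index 94 is p95, 98 is p99.
    q = statistics.quantiles(latencies, n=100)
    return {
        "p50_ms": statistics.median(latencies),
        "p95_ms": q[94],
        "p99_ms": q[98],
        "error_rate": errors / total,
    }

run = summarize([120, 180, 200, 240, 310, 450, 600, 900], errors=2, total=800)
```

Comparing such summaries before and after each tuning change keeps your analysis consistent across tools.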

2. Common bottlenecks and fixes on hosting

From real-world projects on dchost.com infrastructure, here are typical patterns we see and how they translate into actions:

  • PHP-FPM worker exhaustion: Many pending PHP requests, long queues, high CPU.
    • Action: Adjust pm.max_children, pm.start_servers and related values; or upgrade to a VPS with more vCPU.
  • Database saturation: High CPU in MySQL/PostgreSQL, slow queries.
    • Actions:
      • Optimise indices and queries (our guides on MySQL indexing for WooCommerce and on replication can help).
      • Move the database to a separate VPS or a higher-tier plan if CPU is consistently saturated.
  • Disk I/O bottlenecks: High I/O wait, slow writes, especially when logs or backups run during peaks.
    • Actions:
      • Move to faster storage (for example, NVMe-based VPS or dedicated server).
      • Optimise log rotation and background jobs so they do not coincide with spikes.
  • Network or reverse proxy limits: High connection counts, timeouts at the proxy, not at the app.
    • Actions:
      • Tune Nginx worker connections, timeouts and buffering.
      • Use microcaching or full-page caching to reduce dynamic hits.

When you decide that you truly need more resources rather than configuration changes, you can upgrade within the dchost.com portfolio: larger VPS plans for CPU/RAM, dedicated servers for consistent high workload, or colocation if you operate your own hardware.

3. Validate improvements with follow-up tests

Each change you make—whether it is increasing PHP workers, adding Redis, or tuning MySQL—should be followed by a smaller rerun of your load test at the same scale. Compare:

  • Before vs. after latency percentiles
  • Before vs. after CPU/RAM/I/O usage
  • Error rates and timeouts

This iterative approach turns load testing into a feedback loop rather than a one-off exercise.

A Reusable Step-by-Step Blueprint for Load Testing Your Hosting

To make this practical, here is a blueprint you can apply to almost any project hosted on dchost.com, whether on shared hosting, VPS, dedicated or colocation.

  1. Clarify the event and goals
    • Define expected traffic (visitors/hour, RPS, concurrent users) and performance budgets.
    • Write them down so you can verify them later.
  2. Prepare a realistic staging environment
    • Clone production to a staging site on the same type of hosting.
    • Sync the database (anonymised if needed) and key configs.
  3. Set up monitoring and logging
    • Ensure you have CPU, RAM, disk, network and DB metrics visible.
    • Confirm access/error logs are enabled and rotated correctly.
  4. Choose and configure your tool
    • Use k6 for scriptable HTTP/API tests and CI integration.
    • Use JMeter if you want GUI-based complex flows or diverse protocols.
    • Use Locust if your team is comfortable with Python and code-based scenarios.
  5. Design 2–3 key user journeys
    • Define flows for anonymous browsing, search, and checkout or form submission.
    • Assign rough probabilities/weights to each.
  6. Run a smoke test first
    • Start with 5–10 concurrent users to ensure your script and environment work.
    • Fix any errors, broken logins or missing tokens.
  7. Run the main load test
    • Gradually ramp up to your target concurrency over several minutes.
    • Maintain peak load for at least 10–20 minutes while watching metrics.
  8. Optionally run a short stress test
    • Push beyond expected peak by 20–50% to see how the system fails and recovers.
  9. Analyse and act
    • Correlate tool metrics (RPS, latency, errors) with server data.
    • Implement tuning changes or plan a hosting upgrade where necessary.
  10. Repeat on a smaller scale after changes
    • Verify that your adjustments actually improved headroom and stability.

Conclusion: Make Load Testing Part of Your Hosting Routine

Load testing with tools like k6, JMeter and Locust is not reserved for giant tech companies; it is a practical discipline that fits perfectly into the lifecycle of any serious website, e‑commerce store or SaaS project. By designing realistic scenarios, preparing your monitoring, and running structured tests ahead of major campaigns, you dramatically reduce the risk of painful slowdowns or outages at the worst possible time.

At dchost.com, we see the same pattern again and again: teams that treat performance as an ongoing process—testing new features, verifying scaling decisions, and validating changes—enjoy calmer launches and more predictable hosting costs. Combine regular load testing with the best practices from our guides on benchmarking a new VPS and setting up monitoring and alerts, and you will have a hosting stack that is both faster and easier to operate.

If you are planning a traffic spike, campaign or new product launch and want to make sure your infrastructure is ready, you can start by load testing your current dchost.com plan following the blueprint above. If the tests show you need more headroom, our team can help you move to the right VPS, dedicated server or colocation setup without drama—and with the confidence that your next big traffic spike will just look like another normal day.

Frequently Asked Questions

How many concurrent users or requests per second should I test with?

Start from your realistic expectations for the event you are preparing for. Estimate daily visitors, identify the busiest hour and convert that into requests per second and concurrent users. It is usually better to test slightly above your forecast (for example, 20–30% higher) to build some safety margin. Our guide on estimating traffic and bandwidth needs on shared hosting and VPS explains a simple way to turn business forecasts into technical numbers. If you already have historical analytics, use the last big peak as a baseline and add growth on top of it.

Is it safe to run load tests against my production environment?

It can be safe, but only under strict conditions. For heavy tests, we strongly recommend using a staging or pre‑production environment that mirrors production in terms of hosting type, software versions and architecture. If you must test on production, run during off‑peak hours, keep concurrency conservative, coordinate with stakeholders and monitor closely so you can abort if something goes wrong. Also make sure your load generators run from separate servers, so you do not consume the same CPU and RAM that your real users rely on during the test.

Which tool should I choose: k6, JMeter or Locust?

Choose based on your team and use case. k6 is great if your developers are comfortable with JavaScript and you want tests that fit nicely into CI/CD pipelines, especially for HTTP APIs and microservices. JMeter is a good fit when you need a GUI, want to support multiple protocols or have QA engineers who prefer a visual test plan editor. Locust is ideal for Python teams who like code‑centric test definitions and want to express user behaviour in a clean, programmable way. All three can generate significant load from a VPS or dedicated server when configured correctly.

Should I upgrade my hosting or optimise my application first?

Look at both the load test metrics and your server monitoring. If CPU, RAM or disk I/O are clearly saturated while your application is relatively efficient, upgrading your VPS or moving to a more powerful plan is reasonable. However, if you see slow database queries, N+1 problems, missing indices or heavy plugins, optimisation can often unlock a lot of capacity without changing plans. In practice, teams usually do both: fix the obvious code and query issues, then size their hosting one step above the minimum required to handle peak traffic comfortably.

How often should I run load tests?

A good baseline is to run load tests before any major campaign or seasonal spike, after large feature releases that affect critical flows (checkout, signup, search), and whenever you make significant infrastructure changes such as upgrading PHP, switching database engines or moving to a new VPS or dedicated server. Many teams settle on a quarterly or biannual schedule for full tests, with smaller smoke or regression tests integrated into CI/CD. The key is consistency: treat load testing as part of your normal hosting and deployment routine, not just an emergency task before big launches.