Technology

How to Properly Test Your Website Speed

Why Proper Speed Testing Matters More Than a Single “Score”

When teams discuss a slow website in a planning meeting, the first instinct is often to paste a PageSpeed Insights score into the chat and panic about the number. But a single score rarely explains why the site feels slow, which users are affected, or what to fix first. Proper website speed testing is about building a repeatable process that connects browser-side metrics (what tools like GTmetrix, PageSpeed Insights and WebPageTest show) with server-side metrics (what your hosting, VPS or dedicated server is actually doing).

In this guide, we will walk through how to use GTmetrix, PageSpeed Insights and WebPageTest the right way, how to avoid common testing mistakes, and how to combine those results with hosting metrics like CPU, RAM, disk I/O and TTFB. As the team behind dchost.com, we will also share how we read these tools in real projects and map them to practical changes on shared hosting, VPS, dedicated servers and colocation setups. By the end, you will have a clear, step‑by‑step way to test your website speed and turn raw numbers into concrete actions.

What Website Speed Tests Actually Measure

Before diving into tools, it helps to understand what is being measured. Modern speed tests are not just about “page load time” anymore; they break the experience into several milestones that better reflect what real users feel.

  • TTFB (Time to First Byte): How long the browser waits before the server sends the first byte of HTML. This is heavily influenced by your hosting, PHP, database and caching layers.
  • LCP (Largest Contentful Paint): When the largest visible element (often a hero image or headline) becomes visible. This is a key Core Web Vitals metric.
  • CLS (Cumulative Layout Shift): How much the layout jumps around as resources load. This affects how stable your page feels.
  • INP/TBT (Interaction to Next Paint / Total Blocking Time): How responsive the page is to user input while scripts are running; INP is measured from real user interactions in the field, while TBT is its closest lab proxy.
  • Fully Loaded / Onload Time: When the main page finishes loading and background requests settle down.

Tools like GTmetrix and WebPageTest focus on detailed waterfalls and filmstrips of how your page loads. PageSpeed Insights adds an extra layer: field data from real Chrome users, plus a synthetic “lab test” run on a standard test device.

On the hosting side, we have a different but related set of numbers: CPU usage, RAM usage, disk I/O, network latency, cache hit ratio, database query times and more. Our article on how server choices impact Core Web Vitals like TTFB, LCP and CLS dives deep into those relationships. A proper speed test connects these two worlds: browser metrics and server metrics.
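
If you want a quick feel for the server‑side part of that picture before opening any tool, a few lines of Python are enough. This is a minimal sketch, assuming the requests library is installed and using its elapsed attribute as a rough TTFB proxy (the URL is a placeholder); it measures from a script rather than a real browser, so treat it as a sanity check, not a replacement for the tools below.

    # Rough TTFB sanity check. Assumes `requests` is installed (pip install requests).
    # `elapsed` covers the time from sending the request until the response headers
    # arrive, which roughly approximates TTFB; the second number adds the body download.
    import time
    import requests

    url = "https://www.example.com/"  # placeholder: use one of your own pages

    start = time.perf_counter()
    response = requests.get(url)
    total = time.perf_counter() - start

    print(f"Approx. TTFB:  {response.elapsed.total_seconds() * 1000:.0f} ms")
    print(f"Full download: {total * 1000:.0f} ms for {len(response.content)} bytes")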

Setting Up a Clean, Repeatable Speed Test

The biggest reason people get confused by speed tools is inconsistent testing. They change several variables at once and then cannot tell which tweak actually helped. Before worrying about scores, establish a clean testing procedure.

1. Choose the Right URLs

Do not just test the homepage once and call it a day. Pick a small set of URLs that represent real user journeys:

  • Homepage
  • Key landing page (for ads or SEO)
  • Product or article page template
  • For e‑commerce: cart and checkout pages (if they are public or can be scripted)

Keep this set stable over time so you can compare before/after results when you change themes, plugins, hosting plans or CDN settings.
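
One low‑tech way to keep the set stable is to write it down once and reuse it for every round of testing. A minimal sketch (all URLs are placeholders) that stores the list in a small JSON file so every teammate and every later test measures exactly the same pages:

    # Keep the URL set in a small JSON file so every test round covers the same pages.
    # All URLs are placeholders; put the file under version control alongside your notes.
    import json
    from pathlib import Path

    TEST_URLS = [
        "https://www.example.com/",                 # homepage
        "https://www.example.com/landing/",         # key landing page for ads/SEO
        "https://www.example.com/product/sample/",  # product or article page template
    ]

    Path("speed-test-urls.json").write_text(json.dumps(TEST_URLS, indent=2))
    print(json.loads(Path("speed-test-urls.json").read_text()))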

2. Test Logged Out, as an Anonymous User

Always test in logged‑out mode, without admin bars or debug toolbars. These add requests and can change caching behavior. Use public URLs and avoid URLs with personal query parameters (like preview links in a CMS).

3. Control Test Location and Device

Speed is relative to where your users are. When configuring GTmetrix or WebPageTest, select a test location close to your primary audience (for example, same continent or region). Then, pay attention to device type:

  • Desktop tests are useful to see raw backend performance and caching behavior.
  • Mobile tests (especially with throttled connections) are more realistic for SEO and user experience.

PageSpeed Insights runs its lab test with a standardized, throttled mobile profile by default (a desktop report is also available), which is why its mobile scores often look lower than results from desktop‑only tools.

4. Use Multiple Runs and Average the Results

Web performance is noisy. DNS resolution, network routing and shared hosting neighbors can all introduce variability. Instead of trusting a single run:

  • Run at least 3 tests per URL.
  • Look at the median or average for key metrics like TTFB and LCP.
  • Ignore occasional outliers unless they happen frequently.
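
If you want to automate the repeat runs for the raw TTFB part, the sketch below (again assuming the requests library and a placeholder URL) fires several requests and reports the median, which is usually more stable than the mean:

    # Run the same request several times and report the median TTFB.
    # Assumes `requests` is installed; the URL is a placeholder.
    import statistics
    import requests

    url = "https://www.example.com/"
    runs = 5

    ttfb_ms = []
    for _ in range(runs):
        response = requests.get(url)
        ttfb_ms.append(response.elapsed.total_seconds() * 1000)

    print("Runs (ms):", [round(t) for t in ttfb_ms])
    print(f"Median: {statistics.median(ttfb_ms):.0f} ms")
    print(f"Mean:   {statistics.mean(ttfb_ms):.0f} ms")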

5. Test with and Without Caching/CDN When Needed

If you are using page caching, object caching or a CDN, it can be useful to test both:

  • First‑view (cold cache): Simulates the first visitor after a cache clear. This stresses your PHP and database.
  • Repeat‑view (warm cache): Simulates most real visitors when caching is working properly.

WebPageTest explicitly lets you run first‑view and repeat‑view tests. GTmetrix and PageSpeed Insights can be used repeatedly to see the effect of caching as your CDN warms up. Our guide on what a CDN is and when you really need one explains how CDN caching changes these test results.
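
A quick way to confirm whether a CDN or page cache is actually serving repeat requests is to fire two back‑to‑back requests and compare cache‑related response headers. This checks server/CDN‑side caching rather than the browser cache, and the header names vary by provider (X‑Cache, CF‑Cache‑Status, Age and so on), so adjust the list below for your own stack:

    # Fire two back-to-back requests and print common cache-related headers.
    # Which headers appear depends on your CDN or caching layer, so treat the
    # list below as examples, not a fixed standard.
    import requests

    url = "https://www.example.com/"  # placeholder
    interesting = ["x-cache", "cf-cache-status", "age", "cache-control"]

    for attempt in ("request 1 (likely cold)", "request 2 (likely warm)"):
        response = requests.get(url)
        found = {h: response.headers[h] for h in interesting if h in response.headers}
        print(attempt, response.status_code, found)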

GTmetrix: Reading Waterfalls and Grades the Right Way

GTmetrix is one of the most practical tools for developers and site owners because it combines a modern metric set (Core Web Vitals) with an easy‑to‑read waterfall of every request.

Running a GTmetrix Test

  1. Open GTmetrix and paste your URL.
  2. Click the settings icon (if available) to choose test location, device (desktop/mobile) and connection speed.
  3. Start the test and wait for the report.

The report typically contains:

  • A performance grade and structure grade.
  • Key metrics like LCP, TBT, CLS, TTFB and fully loaded time.
  • Recommendations grouped as “Top Issues”.
  • The Waterfall tab, which is the most valuable part for diagnostics.

Understanding the GTmetrix Waterfall

The waterfall shows every request in the order the browser makes them. Each bar is split into colored segments: DNS lookup, connection, SSL negotiation, waiting (TTFB), content download and so on. Some quick patterns to look for:

  • A long “waiting” segment on the HTML document request usually indicates slow server processing. This is where hosting metrics, PHP performance and database queries matter most.
  • Many small JS/CSS files add extra requests and round trips, slowing down both those individual assets and the overall load.
  • Large image files dominate the transfer size and slow down LCP.
  • Third‑party scripts (analytics, chat, ads, fonts) can show up as slow domains outside your control.

If you consistently see a long TTFB, it is worth pairing your GTmetrix report with our article about fixing high TTFB on WordPress and PHP sites from the hosting side.
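
If you want to reproduce those waterfall segments for the HTML document from your own machine, libcurl exposes the same phases. A minimal sketch using the pycurl package (assuming it is installed; the URL is a placeholder) that prints DNS, connect, TLS, first‑byte and total times:

    # Break a single request into the phases a waterfall shows.
    # Assumes `pycurl` is installed (pip install pycurl). Note that libcurl
    # reports cumulative times from the start of the request, so each value
    # includes the phases before it.
    from io import BytesIO
    import pycurl

    url = "https://www.example.com/"

    body = BytesIO()
    c = pycurl.Curl()
    c.setopt(c.URL, url)
    c.setopt(c.WRITEDATA, body)
    c.setopt(c.FOLLOWLOCATION, True)
    c.perform()

    print(f"DNS lookup:        {c.getinfo(c.NAMELOOKUP_TIME) * 1000:.0f} ms")
    print(f"TCP connect:       {c.getinfo(c.CONNECT_TIME) * 1000:.0f} ms")
    print(f"TLS handshake:     {c.getinfo(c.APPCONNECT_TIME) * 1000:.0f} ms")
    print(f"First byte (TTFB): {c.getinfo(c.STARTTRANSFER_TIME) * 1000:.0f} ms")
    print(f"Total:             {c.getinfo(c.TOTAL_TIME) * 1000:.0f} ms")

    c.close()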

GTmetrix Scores vs. Real Priorities

GTmetrix grades (A/B/C) are helpful but should not be the only decision driver. In real projects, we prioritize:

  • Getting LCP consistently under 2.5 seconds for key pages.
  • Keeping CLS near 0 to prevent jumpy layouts.
  • Reducing TTFB below ~600 ms for most users; lower is better.
  • Eliminating obviously oversized images and render‑blocking scripts.

Sometimes you can have a “B” grade but a very fast user experience because the issues it flags are minor. The waterfall and key metrics tell the real story.
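
To keep those priorities honest over time, it can help to encode them as a small performance budget check rather than re‑reading reports by eye. A minimal sketch, with thresholds taken from the list above (0.1 is the commonly used “good” cut‑off for CLS) and placeholder measurements:

    # Compare measured values against a simple performance budget.
    # The measured numbers below are placeholders; feed in values from your reports.
    BUDGET = {
        "lcp_s": 2.5,    # Largest Contentful Paint, seconds
        "cls": 0.1,      # Cumulative Layout Shift ("near 0"; 0.1 is the usual cut-off)
        "ttfb_ms": 600,  # Time to First Byte, milliseconds
    }

    def check_budget(measured: dict) -> None:
        for metric, limit in BUDGET.items():
            value = measured.get(metric)
            if value is None:
                continue
            status = "OK  " if value <= limit else "OVER"
            print(f"{status} {metric}: {value} (budget {limit})")

    check_budget({"lcp_s": 2.1, "cls": 0.02, "ttfb_ms": 740})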

PageSpeed Insights: Core Web Vitals and Field Data Explained

PageSpeed Insights (PSI) is commonly used because it combines lab tests with anonymized field data from real Chrome users. This is particularly important for SEO because Core Web Vitals are part of search ranking systems.

Lab Data vs. Field Data

When you run a PSI report, it shows two main blocks for your URL:

  • Field data (Core Web Vitals assessment): Collected over the last 28 days from real users, grouped by mobile and desktop. Metrics include LCP, INP (which has replaced FID) and CLS.
  • Lab data (Lighthouse test): A synthetic test run on a standardized hardware and network profile, usually mobile‑first.

Field data answers: “What are actual users experiencing over time?” Lab data answers: “What does this page do right now under controlled conditions?” Both are valuable; big gaps between them can reveal issues like heavy ads for some users, high latency in specific countries, or problems that only affect logged‑in users.
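
If you run PSI regularly, the same lab and field data is available programmatically through Google's public PageSpeed Insights API (v5). The sketch below pulls LCP from both blocks; the endpoint and parameters are real, but the exact response field names should be double‑checked against Google's current documentation, and an API key is recommended for anything beyond occasional use:

    # Query the PageSpeed Insights v5 API and print lab vs. field LCP.
    # Assumes `requests` is installed. Verify response field names against
    # Google's current PSI API docs; add a `key` parameter for regular use.
    import requests

    PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
    params = {"url": "https://www.example.com/", "strategy": "mobile"}

    data = requests.get(PSI_ENDPOINT, params=params, timeout=120).json()

    # Lab data (Lighthouse run)
    lab_lcp_ms = data["lighthouseResult"]["audits"]["largest-contentful-paint"]["numericValue"]
    print(f"Lab LCP:   {lab_lcp_ms / 1000:.2f} s")

    # Field data (real Chrome users, last 28 days) - may be missing for low-traffic URLs
    field_lcp = data.get("loadingExperience", {}).get("metrics", {}).get("LARGEST_CONTENTFUL_PAINT_MS")
    if field_lcp:
        print(f"Field LCP: {field_lcp['percentile'] / 1000:.2f} s ({field_lcp['category']})")
    else:
        print("No field LCP data for this URL (not enough real-user traffic).")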

Reading PSI Opportunities and Diagnostics

Below the score, PSI lists “Opportunities” (with estimated savings) and “Diagnostics”. Some common ones:

  • Reduce unused JavaScript/CSS: Too many scripts or large frameworks for a simple page.
  • Serve images in next‑gen formats: Suggests WebP/AVIF instead of JPEG/PNG.
  • Reduce initial server response time: Directly tied to TTFB and your hosting stack.
  • Eliminate render‑blocking resources: CSS and JS that delay first paint and LCP.

Not every recommendation needs to be fixed immediately. As a hosting provider, we usually prioritize server response time, caching, and major image issues first, then refine JavaScript and CSS loading.

For a deeper hosting‑side view of these metrics, see our article on hosting infrastructure and Core Web Vitals, where we map TTFB, LCP and CLS directly to server settings.

WebPageTest: Deep‑Dive Performance Analysis

WebPageTest is the tool we use when we want a performance engineer‑level view of a site. It offers advanced features such as scripted journeys, video filmstrips, and fine‑grained control over devices, locations and connection profiles.

Basic WebPageTest Usage

  1. Go to WebPageTest and paste your URL.
  2. Choose a location and browser/device (for example, mobile Chrome).
  3. Select the number of test runs (we typically start with 3).
  4. Enable First‑view and Repeat‑view if you want to analyze caching behavior.
  5. Start the test and wait for the summary.

You will get a rich set of outputs: waterfall charts, visual progress, filmstrip comparisons, and grade cards for key metrics like TTFB, LCP and “Speed Index”.
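
If you repeat the same comparisons often, WebPageTest can also be driven over HTTP. The sketch below follows the long‑standing runtest.php interface; an API key is required, the location string is only an example, and parameter names may have changed since this was written, so confirm everything against the current WebPageTest API documentation before relying on it:

    # Start a WebPageTest run via the classic runtest.php API (requires an API key).
    # Endpoint and parameter names follow the long-standing public API; confirm
    # against the current documentation before using this in automation.
    import requests

    params = {
        "url": "https://www.example.com/",            # placeholder
        "k": "YOUR_API_KEY",                          # placeholder API key
        "location": "ec2-eu-central-1:Chrome.Cable",  # example location:browser.profile
        "runs": 3,
        "fvonly": 0,    # 0 = first view and repeat view
        "f": "json",
    }

    data = requests.get("https://www.webpagetest.org/runtest.php", params=params, timeout=60).json()
    print("Test ID:", data["data"]["testId"])
    print("Results:", data["data"]["jsonUrl"])  # poll this URL until the test completes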

When WebPageTest Shines

We reach for WebPageTest in scenarios such as:

  • Comparing with/without CDN: Same URL, same server, but different DNS and CDN setups.
  • Testing multi‑step flows: For example, loading the cart, then the checkout, to see how cached each step is.
  • Analyzing visual progress: The filmstrip view shows when above‑the‑fold content appears versus when the page is fully ready.
  • Evaluating different hosting plans or regions: By changing test locations and DNS/servers, you can see measurable differences in latency and TTFB.

WebPageTest is also useful for advanced debugging. If a specific third‑party script is slowing down your page, its request waterfall and domain breakdown make it very clear.

Connecting Tool Results to Hosting Metrics

Synthetic tools show what the browser experiences. To fix the underlying problem, you must also look at what your hosting environment is doing at the same time. At dchost.com, we often analyze GTmetrix/PSI/WebPageTest side by side with server metrics from shared hosting, VPS or dedicated servers.

Key Hosting Metrics to Watch

  • CPU usage: High CPU usage during traffic peaks can slow PHP execution and database queries, increasing TTFB and LCP.
  • RAM usage: Insufficient RAM leads to swapping and slow disk access; caches also become less effective.
  • Disk I/O and IOPS: Slow disks or saturated I/O cause long query times and slow file reads, visible as long “waiting” segments in waterfalls.
  • Network latency and bandwidth: High latency between server and users affects connection time; limited bandwidth slows down large asset delivery.
  • PHP worker / process limits: If all PHP workers are busy, new requests must wait, increasing TTFB under load.
  • Database query performance: Slow or unindexed queries directly increase TTFB and LCP for dynamic pages.
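
If you have shell access to your VPS or dedicated server, a small script can snapshot several of these numbers at the moment you run a browser test, so the two sides can be compared later. A minimal sketch assuming the psutil package is installed; it is only a point‑in‑time view, and proper monitoring graphs remain the better long‑term answer:

    # Point-in-time snapshot of server metrics to note next to a speed test.
    # Assumes `psutil` is installed (pip install psutil); run it on the server itself.
    import datetime
    import psutil

    now = datetime.datetime.now().isoformat(timespec="seconds")
    cpu = psutil.cpu_percent(interval=1)      # percent over a 1-second sample
    mem = psutil.virtual_memory()
    swap = psutil.swap_memory()
    disk = psutil.disk_io_counters()
    load1, load5, load15 = psutil.getloadavg()

    print(f"[{now}] CPU: {cpu:.0f}%  load avg: {load1:.2f} / {load5:.2f} / {load15:.2f}")
    print(f"RAM: {mem.percent:.0f}% used, swap: {swap.percent:.0f}% used")
    print(f"Disk I/O since boot: {disk.read_bytes} bytes read, {disk.write_bytes} bytes written")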

Mapping Browser Metrics to Server Metrics

Here is how we usually connect specific synthetic metrics to hosting‑side data:

  • High TTFB in GTmetrix/PSI → Check CPU, PHP process limits, database slow query logs, and caching configuration.
  • Good TTFB but slow LCP → Likely front‑end issues: large images, fonts, CSS/JS; server is fine, focus on assets and CDN.
  • Speed is good at low traffic, bad during campaigns → Check CPU/RAM saturation, connection counts, and disk I/O during the campaign window; may require a higher‑tier plan or VPS scaling.
  • Fast in one country, slow in another → Latency and routing; consider a CDN or region‑appropriate server location.

Our article on hosting scaling for traffic spikes and big campaigns covers how to plan capacity so that your speed does not collapse exactly when you need it most.

When You Might Need to Upgrade Hosting

If speed tests consistently show high TTFB even after optimizing code and caching, it may be a capacity issue. Typical upgrade signals include:

  • CPU usage is near 100% during normal peaks.
  • RAM usage is consistently high and swap is being used.
  • Disk I/O graphs spike during busy times.
  • Database queries are slow despite indexing and query optimization.

In these cases, moving from shared hosting to a VPS or from a small VPS to a larger VPS or dedicated server at dchost.com often makes a visible difference in TTFB and LCP. Our team can help you interpret your current metrics and choose the right size and type of plan based on real data, not guesses.

Practical Testing Routines That Actually Work

Instead of testing randomly whenever someone feels the site is slow, it is better to establish a routine. This keeps performance under control and gives you baselines to compare against after changes.

1. Baseline Tests After Launch or Migration

Right after launching a new site or migrating to a new hosting plan, run a full test set:

  • GTmetrix: desktop and mobile for 2–3 key URLs.
  • PageSpeed Insights: at least once per key URL.
  • WebPageTest: one or two URLs, with first‑view and repeat‑view, from your main user region.

Store these results in a document or performance log. Our hosting‑side SEO and performance checklist for new website launches is a good companion here.
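
The performance log itself can be as simple as a CSV file that every baseline and follow‑up check appends to. A minimal sketch; the column names and sample values are placeholders, so record whichever metrics you actually track from GTmetrix, PageSpeed Insights or WebPageTest:

    # Append one row per test to a simple CSV performance log.
    # Columns and the sample values are placeholders; adapt them to the
    # metrics, tools and locations you actually use.
    import csv
    import datetime
    from pathlib import Path

    LOG_FILE = Path("performance-log.csv")
    FIELDS = ["date", "url", "tool", "location", "ttfb_ms", "lcp_s", "cls"]

    row = {
        "date": datetime.date.today().isoformat(),
        "url": "https://www.example.com/",
        "tool": "GTmetrix",
        "location": "Frankfurt",
        "ttfb_ms": 420,
        "lcp_s": 2.1,
        "cls": 0.03,
    }

    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

    print(f"Logged {row['url']} for {row['date']} to {LOG_FILE}")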

2. Monthly Health Checks

Once a month (or quarter, for smaller sites), repeat the tests for the same URLs and locations. Compare:

  • TTFB and LCP trends over time.
  • Changes in field data on PageSpeed Insights (Core Web Vitals status).
  • Any new slow third‑party scripts in waterfalls.

If you see gradual deterioration, investigate plugin/theme changes, new tracking scripts or increased data sizes in your CMS.

3. Pre‑Campaign and Post‑Campaign Testing

Before a big marketing campaign or sale, run a more intense round of tests and double‑check hosting capacity. It is common for teams to prepare creatives and ad budgets but forget to validate that the site can handle traffic. Use:

  • GTmetrix and WebPageTest to confirm caching and TTFB are healthy.
  • Hosting metrics to ensure CPU, RAM and I/O have enough headroom.

After the campaign, review speed tests and server graphs to see where the bottlenecks appeared. This learning helps you choose the right scaling strategy next time. Again, our hosting scaling checklist provides a structured way to do this.

4. After Major Code or Design Changes

Any significant change to your theme, plugins, front‑end framework or checkout flow deserves a fresh set of tests. Compare waterfalls, TTFB, LCP and CLS before and after the change. If the design is visually nicer but results in worse Core Web Vitals, you can decide whether the trade‑off is worth it or if more optimization is needed.

Real‑World Scenarios: How We Use These Tools Together

To make this more concrete, here are a few example scenarios based on patterns we often see with dchost.com customers.

Scenario 1: WooCommerce Store Before a Seasonal Sale

A store owner is preparing for a big seasonal sale. Traffic is expected to double. We usually recommend:

  • GTmetrix and WebPageTest on the homepage, category pages and checkout.
  • PageSpeed Insights to verify Core Web Vitals are “Good” on mobile.
  • Server metrics during a small load test or current traffic peaks to see CPU/RAM headroom.

If TTFB is already marginal and resource usage is high, we might suggest upgrading to a VPS or tuning PHP, MySQL and caching. Combining these browser‑side and server‑side views often avoids slowdowns exactly when campaigns go live.

Scenario 2: Content‑Heavy Blog with Global Readers

A news or blog site with many images and articles has readers across several continents. Speed tests from a single region look fine, but readers abroad complain about slowness. In this case, we:

  • Run WebPageTest from multiple locations to see TTFB and download times across regions.
  • Check if a CDN is in use and how well it is caching images and static assets.
  • Use GTmetrix waterfalls to confirm that images are optimized and not oversized.

If latency dominates, a CDN becomes a high‑impact improvement. Our article on deciding when you really need a CDN based on traffic and location is a useful reference in this scenario.

Scenario 3: SaaS Dashboard with Heavy JavaScript

A SaaS app has a complex dashboard with heavy JavaScript and API calls. The backend runs on a VPS with plenty of CPU and RAM, but users complain about sluggishness on older laptops.

Here, speed tests reveal:

  • Good TTFB but high Total Blocking Time (TBT) and poor INP.
  • Large bundle sizes and long script parsing/execution times.
  • Minimal benefit from further server upgrades because the bottleneck is front‑end code.

In this case, the right response is to split bundles, reduce blocking scripts and optimize front‑end logic, not necessarily to change hosting. Testing tells you where to invest effort.

How We Support Speed Optimization at dchost.com

As a hosting provider, we see both sides of the performance story every day: what speed tools show and what servers are actually doing. When customers share GTmetrix, PageSpeed Insights or WebPageTest reports with us, our usual process is:

  1. Identify whether the main bottleneck is server‑side (TTFB, CPU, disk I/O) or front‑end (images, JavaScript, CSS, third‑party scripts).
  2. Correlate test timestamps with server metrics from the shared hosting account, VPS or dedicated server.
  3. Check key PHP and database settings (for example, memory limits, process pools) that can affect TTFB and throughput.
  4. Recommend a mix of configuration tweaks, caching strategies and, if necessary, plan or architecture changes.

If you are planning a new site or a migration, our articles on server‑side optimizations for WordPress and on launch‑time hosting performance checks are useful starting points. From there, we can help you pick the right combination of shared hosting, VPS, dedicated server or colocation at dchost.com based on your budget and performance goals.

Pulling It All Together: A Calm, Data‑Driven Speed Strategy

Properly testing website speed is not about chasing a single perfect score. It is about creating a reliable process that combines synthetic tests (GTmetrix, PageSpeed Insights, WebPageTest) with the reality of your hosting environment (CPU, RAM, I/O, TTFB and network latency). When you test consistently, from the right locations and devices, and know how to read waterfalls and Core Web Vitals, performance stops being a mystery and becomes a manageable engineering problem.

From the dchost.com side, our role is to ensure that the server layer is not your bottleneck. We help you interpret test results, adjust PHP and database settings, tune caching, and, when necessary, upgrade or redesign your hosting architecture so that your site stays fast under real traffic. If you would like a second pair of eyes on your current GTmetrix or PageSpeed Insights reports, or you are wondering whether a move to VPS, dedicated server or colocation would actually improve your LCP and TTFB, you can reach out to our team. Together we can turn raw speed metrics into a practical plan for faster, more reliable websites.

Frequently Asked Questions

How often should I test my website speed?

For most sites, monthly speed tests on key pages are a good baseline, plus extra tests after major changes. Run a full set of checks whenever you change themes, add heavy plugins, switch CDNs, or migrate hosting. Before big marketing campaigns or seasonal sales, test more aggressively: GTmetrix and WebPageTest on landing, product and checkout pages, combined with hosting metrics to verify CPU, RAM and disk I/O headroom. The goal is to catch regressions early and ensure your site will stay fast under the specific traffic patterns you expect.

Which tool is best: GTmetrix, PageSpeed Insights or WebPageTest?

Each tool has a different strength, so the best approach is to use them together. GTmetrix is excellent for everyday debugging and quick waterfall analysis. PageSpeed Insights is critical for understanding Core Web Vitals and what real users experience over time, especially on mobile. WebPageTest is ideal for deep‑dive investigations, multi‑location testing and comparing with/without CDN or different architectures. At dchost.com we typically start with GTmetrix and PageSpeed Insights, and bring in WebPageTest when we need more advanced diagnostics.

Is a low PageSpeed Insights score always a problem?

A low PageSpeed Insights score does not always mean users have a bad experience. PSI runs a synthetic test on a standardized mobile profile and can be stricter than real‑world conditions. Focus on the Core Web Vitals assessment (field data): if LCP, INP/FID and CLS are in the “Good” range for most users, that matters more than a single lab score. Still, PSI’s “Opportunities” can highlight optimizations worth doing, such as better image formats or less JavaScript. If you are unsure, share your reports with our team so we can correlate them with hosting‑side metrics like TTFB, CPU and disk I/O.

How do hosting resources like CPU, RAM and disk I/O affect speed test results?

CPU and RAM directly influence how quickly your server can generate responses. If CPU is saturated or PHP processes are limited, TTFB will increase, which you will see in GTmetrix, PageSpeed Insights and WebPageTest. Insufficient RAM can cause swapping and reduce the effectiveness of caches, again raising TTFB and sometimes LCP. Disk I/O also plays a role, especially for database‑heavy sites. When speed tests show persistent high TTFB and slow server response, checking CPU, RAM, disk I/O and PHP/database configuration on your hosting, VPS or dedicated server is often the key to real improvement.

Should I test my website speed from multiple locations?

Yes. If you have visitors from different regions, testing from only one location can hide serious latency issues. Run WebPageTest or GTmetrix from at least two or three regions that match your main audiences and compare TTFB and overall load times. If the site is fast close to the server but slow elsewhere, a well‑configured CDN or regionally appropriate hosting location will usually help. Our guide on deciding when you really need a CDN explains how global traffic patterns should influence both your testing strategy and your hosting design.