Performance · 16 April 2026 · 6 min read

Core Web Vitals: What to Monitor and How to Fix Regressions

Performance regressions almost never ship intentionally. They arrive with a seemingly innocent change: a new image carousel, a third-party tag, a font swap. By the time somebody notices the drop in conversions, the regression is two weeks old and buried under 40 deploys. Continuous Core Web Vitals monitoring flips the loop — you see the regression the same day, on the specific deploy that caused it.


Google ranks sites by real-user performance. LCP, FCP, CLS, TTFB — these aren't abstract numbers, they're conversion killers when they drift. Here's how to monitor them continuously and catch regressions before they ship to users.

What Core Web Vitals actually measure

The six metrics you care about, in plain terms:

• LCP (Largest Contentful Paint) — how long until the biggest visible element finishes rendering. Good < 2.5s.
• FCP (First Contentful Paint) — how long until any pixel renders. Good < 1.8s.
• CLS (Cumulative Layout Shift) — how much the page jumps around as things load. Good < 0.1.
• TTFB (Time to First Byte) — pure server/CDN latency. Good < 0.8s.
• TBT (Total Blocking Time) — how long the main thread was blocked during load. Good < 200ms.
• Speed Index — a synthesized 'how fast did it feel' score. Good < 3.4s.
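These good/poor boundaries are easy to encode as a small lookup. A minimal sketch — the `rateMetric` helper and its shape are ours for illustration, but the numbers match Google's published boundaries:

```javascript
// Google's published "good" / "poor" boundaries for each metric.
// Anything between the two rates as "needs improvement".
const THRESHOLDS = {
  lcp:  { good: 2500, poor: 4000 },  // ms
  fcp:  { good: 1800, poor: 3000 },  // ms
  cls:  { good: 0.1,  poor: 0.25 },  // unitless score
  ttfb: { good: 800,  poor: 1800 },  // ms
  tbt:  { good: 200,  poor: 600 },   // ms
  si:   { good: 3400, poor: 5800 },  // ms
};

function rateMetric(name, value) {
  const t = THRESHOLDS[name];
  if (!t) throw new Error(`unknown metric: ${name}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}
```

Having the boundaries in one table means your dashboards, alerts, and CI gates can't drift apart.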

Google's ranking signal uses the Core Web Vitals subset of these — LCP, CLS, and responsiveness (INP) — measured from real Chrome users; TBT and Speed Index are lab metrics that stand in for the field experience. All of them surface in Chrome's tooling, including the Web Vitals extension and Lighthouse reports.

Why synthetic lab scores lie

Lighthouse CI runs your page on a simulated mid-tier device with a throttled mobile connection. It's great for tracking relative changes, but it misses a lot of real failure modes. Third-party scripts that lazy-load based on cookie consent, A/B test branches that only render for half of users, video autoplay that fails on mobile Safari — Lighthouse never sees these because it runs clean.

A full-browser Playwright check — a real Chromium instance navigating your page with whatever cookies and headers you specify — catches the failures Lighthouse misses. You see the *actual* LCP when the cookie banner loads. You see the CLS when the late-loading ad slot shifts everything down by 48 pixels. SpeedTest runs both so you get the CI-ranking baseline and the real-browser truth.
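To make the real-browser CLS concrete: Chromium emits `layout-shift` performance entries, and the score is computed with the session-window rule (shifts less than 1s apart group into a session capped at 5s; the worst session wins). A sketch of that rule — `computeCls` is our illustrative helper, not SpeedTest's actual code, and the entry shape mirrors the browser's `LayoutShift` interface:

```javascript
// Compute CLS from layout-shift entries using the session-window rule:
// group shifts into sessions (gap < 1s between shifts, session <= 5s),
// sum each session's values, and report the largest session.
// Entries look like: { value, startTime, hadRecentInput }.
function computeCls(entries) {
  let maxSession = 0;
  let session = 0;
  let firstTs = 0;
  let lastTs = 0;
  for (const e of entries) {
    if (e.hadRecentInput) continue; // shifts right after user input don't count
    if (session && e.startTime - lastTs < 1000 && e.startTime - firstTs < 5000) {
      session += e.value; // same session: accumulate
    } else {
      session = e.value;  // gap or window exceeded: start a new session
      firstTs = e.startTime;
    }
    lastTs = e.startTime;
    maxSession = Math.max(maxSession, session);
  }
  return maxSession;
}
```

That 48-pixel ad-slot shift shows up here as one more entry in the worst session — which is exactly why the lab run, where the ad never loads, reports a rosier number.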

Reading the graphs: gradual vs. step-change regressions

Two shapes of regression tell you different stories.

A gradual climb in LCP over two weeks usually means a single resource is growing — a product-image feed that keeps adding items without pagination, an analytics bundle that keeps getting features. The fix is usually in your dependencies, not your code.

A step-change at a single deploy — TTFB jumps from 400ms to 1200ms overnight — is a direct code-level cause. Correlate the timestamp with your DeployLog entries, find the commit, revert or fix.

If you see step-changes that only affect one metric (e.g., CLS jumps but LCP is flat), you've almost certainly got a layout issue in one specific component. If everything degrades together, it's infrastructure — a slower upstream, a CDN purge that's not repopulating.
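The shape call above can be automated: split the check history at each candidate point and compare the means on either side. A large jump at one split is a step-change; no qualifying split means the series is either stable or drifting gradually. A crude sketch — `findStepChange` is a hypothetical helper, not a product API:

```javascript
// Flag a step-change: does any single split point divide the series into
// "before" and "after" means that differ by more than `ratio`x?
// samples: metric values ordered by check time (oldest first).
function findStepChange(samples, ratio = 2) {
  const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  for (let i = 3; i <= samples.length - 3; i++) { // need a few samples per side
    const before = mean(samples.slice(0, i));
    const after = mean(samples.slice(i));
    if (after / before >= ratio) return { index: i, before, after };
  }
  return null; // no step: stable, or a gradual drift
}
```

On the 400ms→1200ms TTFB example this returns a split point to correlate against your DeployLog timestamps; on a slow two-week climb it returns `null`, pointing you at a growing resource instead of a single commit.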

Per-metric alert thresholds you actually want

One threshold per metric, alerting on a crossed 'poor' boundary rather than a flashy 'regressed by 10%':

• LCP > 4.0s — alert
• FCP > 3.0s — alert
• CLS > 0.25 — alert
• TTFB > 1.8s — alert
• TBT > 600ms — alert

Pair these with a 'warn' threshold at the Google 'needs improvement' line so you get a heads-up before the page is officially considered poor. Don't alert on every percent — you'll mute the channel in a week. Alert on boundary crossings and sustained regressions (4+ consecutive checks).
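The sustained-regression rule is only a few lines. A sketch assuming a newest-last array of check values — the `shouldAlert` name is ours, not a product API:

```javascript
// Page only when the metric has been past the "poor" boundary for
// `sustained` consecutive checks, so one slow run doesn't wake anyone.
// history: check values, oldest first; poor: the alert threshold.
function shouldAlert(history, poor, sustained = 4) {
  if (history.length < sustained) return false;
  return history.slice(-sustained).every((v) => v > poor);
}
```

A single 4.5s LCP outlier stays quiet; four poor checks in a row pages you. The same function with the 'needs improvement' boundary gives you the warn tier.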

What to do when CWV regresses after a deploy

The playbook when a red metric fires right after a release:

1. Open the Playwright trace for the failed run. SpeedTest stores the full network waterfall and long-task timeline.
2. Find the new or changed resource. Look for new script tags, larger image payloads, third-party embeds added today.
3. If it's a third-party (analytics, chat widget), check their status page and consider `async` or `defer`.
4. If it's your code, correlate with DeployLog to find the specific commit and roll back or hotfix.
5. Add the failure pattern to your pre-deploy checklist so the next similar regression is caught in review.
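Step 2 is mechanical enough to script: diff the failing run's waterfall against the last known-good one and surface anything new or newly bloated. A minimal sketch — `diffWaterfalls` and its record shape are hypothetical, not a SpeedTest API:

```javascript
// Diff two network waterfalls (arrays of { url, bytes }) to surface
// resources that are new, or that grew past `growthFactor`x, since
// the last known-good run.
function diffWaterfalls(baseline, current, growthFactor = 1.5) {
  const base = new Map(baseline.map((r) => [r.url, r.bytes]));
  const added = [];
  const grown = [];
  for (const r of current) {
    const prev = base.get(r.url);
    if (prev === undefined) added.push(r.url);
    else if (r.bytes > prev * growthFactor) grown.push(r.url);
  }
  return { added, grown };
}
```

Nine times out of ten the culprit is in `added` — a chat widget or tag-manager payload that shipped with today's release.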


AlertsDock Team