Performance · 11 min read

Core Web Vitals in 2026: what still matters, what changed

The 2026 thresholds for LCP, INP (which replaced FID in 2024), and CLS, plus real-device testing and how FPWS measures performance on real client traffic.

Article · Updated Mar 19, 2026 · 11 min read

TL;DR

Core Web Vitals still matter in 2026 as both a ranking signal and a conversion lever. LCP target is 2.5 seconds, INP target is 200 milliseconds (INP replaced FID in March 2024), CLS target is 0.1. The right measurement source is field data from real users via the web-vitals package, not Lighthouse lab runs. LCP at p75 on tracked FPWS client builds is 1.4 seconds on mobile.

01

Yes, Core Web Vitals still matter

Core Web Vitals in 2026 are both a confirmed Google ranking signal as part of page experience and a measurable conversion lever. The three current metrics are Largest Contentful Paint at 2.5 seconds or under, Interaction to Next Paint at 200 milliseconds or under (replaced First Input Delay in March 2024), and Cumulative Layout Shift at 0.1 or under. Pass all three at the seventy-fifth percentile of real-user traffic and the page is rated 'good.'

There has been a steady drumbeat of 'CWV does not matter anymore' takes since the rollout. They are wrong. The signal is small relative to content quality and link authority on most queries, but it is real, and it is the single most measurable lever a developer controls.

The bigger reason CWV matters is conversion. On the FPWS client base in 2026, every one-second improvement in LCP at the seventy-fifth percentile correlates with a measurable bump in form submission rate. Faster sites convert better. That has been true since the early 2000s and it is still true.

Core Web Vitals are also the cleanest external read on whether the engineering team is paying attention. A site that fails CWV on mobile in 2026 is a site that has not been instrumented or not been maintained. That is a tell.

02

INP, the metric that replaced FID

First Input Delay was retired in March 2024 and replaced by Interaction to Next Paint. The change is significant if you have not caught up. FID measured only the delay from the first user input to the start of the browser's response. INP measures the full latency from a user interaction to the next paint, and reports roughly the worst interaction observed across the whole session.

INP is harder to pass because it captures slow click handlers, slow form interactions, slow scroll responses, and slow modal opens, not just the first input. Sites that passed FID at 100 milliseconds often fail INP at 200 milliseconds because their interaction handlers are bloated.

The fix is usually breaking up long tasks, deferring non-critical JavaScript, and using requestIdleCallback or scheduler.yield where supported. React 19's transition APIs help. Server Components in Next.js 16 help even more by removing entire categories of client-side work.
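As a concrete illustration, here is a minimal TypeScript sketch of yielding inside a long handler: scheduler.yield where the browser supports it, a zero-delay timeout elsewhere. The Row type, processRow, and the 50-item chunk size are illustrative, not from any FPWS codebase.

type Row = { id: string };

function processRow(row: Row): void {
  // synchronous per-item work stands in here
}

// Return control to the browser so it can handle input and paint.
function yieldToMain(): Promise<void> {
  const sched = (globalThis as { scheduler?: { yield?: () => Promise<void> } }).scheduler;
  if (sched && sched.yield) return sched.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function handleSave(rows: Row[]): Promise<void> {
  for (let i = 0; i < rows.length; i++) {
    processRow(rows[i]);
    // Yield every 50 items so one long task becomes many short ones.
    if (i % 50 === 49) await yieldToMain();
  }
}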

03

Lab data lies, field data tells the truth

Lighthouse runs are lab data: a synthetic device, a synthetic network, a single page load. Real users are on real devices on real networks doing real things. The Core Web Vitals score that ranks your site is field data, not lab data. Use the web-vitals JavaScript package to send LCP, INP, and CLS from real user sessions to your analytics endpoint, and report on the seventy-fifth percentile of those values, not the median or the lab number.
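A minimal sketch of that instrumentation, assuming the web-vitals package and a hypothetical /api/vitals collection endpoint:

import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // 'LCP' | 'INP' | 'CLS'
    value: metric.value,   // milliseconds for LCP and INP, unitless for CLS
    id: metric.id,         // unique per page load, useful for deduping
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  });
  // sendBeacon survives tab close; fetch with keepalive is the fallback.
  if (!navigator.sendBeacon('/api/vitals', body)) {
    fetch('/api/vitals', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);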

Most teams optimize Lighthouse scores and wonder why their CrUX report still shows 'needs improvement.' Lighthouse runs on a Moto G4 emulation over throttled 4G. Your real users are on a mix of devices ranging from a six-year-old budget Android over LTE to a current iPhone over fiber. The distribution is wide. The lab number tells you almost nothing about the field number.

We instrument every FPWS client build with the web-vitals package, send the readings to Vercel Analytics or a custom endpoint depending on the stack, and report the seventy-fifth percentile of LCP, INP, and CLS by device class and connection type each month. That is the number that ranks the site and the number that correlates with conversion.

Lighthouse still has a place. It is the right tool for catching regressions on a single page in CI. It is not the right tool for measuring real-user performance.

04

How FPWS hits the targets

We write to Core Web Vitals targets from the first commit, not at the end. LCP at p75 on tracked FPWS client builds is 1.4 seconds on mobile, well under the 2.5 second threshold. The methods are not exotic.

For LCP: we identify the LCP element during design, usually a hero image or hero text, and ensure it is server-rendered and not blocked by JavaScript. Hero images use Next.js Image with priority set, AVIF or WebP, sized correctly for the viewport, and served from Vercel's image optimizer. Hero fonts are subset and self-hosted with font-display swap. The HTML response includes the LCP element in the initial document so it can paint as soon as the network delivers it.
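For illustration, a sketch of that hero-image pattern in Next.js; the asset path and dimensions are placeholders. priority opts the image out of lazy loading and preloads it, so the LCP element is requested as soon as the HTML arrives, and the explicit width and height reserve its layout space.

import Image from 'next/image';

export function Hero() {
  return (
    <Image
      src="/hero.avif"
      alt="Product hero"
      width={1200}
      height={630}
      priority
      sizes="100vw"
    />
  );
}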

For INP: we keep client-side JavaScript thin. Server Components handle anything that does not need interactivity. Interactive components are isolated, lazy-loaded where they live below the fold, and audited for long synchronous handlers. Third-party scripts are loaded with next/script's afterInteractive or lazyOnload strategy depending on what they do.
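A sketch of that loading split with next/script; the script URLs are placeholders. afterInteractive runs once the page has hydrated; lazyOnload waits for browser idle time, keeping the script off the INP-critical path.

import Script from 'next/script';

export function ThirdParty() {
  return (
    <>
      <Script src="https://example.com/analytics.js" strategy="afterInteractive" />
      <Script src="https://example.com/chat-widget.js" strategy="lazyOnload" />
    </>
  );
}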

For CLS: every image has explicit width and height, fonts use size-adjust so the fallback font's metrics match the web font, advertising slots reserve space, and dynamic content insertions use minimum-height containers so the layout cannot shift when the content arrives.
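A sketch of the reserved-space pattern for dynamic insertions; the 240-pixel minimum is illustrative and should match the expected height of the incoming content.

import type { ReactNode } from 'react';

export function LateContent({ ready, children }: { ready: boolean; children: ReactNode }) {
  // The container keeps its height whether or not the content has arrived,
  // so nothing below it moves when the content lands.
  return <div style={{ minHeight: 240 }}>{ready ? children : null}</div>;
}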

05

Real device testing, not just emulators

Chrome DevTools emulation lies. The CPU throttle is a multiplier, not a real device. Network throttling is a fake bottleneck, not real LTE jitter. Battery state, thermal throttling, and OS-level memory pressure do not exist in DevTools. Sites that look fast in emulation routinely fall over on a real mid-range Android phone.

We test every client build on a small fleet of real devices: a current iPhone, a current mid-range Android, a four-year-old budget Android, and a current iPad. We use BrowserStack for the long tail of devices we do not own physically. The tests run before launch and after every major release.

The four-year-old Android is the one that finds the bugs. If LCP is 4.5 seconds on a Pixel 4a over a real LTE connection, the site fails for the bottom third of mobile users in the United States. Lighthouse will not catch that.

06

Monitoring after launch

Performance regressions are silent. A new third-party tag, a new image without sizing, a new client component pulled into a previously static page: any of these can push a page past the CWV thresholds in a single deploy without anyone noticing for weeks.

We instrument every FPWS client site with the web-vitals package reporting to Vercel Analytics or a custom endpoint, with a monthly report comparing this month's p75 numbers to last month's. Anything sliding more than ten percent triggers an investigation. Anything dropping below the 'good' threshold triggers a fix that month.
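A sketch of that month-over-month check, assuming raw per-session readings for a single metric; all three metrics are lower-is-better, so the same comparison works for each.

// Nearest-rank 75th percentile of the raw readings.
function p75(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  if (sorted.length === 0) return NaN;
  return sorted[Math.ceil(sorted.length * 0.75) - 1];
}

// True when this month's p75 slid more than ten percent past last month's.
function slidMoreThanTenPercent(lastMonth: number[], thisMonth: number[]): boolean {
  return p75(thisMonth) > p75(lastMonth) * 1.1;
}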

The Search Console Core Web Vitals report is also worth checking weekly. It lags RUM, aggregating twenty-eight days of data, but it is the same data Google uses for ranking, so it is the source of truth for what Google sees.

Questions

  • What is INP? Interaction to Next Paint replaced First Input Delay as a Core Web Vital in March 2024. INP measures the latency from a user interaction to the next paint and reports roughly the worst interaction observed across the whole session. The good threshold is 200 milliseconds or under at the seventy-fifth percentile of real-user data.

Want this work done for you?

Let's talk.