web.dev: Web Vitals - discussion (2023-01-17)
The web.dev Web Vitals overview is still the clearest summary of what to measure and why, but teams often get stuck translating it into actionable app work. I'm curious how people connect vitals (LCP/CLS/INP) to a route-first React app where you can render evidence about loading posture and rendering behavior.
How do you instrument vitals so you can correlate a report with a specific route posture (panel, overlay, cache lane)? Do you treat vitals regressions as product changes (layout decisions) or as engineering changes (render strategy)? What did you ship as your first "vitals fix" that actually held up in production?
Comments (18)
The only way we made vitals actionable was attaching context.
If a report doesn't include route evidence (which panel, which overlay), you can't reproduce anything.
We added a small "route context" payload to our vitals logging:
```ts
type VitalsCtx = { route: string; panel?: string; overlay?: string; cache?: string };

export function reportVital(name: string, value: number, ctx: VitalsCtx) {
  // sendBeacon survives page unload, so late-arriving vitals still get reported
  navigator.sendBeacon('/vitals', JSON.stringify({ name, value, ctx, at: Date.now() }));
}
```
Then the route shell records panel/overlay/cacheLane and we include them in ctx.
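To make that concrete, here's a minimal sketch of folding the shell's posture keys into the ctx payload. The dataset key names (panel, overlay, cacheLane) are our own, not a standard:

```ts
type VitalsCtx = { route: string; panel?: string; overlay?: string; cache?: string };

// Hypothetical helper: map the route shell's dataset (e.g. element.dataset)
// onto the vitals context. Key names are assumptions for illustration.
function ctxFromDataset(
  route: string,
  dataset: Record<string, string | undefined>
): VitalsCtx {
  return {
    route,
    panel: dataset.panel,
    overlay: dataset.overlay,
    cache: dataset.cacheLane, // data-cache-lane → dataset.cacheLane
  };
}
```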
Counterpoint: it's easy to turn vitals into a numbers game and accidentally harm UX.
We had a CLS fix that reduced shift but made the page feel slower because we added too many skeleton constraints.
Yep. The right frame for us was "vitals are a signal" not "vitals are the product".
We still use them, but we always pair a vitals change with a screenshot/video review.
First fix that held up: reserving space for images and ads (explicit dimensions).
It was boring but CLS went down instantly and stayed down.
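When dimensions come from data rather than markup, a tiny helper keeps the habit cheap. A sketch: turn intrinsic dimensions into a space-reserving style so the slot can't collapse and then shift:

```ts
// Sketch: reserve layout space from intrinsic dimensions so the slot
// holds its shape while the image or ad loads.
function reservedBox(width: number, height: number): { aspectRatio: string; width: string } {
  return { aspectRatio: `${width} / ${height}`, width: '100%' };
}
```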
INP improved when we stopped doing heavy derivations in render on keystrokes.
We stored derived structures where siblings needed consistency and kept leaf decoration local.
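The shape of that fix, sketched outside React: derive once per data change, never per keystroke. The reference check assumes the posts array is replaced immutably (our assumption about the app's state handling):

```ts
type Post = { id: string; title: string };

// Sketch: cache the expensive derived structure keyed by data identity,
// so input handlers only read the precomputed Map.
let lastPosts: Post[] | null = null;
let lastIndex: Map<string, string> | null = null;

function getSearchIndex(posts: Post[]): Map<string, string> {
  if (posts !== lastPosts) {
    lastPosts = posts;
    lastIndex = new Map(posts.map((p) => [p.id, p.title.toLowerCase()]));
  }
  return lastIndex!;
}
```

In React this is the same idea as useMemo keyed on posts.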
We also made the loading posture visible by rendering data-boundary-status and data-streaming keys.
It made it possible to correlate LCP spikes with "which boundary was pending".
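A sketch of folding that into the vitals payload, assuming each boundary reports an id and status (names and statuses are ours):

```ts
type BoundaryStatus = { id: string; status: 'pending' | 'streaming' | 'ready' };

// Sketch: compress boundary states into one compact string for the vitals
// payload, so an LCP spike can be read against "which boundary was pending".
function boundarySnapshot(boundaries: BoundaryStatus[]): string {
  return boundaries.map((b) => `${b.id}:${b.status}`).join(',');
}
```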
A misunderstanding I had: I thought vitals were all about optimization.
They're also about *stability* (layout, input responsiveness) which maps nicely to route contracts and evidence.
We treat regressions as product decisions first: did we add a new overlay, new animation, new panel?
Then engineering decides whether to change render strategy or caching posture.
If you want to keep the team honest, render a "perf evidence" line in debug builds: LCP candidate id, last INP event, etc.
Otherwise perf work turns into vibes.
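A minimal formatter for such a debug line (field names hypothetical; in a real app the LCP candidate id and last INP event would be fed in from PerformanceObserver entries):

```ts
type PerfEvidence = { lcpId?: string; lastInp?: string; inpMs?: number };

// Sketch: a compact, copy-pastable evidence line for debug builds.
function perfEvidenceLine(e: PerfEvidence): string {
  const lcp = e.lcpId ? `lcp:${e.lcpId}` : 'lcp:?';
  const inp = e.lastInp ? `inp:${e.lastInp}(${e.inpMs ?? '?'}ms)` : 'inp:?';
  return `${lcp} ${inp}`;
}
```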
Docs tie-in: vitals work got easier when we revisited the rendering docs and chose a posture per route.
Related: Performance and Rendering (Best Practices) and Testing and Debugging.
We improved LCP by moving above-the-fold content into a calmer surface and deferring widgets behind boundaries.
The key was making the boundary plan visible so you could see what would load later.
Counterpoint: logging too much context can become privacy-sensitive.
We only log posture keys (panel/overlay/status), not user content.
Same. We treat posture keys like safe metadata and keep everything else out.
The point is reproducibility, not surveillance.
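One way to enforce that: an explicit allowlist at the reporting boundary, so only posture keys can ever leave the client. A sketch, with key names as our assumptions:

```ts
// Sketch: strip everything except known-safe posture keys before logging.
const SAFE_KEYS = ['route', 'panel', 'overlay', 'status', 'cache'] as const;

function sanitizeCtx(ctx: Record<string, unknown>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const k of SAFE_KEYS) {
    const v = ctx[k];
    if (typeof v === 'string') out[k] = v;
  }
  return out;
}
```

An allowlist fails closed: a new field someone adds to ctx stays out of the payload until it's deliberately declared safe.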
Our first stable CLS fix was reserving space for async panels (like keeping the inspector width stable).
It made the app feel calmer even before the metrics improved.
We added a derived layoutVersion evidence key that changes when we ship layout-affecting UI.
It helped correlate regressions with deployments quickly.
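A sketch of deriving such a key, assuming layout-affecting decisions are tracked as flags (flag names hypothetical); the key changes exactly when shipped layout decisions change:

```ts
// Sketch: derive layoutVersion from the set of active layout-affecting flags.
function layoutVersion(flags: Record<string, boolean>): string {
  const active = Object.keys(flags).filter((k) => flags[k]).sort();
  return active.length > 0 ? active.join('+') : 'base';
}
```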
INP got better when we stopped doing expensive JSON parsing on the main thread during input.
We pushed parsing into a boundary/mutation step and stored a derived ready-to-render structure.
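The shape of that, sketched with hypothetical names: parse once in the boundary/mutation step and store a trimmed, ready-to-render structure, so input handlers never touch raw JSON:

```ts
type ReadyPost = { id: string; title: string };

// Sketch: one parse per fetch/mutation, keeping only what rendering needs.
function toReady(raw: string): ReadyPost[] {
  const parsed = JSON.parse(raw) as Array<{ id: string; title: string }>;
  return parsed.map((p) => ({ id: p.id, title: p.title }));
}
```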
A misunderstanding I see: teams chase LCP when the user complaints are actually INP (sluggish input).
web.dev does a good job describing the difference, but you still need route context to act on it.
The moment vitals became a route-story problem instead of a component problem, our perf work got a lot more effective.
If your app has evidence keys, consider including them in perf reports and screenshots.
It makes the "what happened" question much easier than trying to infer it from timing alone.