web.dev: Web Vitals - discussion (2022-07-16)
For teams that measure Web Vitals: how do you keep the metrics from becoming a dashboard war instead of something you can act on in code?
What do you instrument in the app (LCP/INP/CLS) and how do you tie it back to specific route surfaces? Do you render any performance signals (render counts, cache lanes, last interaction) so investigations start from UI evidence? And what's your threshold for when optimizing a metric is worth product tradeoffs?
Comments (10)
The only time vitals helped was when we could map them to a specific surface.
"Our INP is bad" isn't actionable.
We made progress once we treated perf like signals: what rendered, why, and what lane we were in.
If the app can render a minimal audit trail, you stop guessing and start fixing.
Related: Performance and Rendering Best Practices and Performance Audit Trails (Deep Dive #6).
We started capturing perf entries and writing a tiny signal during investigations:
```ts
new PerformanceObserver((list) => {
  for (const e of list.getEntries()) {
    // setPerfSignal is our app helper that renders the entry as a UI signal
    setPerfSignal({ name: e.name, start: e.startTime, dur: e.duration });
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```
Not as a permanent feature, but it made it possible to correlate "LCP got worse" with what the route actually rendered.
Counterpoint: chasing metrics can make UX worse (skeletons everywhere, delayed interactions).
We optimize for user-perceived calmness first and treat vitals as a constraint, not the goal.
Agree. The best perf work we did was reducing jitter during route transitions, not shaving 20ms off LCP.
Metrics were useful only when they pointed to a specific surface regression.
Yep. If it turns into KPI theater, the team starts gaming the metric instead of fixing the product.
INP improved for us when we stopped doing expensive derived work on every keystroke and started committing at boundaries.
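A minimal sketch of that pattern, assuming names like `BoundaryCommitter` (illustrative, not a real API): keystrokes only record the latest input, and the expensive derived computation runs once when you commit at a boundary (blur, pause, explicit action).

```ts
// Hypothetical sketch: keep the per-keystroke path cheap, run expensive
// derived work only at commit boundaries. BoundaryCommitter is an
// illustrative name, not a library API.
class BoundaryCommitter<T> {
  private latest = '';
  private computeCount = 0;

  constructor(private compute: (input: string) => T) {}

  // Called on every keystroke: just records input, no derived work.
  type(input: string): void {
    this.latest = input;
  }

  // Called at a boundary: the expensive computation happens here only.
  commit(): T {
    this.computeCount++;
    return this.compute(this.latest);
  }

  get computations(): number {
    return this.computeCount;
  }
}
```

The point is that three keystrokes followed by one commit cost one computation, not three, which is usually where the INP budget goes.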
We also had a ton of "perf regressions" that were actually cache posture changes.
Once we rendered cache lane signals, it was obvious when someone flipped from stale-ok to bypass and made a screen feel snappy but noisy.
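A sketch of what rendering a cache lane signal could look like, assuming a `CacheLane` type and `laneSignal` helper (both hypothetical names): the screen labels which lane served it, so a flip from stale-ok to bypass is visible rather than inferred from timing.

```ts
// Hypothetical sketch: make the cache posture a screen used visible in
// the UI. CacheLane and laneSignal are illustrative, not a real API.
type CacheLane = 'stale-ok' | 'revalidate' | 'bypass';

function laneSignal(lane: CacheLane): string {
  switch (lane) {
    case 'stale-ok':
      return 'cache: stale-ok (fast, may show old data)';
    case 'revalidate':
      return 'cache: revalidate (fast, refreshes in background)';
    case 'bypass':
      return 'cache: bypass (always fresh, hits origin every time)';
  }
}
```

Rendered next to the data it describes, this makes "snappy but noisy" a readable state instead of a mystery regression.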
If you can't reproduce a perf complaint from a screenshot + signals, you're going to spend days re-running and guessing.
We used route-level perf posture keys during investigations (like perf=profile), then turned it off.
Making it route state meant anyone could reproduce it by sharing the URL.
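A sketch of reading that posture from the route, assuming the `perf=profile` query key mentioned above (the helper name `perfPosture` is illustrative): because the toggle lives in the URL, sharing the link reproduces the investigation state.

```ts
// Hypothetical sketch: derive perf posture from route state so a shared
// URL reproduces it. perfPosture is an illustrative name; the standard
// URL and URLSearchParams APIs do the parsing.
function perfPosture(href: string): 'profile' | 'off' {
  const params = new URL(href).searchParams;
  return params.get('perf') === 'profile' ? 'profile' : 'off';
}
```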
Biggest win: treat perf fixes like contract changes with signals. Otherwise people optimize and you can't tell what changed.
Measure, but always attach it to a surface and a hypothesis. Dashboards without a link to code are just anxiety.