React Forum
Blog & Guide Responses
Jul 16, 2022 · 04:00 AM

web.dev: Web Vitals - discussion (2022-07-16)

Started by Camille Duarte
Tags: external · web.dev · performance · web vitals · INP · LCP

For teams that measure Web Vitals: how do you keep the metrics from becoming a dashboard war instead of something you can act on in code?

Read the guide

What do you instrument in the app (LCP/INP/CLS) and how do you tie it back to specific route surfaces? Do you render any performance signals (render counts, cache lanes, last interaction) so investigations start from UI evidence? And what's your threshold for when optimizing a metric is worth product tradeoffs?
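For concreteness, the kind of wiring I mean (a sketch assuming the web-vitals package's onLCP/onINP/onCLS callbacks; the `/vitals` endpoint and `currentRoute` helper are placeholders for whatever your app uses):

```ts
import { onCLS, onINP, onLCP, type Metric } from 'web-vitals';

// Placeholder: however your router exposes the current surface.
const currentRoute = () => window.location.pathname;

// Tag every metric with the route it was observed on, so "INP is bad"
// becomes "INP is bad on /search".
function report(metric: Metric) {
  navigator.sendBeacon(
    '/vitals', // placeholder collection endpoint
    JSON.stringify({ name: metric.name, value: metric.value, route: currentRoute() })
  );
}

onLCP(report);
onINP(report);
onCLS(report);
```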

Comments (12)

Theo Bennett
Jul 16, 2022 · 04:06 AM

The only time vitals helped was when we could map them to a specific surface.

"Our INP is bad" isn't actionable.

Dmitri Kline
Jul 16, 2022 · 04:14 AM

We made progress once we treated perf like signals: what rendered, why, and what lane we were in.

If the app can render a minimal audit trail, you stop guessing and start fixing.
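The audit trail can be as simple as a dev-only panel. A sketch (all names hypothetical; `signals` is whatever shape your app records):

```tsx
// Dev-only panel: render counts, cache lane, last interaction, etc.
type PerfSignal = { label: string; value: string };

function PerfSignalPanel({ signals }: { signals: PerfSignal[] }) {
  if (process.env.NODE_ENV === 'production') return null;
  return (
    <aside style={{ position: 'fixed', bottom: 8, right: 8, fontSize: 12 }}>
      {signals.map((s) => (
        <div key={s.label}>{s.label}: {s.value}</div>
      ))}
    </aside>
  );
}
```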

Related: Performance and Rendering Best Practices and Performance Audit Trails (Deep Dive #6).

Salma Qureshi
Jul 16, 2022 · 04:22 AM

We started capturing perf entries and writing a tiny signal during investigations:

```ts
// Surface LCP entries as an in-app signal during investigations.
// Note: LCP entries report a duration of 0, so startTime is the
// number that matters; setPerfSignal is our app's signal writer.
new PerformanceObserver((list) => {
  for (const e of list.getEntries()) {
    setPerfSignal({ name: e.name, start: e.startTime, dur: e.duration });
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```

Not as a permanent feature, but it made it possible to correlate "LCP got worse" with what the route actually rendered.
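The same pattern extends to slow interactions via the Event Timing API (a sketch; durationThreshold filters out fast events, and setPerfSignal is the same hypothetical writer as above):

```ts
// Companion observer for slow interactions.
new PerformanceObserver((list) => {
  for (const e of list.getEntries() as PerformanceEventTiming[]) {
    // duration covers input delay + processing + time to next paint.
    setPerfSignal({ name: e.name, start: e.startTime, dur: e.duration });
  }
}).observe({ type: 'event', durationThreshold: 40, buffered: true });
```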

Hana Saito
Jul 16, 2022 · 04:31 AM

Counterpoint: chasing metrics can make UX worse (skeletons everywhere, delayed interactions).

We optimize for user-perceived calmness first and treat vitals as a constraint, not the goal.

Camille Duarte
Jul 16, 2022 · 04:36 AM

Agree. The best perf work we did was reducing jitter during route transitions, not shaving 20ms off LCP.

Metrics were useful only when they pointed to a specific surface regression.

Hana Saito
Jul 16, 2022 · 04:39 AM

Yep. If it turns into KPI theater, the team starts gaming the metric instead of fixing the product.

Maya Henson
Jul 16, 2022 · 04:44 AM

INP improved for us when we stopped doing expensive derived work on every keystroke and started committing at boundaries.
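Concretely, React 18's useDeferredValue is one way to commit at a boundary instead of per keystroke (a sketch, assuming the expensive part is derived filtering):

```tsx
import { useDeferredValue, useMemo, useState } from 'react';

function Search({ items }: { items: string[] }) {
  const [query, setQuery] = useState('');
  // The input commits on every keystroke and stays responsive;
  // the expensive derivation tracks the deferred value and re-runs
  // only when React has room to commit it.
  const deferredQuery = useDeferredValue(query);
  const results = useMemo(
    () => items.filter((item) => item.includes(deferredQuery)),
    [items, deferredQuery]
  );
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>{results.map((r) => <li key={r}>{r}</li>)}</ul>
    </>
  );
}
```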

Keira Santos
Jul 16, 2022 · 04:52 AM

We also had a ton of "perf regressions" that were actually cache posture changes.

Once we rendered cache lane signals, it was obvious when someone flipped from stale-ok to bypass and made a screen feel snappy but noisy.
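Roughly what we render (all names hypothetical; the lane comes from whatever posture your data layer reports per query):

```tsx
// 'fresh' | 'stale-ok' | 'bypass': whatever postures your cache supports.
type CacheLane = 'fresh' | 'stale-ok' | 'bypass';

function CacheLaneBadge({ lane }: { lane: CacheLane }) {
  // Making the lane visible is what surfaced the stale-ok -> bypass flip.
  return <span data-lane={lane}>cache: {lane}</span>;
}
```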

Amina Farouk
Jul 16, 2022 · 05:03 AM

If you can't reproduce a perf complaint from a screenshot + signals, you're going to spend days re-running and guessing.

Jules Kwon
Jul 16, 2022 · 05:15 AM

We used route-level perf posture keys during investigations (like perf=profile), then turned it off.

Making it route state meant anyone could reproduce it by sharing the URL.
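Mechanically it was just a query-param gate (a sketch; perf=profile is the key we happened to use, and the hook point is whatever instrumentation the investigation needs):

```ts
// Route state, not build state: anyone can reproduce by sharing the URL.
const params = new URLSearchParams(window.location.search);

if (params.get('perf') === 'profile') {
  // Hypothetical hook point: enable marks/measures, the signal panel, etc.
  performance.mark('perf-profile-enabled');
}
```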

Inez Park
Jul 16, 2022 · 05:27 AM

Biggest win: treat perf fixes like contract changes, backed by signals. Otherwise people optimize quietly and you can't tell what changed.

Benji Rios
Jul 16, 2022 · 05:41 AM

Measure, but always attach it to a surface and a hypothesis. Dashboards without a link to code are just anxiety.