React Forum
Blog & Guide Responses
Jun 28, 2022 · 04:00 PM

[Deep Dive #12] Testing Signals - discussion

Started by Talia Moss
Tags: blog, testing-signals, route-flow, snapshots, render-logs

Do testing signals (snapshots, render logs, and route-flow runs that are intentionally visible) actually reduce flakiness, or do they turn into a second UI surface you have to maintain?


What do you treat as a stable test signal (snapshots, render logs, render-tree inspection, route-flow runs)? How do you keep signals from leaking into product UI, while still being visible in screenshots and tests? And what's your cutoff for when a signal becomes too noisy to be useful?

Comments (13)

Evan Brooks
Jun 28, 2022 · 04:05 PM

Is this basically adding data-testid everywhere, or something different?

I'm not sure what makes a signal better than normal assertions.

Talia Moss
Jun 28, 2022 · 04:10 PM

Different. It's less tagging everything and more rendering the contract.

If the app can render a snapshot and keep a render log, tests can assert behavior without mocking internals.
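A minimal sketch of what keeping a render log could look like, in the spirit of this idea. All names here (`renderLog`, `logSignal`) are made up for illustration, not a real React API:

```typescript
// Illustrative in-memory render log: the app appends one entry per
// meaningful runtime decision, and tests assert on the log's contents
// instead of mocking internals or racing timing.
type LogEntry = { signal: string; detail: string };

const renderLog: LogEntry[] = [];

function logSignal(signal: string, detail: string): void {
  renderLog.push({ signal, detail });
}

// Somewhere in the app: record decisions as they happen.
logSignal("nav", "/docs/getting-started");
logSignal("fetch", "docs:ok");

// A test can then assert on recorded state, not on timing.
const navEntries = renderLog.filter((e) => e.signal === "nav");
console.log(navEntries.length); // 1
```

The key property is that the log is part of the app's observable behavior, so tests and the app share one contract.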

Evan Brooks
Jun 28, 2022 · 04:13 PM

Ok, so it's making runtime decisions observable, not just adding selectors.

Dmitri Kline
Jun 28, 2022 · 04:18 PM

Signals reduced flake for us because they moved assertions from timing to state.

Instead of waiting for a spinner to disappear, we assert on a stable snapshot/log signal and move on.

This pairs nicely with Testing and Debugging and the render-visible framing from Performance Audit Trails (Deep Dive #6).

Salma Qureshi
Jun 28, 2022 · 04:26 PM

A concrete example: route-flow runs + snapshots give you a stable baseline without tying tests to DOM shape.

```ts
// Illustrative only: createSnapshot and runRouteFlow are hypothetical
// test helpers, not React exports.
import { createSnapshot, runRouteFlow } from "./test-signals";

runRouteFlow("/docs/getting-started");
createSnapshot("route");
```

The UI can change, but the contract signals stay stable unless you intentionally change behavior.

Keira Santos
Jun 28, 2022 · 04:35 PM

Counterpoint: signals can become a crutch where you stop testing user-visible behavior.

We require at least one real UI assertion per flow (text, focus, navigation) and then we use signals to disambiguate timing.
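That policy can be sketched as a plain check. Everything here (`FlowResult`, `checkFlow`, the signal names) is hypothetical, not a real testing API:

```typescript
// Sketch of the policy: every flow check includes at least one
// user-visible assertion; signal assertions only disambiguate timing.
type FlowResult = {
  visibleText: string[];           // what the user actually sees
  signals: Record<string, string>; // contract signals recorded by the app
};

function checkFlow(result: FlowResult): boolean {
  // Real UI assertion first: the user-visible outcome.
  const uiOk = result.visibleText.includes("Getting Started");
  // Signal assertion second: explains why the UI settled.
  const signalOk = result.signals["nav"] === "/docs/getting-started";
  return uiOk && signalOk;
}

const ok = checkFlow({
  visibleText: ["Getting Started"],
  signals: { nav: "/docs/getting-started" },
});
console.log(ok); // true
```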

Theo Bennett
Jun 28, 2022 · 04:40 PM

Agree. Signals should explain, not replace.

If signals say things are correct but the user sees broken UI, you've just moved the bug.

Hana Saito
Jun 28, 2022 · 04:48 PM

We kept signals from leaking by rendering them in a dedicated debug surface (snapshot + render log) that is visually subtle but present in screenshots.
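One way to sketch such a debug surface is a compact formatter that keeps the signals subtle but screenshot-visible. The helper name and signal keys are illustrative, not the poster's actual implementation:

```typescript
// Format signals into one compact line for a dedicated debug surface:
// small enough to stay out of the product UI's way, but present in
// screenshots and readable by tests.
function debugSurface(signals: Record<string, string>): string {
  return Object.entries(signals)
    .map(([k, v]) => `${k}=${v}`)
    .join(" | ");
}

const line = debugSurface({ snapshot: "route:v2", renders: "3" });
console.log(line); // snapshot=route:v2 | renders=3
```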

Maya Henson
Jun 28, 2022 · 04:56 PM

The best signal is one you already want in prod for support.

If support benefits, it's less likely to get deleted as test-only clutter.

Amina Farouk
Jun 28, 2022 · 05:09 PM

We hit a case where tests were flaky because we were asserting after a re-render that didn't always happen.

Once we added a render trail log keyed by surface, we could assert deterministically: form rendered 3 times after commit.
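A sketch of a render trail keyed by surface, assuming the simplest possible shape (a counter per surface; names are illustrative):

```typescript
// Render trail keyed by surface: each render of a surface increments
// its counter, so a test can assert an exact count deterministically
// instead of racing a re-render that may or may not happen.
const renderTrail = new Map<string, number>();

function recordRender(surface: string): void {
  renderTrail.set(surface, (renderTrail.get(surface) ?? 0) + 1);
}

// Simulate three renders of the form surface after commit.
recordRender("form");
recordRender("form");
recordRender("form");

console.log(renderTrail.get("form")); // 3
```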

Jules Kwon
Jun 28, 2022 · 05:21 PM

Signals are only good if they stay stable. Version them like contracts and keep them boring.

Inez Park
Jun 28, 2022 · 05:36 PM

Noisy signals are the same as no signals. We keep ours to a handful: nav, fetch, guard, contractVersion.
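One way to keep the vocabulary that small is to encode it in a union type, so ad-hoc signals fail to compile. A sketch using the names from this comment (the `emit` helper is made up):

```typescript
// A deliberately small signal vocabulary: the union type rejects any
// signal name outside the agreed set at compile time, which keeps the
// surface from quietly growing.
type SignalName = "nav" | "fetch" | "guard" | "contractVersion";

const signals = new Map<SignalName, string>();

function emit(name: SignalName, value: string): void {
  signals.set(name, value);
}

emit("contractVersion", "v2");
emit("nav", "/docs");
// emit("spinner", "on"); // would not compile: not in SignalName

console.log(signals.size); // 2
```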

Grace McCoy
Jun 28, 2022 · 05:49 PM

If you can't explain a behavior from a screenshot + a snapshot/log signal, you'll end up writing flaky waits. Signals are the antidote.