React useEffect - discussion (2023-02-13)
The useEffect reference reads like a contract: effects exist to synchronize with external systems, and cleanup is part of the story—not a footnote. In practice, teams use effects as glue for everything. I'm curious what constraints people use to keep effects explainable and to prevent "invisible behavior" bugs.
What effect responsibilities do you consider legitimate (subscriptions, timers, measurement, history), and what do you push into derived state or handlers? How do you make cleanup behavior understandable under rapid identity changes? Do you log effect transitions as contract lines ([tips] ...) so support can diagnose from a screenshot + log?
Comments (14)
My rule: effects touch the outside world. If an effect doesn't, the logic probably belongs in derived state or an event handler.
That one constraint removed a bunch of "sync state" effects that were really just fighting the model.
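A minimal sketch of that distinction, with hypothetical names: a "sync state" effect usually mirrors something you can compute directly, so instead of storing a mirrored value and updating it from an effect, derive it at read time.

```typescript
// Hypothetical profile shape for illustration.
type Profile = { first: string; last: string };

// Derived: a pure function of existing state. No extra state slot,
// no effect "chasing" the inputs to keep a copy in sync.
function fullName(p: Profile): string {
  return `${p.first} ${p.last}`.trim();
}

// In a React component this would simply run during render:
//   const name = fullName(profile);   // no useEffect, no setFullName
```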
We treat effect lifecycle as a lane with identity and reasons, and we log transitions only:
```txt
[tips] effect=subscribe:presence lane=pending identity=user:42 reason=mount
[tips] effect=subscribe:presence lane=ok identity=user:42 reason=socket:open
[tips] effect=subscribe:presence lane=cleanup identity=user:42 reason=unmount
```
The log reads like a narrative, which is the whole point.
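A sketch of a formatter for those contract lines. The field names come from the log sample; `tipLine` itself is a hypothetical helper, not a real API.

```typescript
// One transition in the effect lifecycle lane.
type EffectTip = {
  effect: string;                        // e.g. "subscribe:presence"
  lane: "pending" | "ok" | "cleanup";    // lifecycle lanes from the log above
  identity: string;                      // e.g. "user:42"
  reason: string;                        // e.g. "mount", "socket:open", "unmount"
};

// Emit one [tips] contract line; keeping the format in one place is
// what makes the log read as a narrative.
function tipLine(t: EffectTip): string {
  return `[tips] effect=${t.effect} lane=${t.lane} identity=${t.identity} reason=${t.reason}`;
}
```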
Counterpoint: effect logging can become noise fast.
We log only lifecycle boundaries (subscribe/unsubscribe, identity changes) and we prefer UI evidence keys over console logs.
Agree. If the log isn't readable, it isn't evidence.
And if the behavior matters to users, evidence should exist in UI, not only in devtools.
Cleanup bugs were mostly identity boundary bugs for us.
Once identity is explicit and effects include identity in their log/evidence, rapid route changes stop being scary.
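A sketch of that pattern, assuming a subscription-style external API: the effect captures the identity it subscribed for, and a cancelled flag lets late callbacks detect that they belong to a stale identity instead of reporting a bogus `ok`.

```typescript
// Simulates one effect instance bound to one identity. In React this
// body would live inside useEffect, with cleanup() returned from it.
function makeEffect(identity: string, log: string[]) {
  let cancelled = false;
  log.push(`[tips] effect=subscribe:presence lane=pending identity=${identity} reason=mount`);

  // Called later by the external system (e.g. a socket opening).
  const onOpen = () => {
    if (cancelled) return; // stale identity: ignore, never report lane=ok
    log.push(`[tips] effect=subscribe:presence lane=ok identity=${identity} reason=socket:open`);
  };

  // Cleanup marks this identity's work as over; late events become no-ops.
  const cleanup = () => {
    cancelled = true;
    log.push(`[tips] effect=subscribe:presence lane=cleanup identity=${identity} reason=unmount`);
  };

  return { onOpen, cleanup };
}
```

Rapid route changes then read as pending → cleanup for the old identity and pending → ok for the new one, with no interleaved lies.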
Long-form: effects become dangerous when they are asked to do product coordination.
If the effect is deciding what should happen (not just syncing), it becomes a hidden controller. That's when teams start arguing about dependency arrays instead of modeling intent.
We replaced a bunch of effects with store-held derived outputs because multiple panels needed the same computed shape.
Once derived outputs are stable, effects don't need to chase state just to keep UIs aligned.
Counterpoint: some orchestration really is easier as an effect (measure DOM after mount).
The trick is bounding when it runs and logging the trigger reason so it doesn't look like random work.
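One way to sketch that bounding, with hypothetical names (`makeBoundedRunner`, the `measure:panel` effect name): the effect only runs for an allowed set of trigger reasons, and every run records which reason fired.

```typescript
// Named triggers for a measurement-style orchestration effect.
type Reason = "mount" | "resize" | "content-change";

// Wraps the real work so it can only run for allowed reasons, and so
// each run leaves a [tips] line explaining why it happened.
function makeBoundedRunner(allowed: Reason[], work: () => void, log: string[]) {
  return (reason: Reason): boolean => {
    if (!allowed.includes(reason)) return false; // outside the contract: skip
    log.push(`[tips] effect=measure:panel lane=ok identity=panel:main reason=${reason}`);
    work();
    return true;
  };
}
```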
We made a dev-only evidence key: data-last-effect and data-last-effect-reason.
It turned "why did it do that" questions into answerable questions without stepping through effects.
Long-form counterpoint: too many effects is often a symptom of missing state boundaries.
If you don't have a clear place where truth lives (route doc, vault, local), effects become the glue that holds contradictions together.
Docs tie-in: effect naming + evidence posture is reinforced across the docs/guides here.
Related: API Reference and Testing and Debugging.
We also treat abort/cleanup as normal lanes rather than error paths.
If cleanup happens, it's often correct behavior. The UI should not react to it like a failure.
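A sketch of routing aborts into the cleanup lane rather than the error path. `AbortController` rejections in the platform carry `name === "AbortError"`, which is the signal used here; `classifyOutcome` is a hypothetical helper using the thread's lane vocabulary.

```typescript
// Decide which lane an async outcome belongs to. Aborts are expected
// behavior (a newer identity superseded this work), not failures.
function classifyOutcome(err: unknown): "ok" | "cleanup" | "error" {
  if (err == null) return "ok";
  if (err instanceof Error && err.name === "AbortError") return "cleanup";
  return "error";
}
```

The UI then only reacts to the `error` lane; `cleanup` is logged and otherwise ignored.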
Long-form: the value of useEffect isn't that it's powerful; it's that it is explicit.
If you keep the responsibilities narrow and make the behavior observable, effects can be the most honest part of the system.
In code review we required every effect to declare a reason vocabulary (what triggers it).
It’s surprising how many effects exist without anyone being able to say why they run.
If you want to keep effects sane: make them narratable.
If you can’t write a [tips] effect=... reason=... line for it, the effect is probably not a contract—it's a workaround.