[Guide] Build a Whiteboard (Hidden Effects) - implementation notes
The Whiteboard guide argues that "hidden effects" can keep the route shell calm: you keep the shell declarative, but you still have orchestration happening in named, bounded effect surfaces. Pair that with a render proxy and route documents, then store a derived selection contract so the UI stays coherent across panels and tools.
Did hidden effects actually make the shell cleaner, or did it just move complexity into harder-to-see places? How did you structure the render proxy so it stayed predictable and testable? What did you store as derived selection (ids, bounding boxes, summary strings), and what did you render as evidence? How did you keep tool state and selection state from drifting under rapid pointer moves?
Comments (22)
Hidden effects only worked for us because we treated them as first-class contracts, not as "secret code".
They had names, lanes, and reasons, and we rendered the last transition as evidence in debug mode.
The key is: hidden effect doesn't mean invisible. It means "not in the shell tree". We logged hidden effects like this (again, the shape matters):
```txt
[tips] hiddenEffect=selection:sync lane=pending reason=pointer:move selection=ann_42
[tips] hiddenEffect=selection:sync lane=ok reason=derive:selection summary="1 selected"
```
Then we render `data-selection-summary="1 selected"` so screenshots match the log.
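A minimal sketch of what that logging contract can look like. The names (`EffectRecord`, `formatTip`, `evidenceAttrs`) are ours, not from the guide; the point is that one record drives both the `[tips]` line and the DOM evidence attribute.

```typescript
type EffectRecord = {
  effect: string;                     // e.g. "selection:sync"
  lane: "pending" | "ok" | "error";   // which lane the effect is in
  reason: string;                     // e.g. "pointer:move"
  payload?: Record<string, string>;   // extra key=value pairs for the log line
};

// Values containing spaces get quoted so lines match the examples above.
const fmtValue = (v: string) => (v.includes(" ") ? `"${v}"` : v);

function formatTip(rec: EffectRecord): string {
  const extra = Object.entries(rec.payload ?? {})
    .map(([k, v]) => ` ${k}=${fmtValue(v)}`)
    .join("");
  return `[tips] hiddenEffect=${rec.effect} lane=${rec.lane} reason=${rec.reason}${extra}`;
}

// Evidence side: the same record produces data attributes, so a screenshot of
// the UI and the log line can be compared field by field.
function evidenceAttrs(rec: EffectRecord): Record<string, string> {
  const attrs: Record<string, string> = { "data-last-effect": rec.effect };
  if (rec.payload?.summary) attrs["data-selection-summary"] = rec.payload.summary;
  return attrs;
}
```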
Counterpoint: hidden effects can make debugging harder because the code isn't near the UI that it affects.
We had to be strict about scopes: each hidden effect owns one domain (selection, tool, viewport), not everything.
Yes. The moment a hidden effect becomes a coordinator of coordinators, you're back to invisible architecture.
Scopes + names + evidence keep it from turning into a blob.
Render proxy helped because it forced us to treat rendering as a pure-ish projection of documents.
Once the proxy exists, tools and panels stop reading random mutable objects directly.
We stored derived selection as (1) list of ids, (2) bounding box, (3) a compact summary string.
The summary string was surprisingly important because it made support/debugging human-readable.
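A sketch of that derived shape as a pure function, assuming rectangle-style annotations (the `Ann` fields are our guess at the document schema). Because it's pure, re-running it after any mutation is safe, which also matters for the idempotence point later in the thread.

```typescript
type Ann = { id: string; x: number; y: number; w: number; h: number };

type DerivedSelection = {
  ids: string[];
  bbox: { x: number; y: number; w: number; h: number } | null;
  summary: string; // compact and human-readable, e.g. "2 selected"
};

function deriveSelection(anns: Ann[], selectedIds: Set<string>): DerivedSelection {
  const picked = anns.filter((a) => selectedIds.has(a.id));
  if (picked.length === 0) return { ids: [], bbox: null, summary: "0 selected" };
  // Union bounding box over all selected annotations.
  const x1 = Math.min(...picked.map((a) => a.x));
  const y1 = Math.min(...picked.map((a) => a.y));
  const x2 = Math.max(...picked.map((a) => a.x + a.w));
  const y2 = Math.max(...picked.map((a) => a.y + a.h));
  return {
    ids: picked.map((a) => a.id),
    bbox: { x: x1, y: y1, w: x2 - x1, h: y2 - y1 },
    summary: `${picked.length} selected`,
  };
}
```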
The drift under pointer moves is real. Our fix was to treat pointer events as inputs, not as state.
Pointer updates write into a route document, then a single derive pass produces selection + evidence.
We implemented a "selection derive gate" so selection doesn't churn on every pixel, but the contract stays explicit. And we logged gate decisions because otherwise it feels laggy and mysterious:
```txt
[tips] gate=deriveSelection allowed=false reason=debounce remainingMs=72
[tips] gate=deriveSelection allowed=true reason=debounce:fire
```
Counterpoint: storing derived selection can get stale if you allow multiple mutation paths.
We had to enforce a single write helper that always triggers derive after any selection mutation.
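A sketch of the gate plus the single write helper, under our assumptions (a debounce-style gate with an injected clock; `makeDeriveGate` and `writeSelection` are illustrative names, not the guide's API). The gate returns a decision object so every allow/deny can be logged in the shape shown above.

```typescript
type GateDecision = { allowed: boolean; reason: string; remainingMs?: number };

// The clock is injected so the gate is testable without real timers.
function makeDeriveGate(minIntervalMs: number, now: () => number) {
  let lastFire = -Infinity;
  return function check(): GateDecision {
    const elapsed = now() - lastFire;
    if (elapsed < minIntervalMs) {
      return { allowed: false, reason: "debounce", remainingMs: minIntervalMs - elapsed };
    }
    lastFire = now();
    return { allowed: true, reason: "debounce:fire" };
  };
}

// Single mutation path: every selection write goes through this helper, which
// always triggers derive, so the derived contract cannot silently go stale.
function writeSelection(
  doc: { selectedIds: Set<string> },
  ids: string[],
  derive: () => void,
): void {
  doc.selectedIds = new Set(ids);
  derive();
}
```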
Long-form counterpoint: hidden effects can become an excuse to hide imperative code.
If your architecture relies on hidden effects to keep the shell clean, but you never render evidence, you're trading readability for aesthetics.
We required every hidden effect to produce either a log line or a UI evidence key (preferably both).
Docs tie-in: the patterns + effects docs make the case for naming effects as APIs.
Related: Patterns Library and API Reference.
Render proxy became testable once we treated it like a compiler: doc inputs -> render outputs.
We wrote small tests that assert the proxy output shape, not canvas pixels.
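A sketch of that "compiler" framing: a pure projection from a route document to a sorted render list. The `Doc` and `RenderItem` shapes are our guesses; the testing idea is exactly what the comment describes, asserting the output structure rather than pixels.

```typescript
type Doc = {
  anns: { id: string; kind: "rect" | "ink"; z: number }[];
  selectedIds: string[];
};
type RenderItem = { id: string; kind: string; selected: boolean };

// Pure projection: same doc in, same render list out. No canvas, no mutation.
function renderProxy(doc: Doc): RenderItem[] {
  const sel = new Set(doc.selectedIds);
  return [...doc.anns]
    .sort((a, b) => a.z - b.z) // stable z-order, lowest first
    .map((a) => ({ id: a.id, kind: a.kind, selected: sel.has(a.id) }));
}
```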
The guide's strongest point is that selection is a product contract, not just UI state.
If selection isn't stable and visible, everything else feels broken (toolbars, side panels, keyboard shortcuts).
We used a hidden effect for viewport sync (pan/zoom) but we made it produce a doc string so it was reviewable:
```txt
[tips] hiddenEffect=viewport:sync reason=gesture:pinch scale=1.25 panX=120 panY=48
```
Tool state drifted less once we made tool transitions explicit intent objects (`{ type: 'tool:set', tool: 'draw' }`).
Then selection updates could reason about tool intent instead of guessing from event ordering.
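A minimal sketch of those intent objects as a typed transition, using the `{ type: 'tool:set' }` shape from the comment (the reducer name and state shape are ours). Keeping the last intent around is what lets selection logic reason about it instead of guessing from event order.

```typescript
type ToolIntent = { type: "tool:set"; tool: "select" | "draw" | "pan" };
type ToolState = { tool: ToolIntent["tool"]; lastIntent: ToolIntent | null };

// Explicit transition: the only way the active tool changes.
function applyToolIntent(state: ToolState, intent: ToolIntent): ToolState {
  return { tool: intent.tool, lastIntent: intent };
}
```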
We rendered evidence for: tool, selectionCount, viewport, and the last hidden effect name.
It made bug reports dramatically better because you can see which part of the system last moved.
Counterpoint: keeping the shell "clean" can hide legitimate UI responsibilities.
If the shell is the only place that can coordinate panels, maybe it's okay for it to coordinate a little (as long as it's explicit).
I think the guide's point is that coordination should be named and bounded, not necessarily absent.
Hidden effects are one way; route doc orchestration is another. Both can be explicit.
We also stored derived "selection fingerprint" so we can detect drift between panels.
If the fingerprint differs, we surface an evidence warning rather than silently diverging.
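One way that fingerprint could look, assuming a sorted-join scheme (our assumption, not the guide's): each panel computes it independently from its own view of the selection, and a mismatch becomes an evidence warning.

```typescript
// Order-independent fingerprint: same set of ids always yields the same string.
function selectionFingerprint(ids: string[]): string {
  return [...ids].sort().join("|");
}

function checkDrift(panelA: string[], panelB: string[]): { ok: boolean; warning?: string } {
  const fa = selectionFingerprint(panelA);
  const fb = selectionFingerprint(panelB);
  return fa === fb
    ? { ok: true }
    : { ok: false, warning: `selection drift: ${fa} != ${fb}` };
}
```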
A trick: treat selection derive as idempotent and safe to run repeatedly.
That way you can re-derive after any mutation without worrying about double-applying side effects.
Long-form argument: whiteboards are basically distributed state machines (canvas, side panel, toolbar, history).
Hidden effects can be the glue, but only if you can explain them as state machine transitions.
If your hidden effects don't correspond to transitions you can name, you're building a haunted house.
We added a dev-only "last 10 contract lines" overlay. It made pointer bugs much easier to reproduce.
The overlay was literally the tips log lines shown in UI.
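That overlay reduces to a tiny ring buffer over the same `[tips]` strings (a sketch; `makeTipsBuffer` is an illustrative name):

```typescript
// Keeps only the last `capacity` contract lines for the dev overlay.
function makeTipsBuffer(capacity = 10) {
  const lines: string[] = [];
  return {
    push(line: string) {
      lines.push(line);
      if (lines.length > capacity) lines.shift(); // drop the oldest
    },
    snapshot(): string[] {
      return [...lines];
    },
  };
}
```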
Counterpoint: a lot of people will read "hidden effect" and do side effects during render.
We had to be explicit in code review that hidden effects are still effects—they just live in a different layer.
If you're implementing this, start by writing down your effect names and what evidence each one produces.
That list becomes the architecture map for the whole whiteboard.
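That map can live in code as a declarative registry, so code review and runtime agree on it (the effect names and keys below are examples, not the guide's actual list):

```typescript
// Every hidden effect declared up front, with the evidence it must produce.
const effectContracts: Record<string, { evidenceKey: string; domain: string }> = {
  "selection:sync": { evidenceKey: "data-selection-summary", domain: "selection" },
  "tool:sync": { evidenceKey: "data-tool", domain: "tool" },
  "viewport:sync": { evidenceKey: "data-viewport", domain: "viewport" },
};

// Fail fast if an effect fires without a declared contract.
function assertContract(effect: string): { evidenceKey: string; domain: string } {
  const c = effectContracts[effect];
  if (!c) throw new Error(`hidden effect without contract: ${effect}`);
  return c;
}
```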