[Guide] Build a File Uploader (Client Input as Source) - implementation notes
The File Uploader guide is unapologetic about trusting client input flows: store tokens in a client store, treat client-selected file metadata as a contract, and use route redirects as the orchestration spine. It's a coherent workflow, but it raises a lot of questions about how you keep this safe and debuggable in production.
How did you design the token store so it stays consistent across navigation and refreshes? What derived upload summaries did you store (progress, last error, last intent), and what evidence did you render for them? How do you handle "client says it's done" vs "server says it's done" without confusing users? What role did route redirects play in your flow (upload -> details -> result), and how did you log them?
Comments (18)
Token store was the make-or-break part.
We treat it like a vault contract with a version and we render the version as evidence in debug mode.
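A minimal sketch of what a versioned token store could look like (the names `TokenRecord` and `TokenStore` are assumptions, not from the guide): the token is stored alongside a monotonically increasing version, so the UI can render the version as evidence and stale writes are easy to spot.

```typescript
// Hypothetical versioned token store: each rotation bumps the version,
// and the version is what gets rendered as evidence in debug mode.
type TokenRecord = { token: string; version: number };

class TokenStore {
  private record: TokenRecord | null = null;

  // Rotate to a new token; the version only ever moves forward.
  set(token: string): TokenRecord {
    const version = (this.record?.version ?? 0) + 1;
    this.record = { token, version };
    return this.record;
  }

  // Read the current record (null before the first rotation).
  get(): TokenRecord | null {
    return this.record;
  }
}
```

In debug mode the UI would then render something like `data-upload-token-v={record.version}`, matching the `token=tkn_7` style evidence in the logs below.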
We modeled upload state as lanes and logged transitions. It made triage so much easier:
```txt
[tips] upload id=f_19 lane=queued reason=user:select size=12.4MB
[tips] upload id=f_19 lane=uploading progress=0.12 token=tkn_7
[tips] upload id=f_19 lane=ok serverId=up_883 freshnessAt=2023-02-08T08:08Z
```
Then the UI renders data-upload-lane, data-upload-progress, and data-upload-token-v so screenshots tell the story.
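A sketch of the lane model described above, with transitions logged in the same shape as the snippets (the `Lane` union and `transition` helper are hypothetical names, not from the guide):

```typescript
// Hypothetical lane model: every transition emits one log line in the
// [tips] format above, and the returned state feeds data-upload-lane.
type Lane = "queued" | "uploading" | "ok" | "error" | "recovering";

interface UploadState {
  id: string;
  lane: Lane;
}

function transition(
  state: UploadState,
  lane: Lane,
  fields: Record<string, string> = {},
): UploadState {
  // Extra fields (progress, token, serverId, ...) ride along in the log line.
  const extras = Object.entries(fields)
    .map(([k, v]) => ` ${k}=${v}`)
    .join("");
  console.log(`[tips] upload id=${state.id} lane=${lane}${extras}`);
  return { ...state, lane };
}
```

Usage would look like `transition({ id: "f_19", lane: "queued" }, "uploading", { progress: "0.12" })`, which both logs the transition and returns the new state for rendering.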
Counterpoint: "trust client input" can be interpreted as "skip validation" and that's dangerous.
We still validate; we just validate *after* committing the client flow, and we surface the lane honestly.
Exactly. Trusting the flow doesn't mean trusting the bytes.
The contract is: accept the intent, then render validation results as lanes.
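A minimal sketch of "accept the intent, render validation as a lane" (the field names and the size/type checks here are assumptions for illustration, not the guide's rules):

```typescript
// Hypothetical post-commit validation: the intent has already been
// accepted into the model; this only decides which lane the UI shows.
type ValidationLane = "ok" | "rejected";

interface FileIntent {
  name: string;
  type: string;
  size: number;
}

function validateIntent(intent: FileIntent, maxBytes: number): ValidationLane {
  // Example policy (assumed): size cap plus an image-only type check.
  return intent.size <= maxBytes && intent.type.startsWith("image/")
    ? "ok"
    : "rejected";
}
```

The point is that a failed check produces a renderable lane, not a silently dropped request.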
We stored derived upload summaries because multiple screens need them (list, detail, result).
If each screen computes it, they disagree and users think uploads are missing.
Client-done vs server-done: we separated clientComplete from serverComplete and made the UI show both.
When they're different, it's not a bug—it's a state and it needs a lane.
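A sketch of the clientComplete/serverComplete split as a lane mapping (the intermediate lane names `awaitingServer` and `reconciling` are hypothetical):

```typescript
// Hypothetical completion model: the two flags are tracked separately
// and every combination maps to an honest lane, never a single boolean.
interface Completion {
  clientComplete: boolean;
  serverComplete: boolean;
}

function completionLane(c: Completion): string {
  if (c.clientComplete && c.serverComplete) return "ok";
  if (c.clientComplete && !c.serverComplete) return "awaitingServer";
  if (!c.clientComplete && c.serverComplete) return "reconciling";
  return "uploading";
}
```

Showing `awaitingServer` explicitly is what keeps "client says done, server hasn't confirmed" from reading as a bug.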
Redirects as orchestration is underrated. We log redirect reasons so they aren't mysterious:
```txt
[tips] redirect from=/upload to=/upload/f_19 reason=select:file
[tips] redirect from=/upload/f_19 to=/upload/f_19/result reason=lane:ok
```
We also render a compact "upload story" string in the detail page (for support):
```txt
uploadStory="selected -> uploading -> ok (serverId up_883)"
```
It sounds silly, but it prevents long arguments about what happened.
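Building the story string is trivial; a sketch (the `uploadStory` helper name is an assumption):

```typescript
// Hypothetical story builder: join the lane history into one
// support-friendly line, appending the serverId once it exists.
function uploadStory(lanes: string[], serverId?: string): string {
  const story = lanes.join(" -> ");
  return serverId ? `${story} (serverId ${serverId})` : story;
}
```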
Counterpoint: client token stores + refresh is a trap. If the user reloads mid-upload, you can end up in limbo.
We added a recovery lane (recovering) and a manual "resume" intent button.
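A sketch of the recovery step on reload (the persistence shape and `recoverOnLoad` name are assumptions): an upload found mid-flight is moved into the recovering lane and waits for an explicit resume intent rather than auto-retrying.

```typescript
// Hypothetical reload recovery: an in-flight upload restored from
// client storage lands in "recovering" until the user clicks resume.
interface StoredUpload {
  id: string;
  lane: string;
}

function recoverOnLoad(stored: StoredUpload | null): StoredUpload | null {
  if (stored && stored.lane === "uploading") {
    return { ...stored, lane: "recovering" };
  }
  return stored;
}
```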
Recovery lane is the right move. Reload isn't an edge case—it's user behavior.
As long as it's explicit and logged, it feels like a feature.
We had to cap how much client metadata we treat as contract (name/type/size, not everything).
If the client provides a thousand fields, you don't want them all in your model forever.
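A sketch of capping the contract at the boundary (the `FileContract` shape follows the name/type/size cap mentioned above; the helper name is hypothetical):

```typescript
// Hypothetical boundary function: only the fields we treat as contract
// are promoted into the model; everything else the client sent is dropped.
interface FileContract {
  name: string;
  type: string;
  size: number;
}

function toContract(file: { name: string; type: string; size: number }): FileContract {
  const { name, type, size } = file;
  return { name, type, size };
}
```

Explicit destructuring (rather than spreading the whole object) is what guarantees extra client fields never leak into the model.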
Docs tie-in: this approach lines up with the repo's security posture (even if it's opinionated).
Related: Security and Safety, and Routing and Navigation.
We used a derived uploadFingerprint so we can detect when the client changed the file selection.
Without it, resuming becomes ambiguous.
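A sketch of such a fingerprint (the field choice here is an assumption; any stable subset of the contracted metadata would do):

```typescript
// Hypothetical fingerprint: a stable string over the metadata we treat
// as contract, so a changed file selection is detectable before resume.
function uploadFingerprint(f: {
  name: string;
  size: number;
  lastModified: number;
}): string {
  return `${f.name}:${f.size}:${f.lastModified}`;
}
```

Before resuming, compare the stored fingerprint with one computed from the current selection; a mismatch means "start over", not "resume".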
A small UX win: show the current lane + reason in plain text next to progress.
It prevents people from refreshing because they think it's stuck.
Counterpoint: storing tokens client-side can create weird security narratives with some teams.
We framed it as a route-local capability and rotated tokens aggressively.
If you want this to be debuggable, make sure errors are stored and versioned, not ephemeral.
Otherwise users will report "it failed" with no evidence for what failed.
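A sketch of an append-only, versioned error log (the `UploadError` shape and `recordError` name are assumptions): errors are appended with a version and timestamp instead of being overwritten.

```typescript
// Hypothetical error record: each failure is appended, never replaced,
// so "it failed" always comes with versioned evidence of what failed.
interface UploadError {
  version: number;
  at: string;
  message: string;
}

function recordError(log: UploadError[], message: string, at: string): UploadError[] {
  return [...log, { version: log.length + 1, at, message }];
}
```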
We also logged the exact redirect chain in one line so it shows up in monitoring:
```txt
[tips] redirectChain=/upload -> /upload/f_19 -> /upload/f_19/result
```
The guide's main contribution is treating uploads as a state machine with observable lanes.
Once it's a state machine, it's not scary anymore.
I like that it embraces redirects as the orchestration spine.
A lot of upload flows become simpler when you stop trying to keep everything on one screen.
If you implement this, start by deciding what evidence keys you want in every screenshot.
That decision will force you into a coherent model fast.