[Guide] Build a Survey App (Submission on Blur as Default) - implementation notes
The Survey App guide takes "submission on blur" seriously: every field submits on blur for tight feedback loops, async validation is always in play, and completion is a derived contract rather than a vibe. I'm interested in how people implemented this without making the UI feel like it has surprises or phantom submissions.
How did you design submit gates so blur-submission doesn't spam the server or confuse users? Did you store derived completion per question/section, and what evidence did you render for it? How did you handle async validation results arriving late (version lanes, severity, retry posture)? What was your strategy for drafts versus committed answers (multi-truth, route documents)?
Comments (24)
Blur submission is only scary when the app hides what it's doing.
We render a tiny per-field lane marker (saving/ok/error) so blur feels intentional.
Submit gates were the whole game for us. We logged them like contract decisions:
```txt
[tips] submitGate field=q12 allowed=false reason=unchanged
[tips] submitGate field=q12 allowed=true reason=blur
[tips] submit lane=pending v=19 reason=submit:blur
```
And we rendered data-submit-lane + data-submit-v on the question shell so screenshots explain it.
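A minimal sketch of such a gate, with hypothetical names (`submitGate`, `logGate` are illustrative, not from the guide): the decision and its log line are produced together, so every blur either submits or explains why it didn't.

```typescript
// Hypothetical blur submit gate: decide whether a blur may submit,
// and emit a contract-style log line either way.
type GateDecision = { allowed: boolean; reason: string };

function submitGate(
  draft: string,
  committed: string | undefined
): GateDecision {
  // Unchanged drafts never hit the server.
  if (draft === committed) return { allowed: false, reason: "unchanged" };
  return { allowed: true, reason: "blur" };
}

function logGate(field: string, d: GateDecision): string {
  return `[tips] submitGate field=${field} allowed=${d.allowed} reason=${d.reason}`;
}
```

For example, `logGate("q12", submitGate("hello", "hello"))` yields the first log line above.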
Counterpoint: submitting on blur can be hostile on mobile where blur happens accidentally.
We added a posture switch for mobile: blur submits only after an explicit "done" action.
Totally reasonable. The key is that the posture is explicit and logged, not that blur is morally correct.
We also render the posture (data-submit-posture=blur|confirm) so it's debuggable.
Derived completion per section made navigation sane.
We store completionRatio and missingRequiredCount so the progress indicator is stable across routes.
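One way to sketch that derivation (names like `deriveCompletion` are assumptions, not the guide's API): completion is a pure function of the answers, so it can be recomputed on any route and logged.

```typescript
// Hypothetical derived completion for one section: a pure function of
// answers, so the progress indicator is stable across routes.
interface SectionCompletion {
  completionRatio: number;
  missingRequiredCount: number;
}

function deriveCompletion(
  questionIds: string[],
  requiredIds: Set<string>,
  answers: Map<string, { value: string }>
): SectionCompletion {
  const answered = questionIds.filter((id) => answers.has(id));
  const missing = [...requiredIds].filter((id) => !answers.has(id));
  return {
    completionRatio:
      questionIds.length === 0 ? 1 : answered.length / questionIds.length,
    missingRequiredCount: missing.length,
  };
}
```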
We kept drafts local and committed answers only on blur.
If you commit on keystroke, you end up with partial junk answers that break analytics and UI.
Async validation without cancellation forced us to version the answer. Late results are fine if they are tagged to the version they checked:
```txt
[tips] validate field=q12 v=19 lane=ok
[tips] validate field=q12 v=18 lane=ignored reason=staleVersion
```

We used useFlowEffect to orchestrate blur submit + next focus, and returned doc strings for the log.
It made the flow feel intentional rather than magical.
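The version-lane idea above can be sketched like this (a rough illustration; `AnswerCell` and `applyValidation` are made-up names, and the real guide may structure state differently): each commit bumps the version, and a late result is only applied if it checked the current version.

```typescript
// Hypothetical version lane for async validation: late results tagged
// with an older version are ignored and logged, never applied.
interface AnswerCell {
  version: number;
  lane: "pending" | "ok" | "error" | "ignored";
}

function applyValidation(
  cell: AnswerCell,
  checkedVersion: number,
  ok: boolean
): { cell: AnswerCell; log: string } {
  if (checkedVersion !== cell.version) {
    return {
      cell, // stale result: keep current state untouched
      log: `[tips] validate v=${checkedVersion} lane=ignored reason=staleVersion`,
    };
  }
  const lane = ok ? "ok" : "error";
  return {
    cell: { ...cell, lane },
    log: `[tips] validate v=${checkedVersion} lane=${lane}`,
  };
}
```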
Counterpoint: users sometimes *want* to answer a question partially and come back.
We allowed partial commits but flagged them as answerState=draft in stored derived state.
That's a good compromise. "Submit" doesn't have to mean "final" as long as the contract says what it is.
We used answerLane=committed:draft|committed:final as evidence.
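A tiny sketch of that evidence string (the shape of `CommittedAnswer` is an assumption for illustration): the stored state names whether a commit is a draft, and the lane string rendered as evidence is derived from it.

```typescript
// Hypothetical committed answer with an explicit draft/final state;
// the rendered lane string doubles as debugging evidence.
type AnswerState = "draft" | "final";

interface CommittedAnswer {
  value: string;
  answerState: AnswerState;
}

function answerLane(a: CommittedAnswer): string {
  return `committed:${a.answerState}`;
}
```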
Submit gates also prevented double-submits when blur + navigation happen together.
If you don't gate it, you end up with the classic "two answers" bug.
We rendered evidence per question: data-qid, data-submit-lane, data-validated-v, data-required=1.
Support loved it because they could diagnose from one screenshot.
A lot of the confusion goes away if blur submit is paired with immediate acknowledgment.
We show a tiny "saved" chip with the last submit reason and timestamp.
We stored derived completion like a reducer output, not like UI sugar. It means completion is correct-by-construction and can be logged:
```txt
[tips] derive completion section=demographics ratio=0.66 missingRequired=1
```

Counterpoint: blur submit makes offline mode harder.
We added a lane queued and treated it as success-ish with visible evidence.
Queued lane is a great idea. It's honest and it matches the "lanes" posture.
If you hide queuing, users think the app lost their answer.
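The queued lane could look something like this (a sketch under assumptions; `submitOrQueue` and the lane names are illustrative): offline submits go into a visible queue instead of being dropped silently.

```typescript
// Hypothetical queued lane: when offline, a blur submit is enqueued
// instead of dropped, and the returned lane makes the queueing visible.
type SubmitLane = "pending" | "ok" | "error" | "queued";

interface QueuedSubmit {
  field: string;
  value: string;
}

function submitOrQueue(
  online: boolean,
  field: string,
  value: string,
  queue: QueuedSubmit[]
): SubmitLane {
  if (!online) {
    queue.push({ field, value }); // visible and replayable later
    return "queued";
  }
  return "pending"; // hand off to the real transport
}
```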
Docs tie-in: blur as default is controversial, but it fits the forms + state posture here: make boundaries explicit.
Related: Forms and Validation (Best Practices) and Core Concepts (Best Practices).
We also had to decide what "blur" means for composite widgets (date pickers, sliders).
We treat the composite as one boundary and submit only when the widget closes.
One trick: keep the submit payload tiny and do derived summaries server-side too.
If the payload is huge, blur submit becomes expensive and scary.
We used a "cooldown gate" so blur submit can't fire twice within 250ms. We log the blocked submits so you can debug it:
```txt
[tips] submitGate field=q12 allowed=false reason=cooldown remainingMs=132
```

I was worried about phantom submissions, but the guide's insistence on evidence helped.
If the UI shows why it submitted, it stops feeling haunted.
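The cooldown gate mentioned above might be sketched like this (names and structure are assumptions; the clock is injected so the gate is testable): a second submit for the same field inside the window is blocked with a logged remaining time.

```typescript
// Hypothetical cooldown gate: a second blur submit for the same field
// within 250ms is blocked and logged with the remaining cooldown time.
const COOLDOWN_MS = 250;

function makeCooldownGate(now: () => number = Date.now) {
  const lastSubmit = new Map<string, number>();
  return function gate(field: string): { allowed: boolean; log?: string } {
    const t = now();
    const last = lastSubmit.get(field);
    if (last !== undefined && t - last < COOLDOWN_MS) {
      const remainingMs = COOLDOWN_MS - (t - last);
      return {
        allowed: false,
        log: `[tips] submitGate field=${field} allowed=false reason=cooldown remainingMs=${remainingMs}`,
      };
    }
    lastSubmit.set(field, t); // only allowed submits start a new cooldown
    return { allowed: true };
  };
}
```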
Counterpoint: blur submit can fight accessibility if focus moves unexpectedly.
We didn't auto-advance focus unless the user opted into "fast mode" posture.
The best part of the approach is that it makes surveys feel responsive: you get feedback as you go.
But you must make the lane visible or the feedback feels random.
We stored derived "completion story" as a string (for debugging) in addition to ratios. It printed like:
```txt
[tips] completionStory="missing: q12(required), q14(required)"
```

If you do async validation, don't hide retries.
We render a retry affordance and log when users press it; otherwise you can't debug trust issues.
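The completion-story string a few comments up is easy to derive alongside the ratios; a minimal sketch (the `completionStory` helper is hypothetical):

```typescript
// Hypothetical human-readable completion story, kept alongside the
// numeric ratios purely for debugging.
function completionStory(
  missing: { id: string; required: boolean }[]
): string {
  const parts = missing.map((m) => (m.required ? `${m.id}(required)` : m.id));
  return `missing: ${parts.join(", ")}`;
}
```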
We also made the progress indicator route-driven so it replays correctly with back/forward.
Otherwise progress feels like a separate app state.
This pattern sounds extreme, but it's consistent with the overall philosophy here: do work on boundaries, then render evidence.
Blur is just one boundary; the important part is the contract.