You’ve diagnosed a drop-off. You know which step users abandon and you have a theory about why. Now comes the common mistake: defaulting to whatever flow type you’ve used before instead of picking the one that actually fixes the specific problem.
Most product teams default to guided tours (the most-visible flow type) for every onboarding problem. This is why half of onboarding tests produce no measurable lift — the flow type doesn’t match the drop-off pattern. A tour won’t fix an empty-state problem. A checklist won’t stop a rage-click loop. A tooltip won’t reduce paywall friction.
Here’s the decision tree: diagnose the drop-off pattern, then pick the flow type that actually addresses it.
The four primary drop-off patterns
Before picking a flow type, classify the drop-off. Most onboarding drop-offs fall into one of four patterns:
- Empty state. User lands on a dashboard / workspace / feature with no data and doesn’t know what to do. They look at a blank screen and leave. Common on day-1 signups for productivity tools and analytics products.
- Missed action. User sees the right surface but doesn’t notice the single next-step element they need to click. The button is there — just poorly positioned, poorly labeled, or competing with other visual noise.
- Scope confusion. User sees multiple features and doesn’t know which one to engage with first. Not missing the action — confused about priority. Common on complex products with many surface areas.
- Friction / blocker. User hits something that triggers rage-clicking, a timeout, or an unexpected paywall. They knew what they wanted to do; something is actively stopping them.
Each pattern has a different winning flow type.
Empty state → Checklist (not Tooltip)
When users don’t know what to do, they need a list of tasks, not a single pointer. A tooltip anchored to one button doesn’t help when the user needs to orient themselves to 4 possible actions. A checklist spells out the sequence of steps: “1. Create your first project. 2. Add a task. 3. Invite a teammate. 4. Explore the dashboard.” Checkboxes tick off automatically as steps complete.
Checklists work here because they replace the empty dashboard’s ambiguity with concrete next-action clarity. Products with well-designed setup checklists consistently show 15–30% higher activation than products relying on passive tooltips alone.
Watch out: checklists work for 3–7 items. Under 3, a single tooltip or CTA is cleaner. Above 7, it becomes daunting — users see the list and assume setup is too complex.
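The auto-ticking behavior and the 3–7 item rule can be sketched as plain state logic. This is a minimal illustration, not a real Onboardics API; the step ids and field names are made up for the example.

```typescript
// Sketch: checklist state that auto-ticks as product events arrive.
// Step ids and shapes are illustrative assumptions.
type Step = { id: string; label: string; done: boolean };

const MIN_ITEMS = 3; // below this, a single tooltip or CTA is cleaner
const MAX_ITEMS = 7; // above this, the list reads as "setup is too complex"

function checklistFits(steps: Step[]): boolean {
  return steps.length >= MIN_ITEMS && steps.length <= MAX_ITEMS;
}

// Tick the matching step when its product event fires (e.g. project created).
function onProductEvent(steps: Step[], completedStepId: string): Step[] {
  return steps.map(s => (s.id === completedStepId ? { ...s, done: true } : s));
}

// Fraction of setup complete, e.g. to drive a progress bar.
function progress(steps: Step[]): number {
  return steps.filter(s => s.done).length / steps.length;
}
```

The point of `checklistFits` is that the item-count rule is enforced in code, not left to whoever configures the flow.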
Missed action → Tooltip (not Tour)
When users are in the right place but missing the one button they need to click next, use a tooltip anchored to that specific element. Small popup, one sentence, one arrow. “Click here to create your first form.”
Tooltips work here because they appear where the friction is — next to the element the user’s eye should have landed on. Lower overhead than a tour (which forces the user through multiple steps when they only needed one hint). Less disruptive than a modal (which interrupts the flow entirely).
Watch out: tooltips need a stable element to anchor to. If your HTML structure changes frequently or the target element’s selector is brittle, the tooltip will stop appearing. Monitor selector staleness — a tooltip that’s silently broken is worse than no tooltip.
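A staleness check like the one described above can be a small periodic job. This is a sketch under assumptions: the lookup function is injected (in a browser you would pass `sel => document.querySelector(sel)`), and the tooltip ids and selectors are invented for the example.

```typescript
// Sketch: report tooltips whose anchor element no longer resolves.
// The DOM lookup is injected so the logic runs outside a browser.
type Lookup = (selector: string) => object | null;

interface TooltipAnchor {
  tooltipId: string;
  selector: string; // prefer stable attributes over generated class names
}

function staleTooltipAnchors(anchors: TooltipAnchor[], lookup: Lookup): string[] {
  // A null lookup result means the selector is stale: the tooltip
  // is configured but will silently never appear.
  return anchors.filter(a => lookup(a.selector) === null).map(a => a.tooltipId);
}
```

Run it on each release (or on an interval) and alert on any non-empty result; a silently broken tooltip otherwise looks identical to a tooltip nobody needed.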
Scope confusion → Guided Tour (not Modal)
When users see multiple features and don’t know which to engage with first, a guided tour — 3–5 sequential steps walking them through the primary surface — orients them without requiring them to figure it out themselves. Each step anchors to a different element; the user progresses via “Next” buttons.
Tours work here because they answer the “what matters first?” question explicitly. Products with significant surface area (CRMs, analytics platforms, design tools) benefit most from tours. Simpler products generally don’t need them.
Watch out: tours should be 3–5 steps. Longer tours feel condescending and get dismissed. Also: tours work best at first arrival on a new surface. A tour that fires repeatedly on return visits trains users to dismiss it reflexively.
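The first-arrival rule is simple to enforce in code. A minimal sketch, assuming the set of seen surfaces is injected (in a browser you might back it with localStorage); the surface names are illustrative.

```typescript
// Sketch: fire a tour only on the user's first visit to a surface.
function shouldShowTour(seenSurfaces: Set<string>, surface: string): boolean {
  if (seenSurfaces.has(surface)) return false; // return visit: stay quiet
  seenSurfaces.add(surface); // record it so later visits skip the tour
  return true;
}
```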
Friction / blocker → Modal or Banner (not Tooltip)
When users are actively blocked — a paywall, a permission error, a missing prerequisite — a tooltip doesn’t help. The user isn’t missing information; they’re missing access. This requires either a modal (for urgent blockers that need immediate acknowledgment) or a banner (for persistent blockers that users will resolve later).
Modals work for urgent blockers: “Upgrade to continue — your free tier allows 3 projects, you have 3.” Full-screen, clear action, one click to resolve or dismiss. Banners work for non-urgent blockers: “Connect your data source to see full analytics” persisting at the top of the page until resolved.
Watch out: modals are interruptive. Using them for anything not genuinely blocking trains users to dismiss every modal instinctively. Reserve modals for real blockers. For nudges, use banners or tooltips.
The common mistakes
Three patterns to avoid:
- Tours for everything. The most common mistake. A 6-step tour on first login shows everything, orients the user about nothing, and trains them to dismiss future tours. Use tours for scope confusion specifically; use tooltips for most other cases.
- Modals for nudges. Modals are for blockers. Using them for feature announcements, reminders, or soft guidance creates modal fatigue — users learn to dismiss-and-keep-going. A banner or tooltip does the same job without training dismissal.
- Tooltips with no anchor stability. Brittle element selectors (based on auto-generated class names or DOM position) produce tooltips that silently stop appearing when the UI refactors. Use stable anchors: data-test-id attributes, stable IDs, or descriptive semantic selectors. Monitor for staleness.
The full decision tree at a glance
Start with the drop-off pattern:
- Users don’t know what to do (empty state) → checklist
- Users miss a specific action (right place, wrong attention) → tooltip
- Users don’t know which feature to try first (scope confusion) → guided tour
- Users are actively blocked (paywall, error, missing prerequisite) → modal (urgent) or banner (persistent)
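The whole decision tree fits in one exhaustive mapping. A sketch only: the pattern and flow-type names mirror the list above, not any specific product's API.

```typescript
// Sketch of the decision tree: drop-off pattern in, flow type out.
type DropOff = "empty-state" | "missed-action" | "scope-confusion" | "blocker";
type FlowType = "checklist" | "tooltip" | "guided-tour" | "modal-or-banner";

function pickFlowType(pattern: DropOff): FlowType {
  switch (pattern) {
    case "empty-state":     return "checklist";       // user needs a task list
    case "missed-action":   return "tooltip";         // point at the one element
    case "scope-confusion": return "guided-tour";     // 3-5 steps to set priority
    case "blocker":         return "modal-or-banner"; // urgent vs. persistent
  }
}
```

Because the input type is an exhaustive union, adding a fifth drop-off pattern forces this function to handle it, which is exactly the discipline the decision tree asks of a team.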
Match the flow type to the problem it’s built to solve. The right match produces 2–5x higher lift than the wrong one — which is why teams that diagnose before deploying consistently outperform teams that default to tours.
AI drop-off diagnosis classifies which pattern your specific drop-off fits and recommends the flow type accordingly. A no-code flow builder ships all four flow types from the same editor.
Onboardics measures your activation rate automatically and uses AI to diagnose what's blocking it.
Try the interactive demo →
Frequently asked questions
What's the difference between a tooltip, guided tour, modal, and checklist?
Tooltip: small popup anchored to a single element (best for missed-action drop-off). Guided tour: 3–5 step sequential walkthrough with Next buttons (best for scope confusion). Modal: full-screen overlay requiring acknowledgment (best for urgent blockers). Checklist: persistent UI showing 3–7 setup tasks that tick off as completed (best for empty-state drop-off). Pick based on which drop-off pattern you're fixing.
Why do tours underperform tooltips in most tests?
Tours default to showing too many things. When the user actually only needed one hint to unblock, a 6-step tour is overkill — they dismiss after step 2 and still don’t act. Tours work for scope confusion (multiple surfaces, unclear priority) but not for single-action drop-off. Most onboarding drop-offs are single-action, which is why tooltips win more A/B tests than tours.
Should I ever use modals for onboarding?
Sparingly. Modals are best for genuinely blocking situations — paywalls, required terms acceptance, error states. Using modals for nudges, feature announcements, or soft guidance creates "modal fatigue" where users reflexively dismiss every modal without reading. Reserve modals for real blockers; use banners or tooltips for everything else.