How to Test Your Designs at Every Stage
A practical guide to design testing — from concept sketches to live A/B tests — covering fidelity-appropriate methods, minimum viable testing, and how to analyze results.
Testing your designs is the difference between assuming something works and knowing it does. The good news: you don't need a formal research program or a dedicated UX researcher to do useful testing. You need the right method for your current stage.
Testing by fidelity level
Concept tests (on sketches and rough wireframes)
At the earliest stage, you're testing whether your concept makes sense — not whether the execution is polished. This means showing rough sketches or low-fidelity wireframes to real users and asking open-ended questions.
"What do you think this is for?" "What would you do first?" "What's confusing?" You're not asking them to complete tasks yet. You're checking whether your mental model matches theirs.
You can do this over a video call with your screen shared. No prototype required. Five people is enough to surface major mismatches between your concept and user expectations.
Task tests (on wireframes)
Once you have clickable wireframes, give users specific tasks to complete. "Can you find where you'd add a new team member?" "Show me how you'd change your billing plan."
Watch what they do, not what they say. A user saying "that was easy" while visibly struggling is valuable data. The friction points — where they pause, where they click the wrong thing, where they ask a question — are your findings.
Fidelity matters here only in that the wireframe needs to be interactive enough to support task completion. You don't need colors or final copy.
Usability tests (on high-fidelity prototypes)
At this stage you have a polished Figma prototype and you're testing whether the design works as intended — including visual hierarchy, copy, states, and flows. Give users realistic tasks in context. "You just signed up and you want to set up your first project." Let them navigate the full flow.
Maze is the most practical tool for running unmoderated usability tests on Figma prototypes. You connect your prototype, define the tasks, and send the test to participants. Results come back with completion rates, click maps, and time-on-task data.
A/B tests (on live product)
A/B testing is for optimization, not validation. You already have something that works — you're trying to determine whether a specific change improves a specific metric. Two variants, one change, a clear hypothesis, enough traffic to reach statistical significance.
Don't run A/B tests on concepts. By the time you're A/B testing, you should have already validated that the thing works; now you're optimizing how well.
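To make "enough traffic to reach statistical significance" concrete, here is a minimal sketch of the arithmetic: a two-proportion z-test in Python. The conversion counts are hypothetical placeholders, not output from any particular analytics tool.

    import math

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        # Observed conversion rates for each variant.
        p_a, p_b = conv_a / n_a, conv_b / n_b
        # Pooled rate under the null hypothesis that A and B convert equally.
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return p_a, p_b, z, p_value

    # Hypothetical counts: 10,000 visitors per variant, one changed element.
    p_a, p_b, z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
    print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
    # A p-value below your pre-chosen threshold (commonly 0.05) suggests the
    # lift is unlikely to be noise. Decide sample size and threshold up front.

With these numbers the test prints p = 0.011, so the 4.8% to 5.6% lift would clear a 0.05 threshold; halve the traffic and the same lift would not.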
Minimum viable testing
You don't need 20 participants to find usability problems. Nielsen's research established that 5 users catch approximately 85% of usability issues in a given design. Additional participants add diminishing returns.
Five users isn't always enough. For quantitative research, for finding rare edge-case behaviors, or for measuring statistical significance in A/B tests, you need larger samples. But for finding usability issues in a prototype, five is a solid minimum viable sample.
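The 85% figure comes from Nielsen and Landauer's problem-discovery model: the share of issues found by n users is 1 - (1 - p)^n, where p is the chance that a single user hits a given issue (they estimated p at about 0.31 on average across projects). A quick sketch of the curve:

    # Problem-discovery model behind the "5 users find ~85%" claim.
    p = 0.31  # Nielsen and Landauer's average per-user discovery rate
    for n in range(1, 9):
        print(f"{n} users -> {1 - (1 - p) ** n:.0%} of issues found")
    # 5 users -> 84%; users 6 through 8 add only a few points each.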
The implication: you can test more frequently with smaller samples rather than doing large studies infrequently. A quick test with 5 people every two weeks beats a formal study with 15 people every quarter.
Unmoderated vs moderated testing
Unmoderated testing means users complete tasks independently, without a researcher present. Tools like Maze run tests automatically and collect results at scale. Faster, cheaper, works across time zones. The downside: you can't ask follow-up questions, and you miss the nuance of watching someone think through a problem.
Moderated testing means you're present (usually via video call) while the user completes tasks. More time-intensive, but richer. You can probe: "I noticed you hesitated there — what were you thinking?" You can see confusion that wouldn't show up in click data.
Use unmoderated testing for task-based flows where you have a clear success metric. Use moderated testing when you want to understand why something is happening, not just what.
Analyzing results
Quantitative data from Maze (completion rates, click maps, time-on-task) tells you where the problems are. Qualitative data (recorded sessions, interview notes) tells you why.
For each test, look for the following (a small triage sketch follows the list):
- Tasks with low completion rates — these are usability failures
- Tasks with high completion rates but long time-on-task — these work but are slower than they should be
- Consistent click patterns on the wrong elements — suggests labeling or hierarchy problems
- Common places where users stop and ask questions — these need clearer copy
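Here is a minimal triage sketch along those lines. The task data, field names, and thresholds (70% completion, twice the expected time) are hypothetical placeholders, not an actual export schema from Maze:

    # Hypothetical per-task results, e.g. assembled from a test-tool export.
    tasks = [
        {"name": "Add team member", "completion": 0.40, "median_secs": 35, "expected_secs": 20},
        {"name": "Change billing",  "completion": 0.90, "median_secs": 95, "expected_secs": 30},
        {"name": "Create project",  "completion": 0.95, "median_secs": 25, "expected_secs": 20},
    ]

    for t in tasks:
        if t["completion"] < 0.7:
            print(f'{t["name"]}: usability failure ({t["completion"]:.0%} completed)')
        elif t["median_secs"] > 2 * t["expected_secs"]:
            print(f'{t["name"]}: works, but slow ({t["median_secs"]}s vs ~{t["expected_secs"]}s expected)')
        else:
            print(f'{t["name"]}: no red flags in the numbers')
    # The numbers only locate problems; watch the recordings to learn why.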
For live product, Hotjar's heatmaps and session recordings add a layer of real-world behavior data. You'll often see patterns in production that didn't show up in prototype testing — because users behave differently when real data is involved.
The report nobody reads
Resist the urge to write a long test report. What stakeholders actually need: a one-page summary with the top 3-5 findings, a severity rating for each (critical/major/minor), and a recommended fix. Short, specific, actionable. That's the format that gets read and acted on.
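For illustration, a one-page summary in that format might look like this (the findings below are hypothetical):

    Usability test summary: onboarding flow, 5 participants
    1. [Critical] 4 of 5 failed to add a team member; the action is buried in Settings.
       Recommended fix: add an Invite button to the Team page.
    2. [Major] Changing the billing plan took roughly 3x the expected time.
       Recommended fix: link "Change plan" directly from the Billing screen.
    3. [Minor] Two participants hesitated on the "Workspace" label.
       Recommended fix: trial the label "Team" in the next round.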