How to Do Unmoderated Usability Testing
A practical guide to running unmoderated remote usability tests. Covers task design, participant recruiting, analyzing results, and picking the right tool.
Unmoderated usability testing means participants complete tasks on their own, without a researcher guiding them in real time. You set up the test, share a link, and collect results asynchronously.
It's faster and cheaper than moderated testing. You can get results from 20+ participants in a day instead of scheduling five 1-hour sessions over two weeks. The tradeoff: you can't ask follow-up questions or redirect participants who get confused. Your test design has to be airtight because nobody is there to clarify.
When unmoderated testing works
Unmoderated testing is best for evaluating specific, well-defined tasks. Can users find the pricing page? Can they complete checkout? Can they create an account? These are observable, measurable actions with clear success criteria.
It's less useful for exploratory research. If you're trying to understand why users feel a certain way about your product, or what their mental model is, you need the back-and-forth of a moderated session.
Use unmoderated testing when you need to validate navigation, test a new flow, compare two design variants, or check whether a redesign introduces usability regressions.
Designing effective tasks
Bad tasks produce bad data. The most common mistake is writing tasks that give away the answer.
Bad task: "Use the Settings menu to change your notification preferences." You just told them where to go.
Good task: "You're getting too many email notifications from this app. Figure out how to reduce them." This tests whether they can find the right place without being told.
Write 3-5 tasks per test. More than that and participants lose focus. Each task should have a clear success state: they either completed it or they didn't.
Include a mix of:
- Navigation tasks. "Find information about pricing for teams." Tests information architecture.
- Action tasks. "Add a product to your cart and begin checkout." Tests flow completion.
- Comprehension tasks. "Based on this page, what plan would you choose for a 10-person team?" Tests content clarity.
Keep task descriptions under two sentences. If you need a paragraph to explain the scenario, the task is too complex.
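Before entering tasks into a tool, it can help to draft the plan as structured data so the success criteria are explicit and the guidance above is easy to check. Here's a minimal sketch in Python; the field names are illustrative, not any tool's import format:

```python
# A test plan drafted as plain data: 3-5 tasks, each with one
# observable success criterion. Field names are illustrative.
test_plan = [
    {
        "type": "navigation",
        "prompt": "You're getting too many email notifications "
                  "from this app. Figure out how to reduce them.",
        "success": "Participant reaches the notification settings screen.",
    },
    {
        "type": "action",
        "prompt": "Add a product to your cart and begin checkout.",
        "success": "Participant reaches the first checkout screen.",
    },
    {
        "type": "comprehension",
        "prompt": "Based on this page, what plan would you choose "
                  "for a 10-person team?",
        "success": "Participant names the plan intended for teams.",
    },
]

# Sanity checks mirroring the guidance above: 3-5 tasks, short prompts.
assert 3 <= len(test_plan) <= 5, "Write 3-5 tasks per test."
for task in test_plan:
    sentences = task["prompt"].count(".") + task["prompt"].count("?")
    assert sentences <= 2, "Keep task descriptions under two sentences."
```

Drafting tasks this way also makes it trivial to reuse the same plan when comparing two design variants.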
Recruiting participants
You need 15-20 participants for reliable quantitative data (task success rates, time on task). For qualitative insights (where people get stuck), 5-8 is enough.
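To see why 15-20 participants is the floor for quantitative claims, here's a quick margin-of-error calculation for a task success rate using the normal approximation (a simplification; for small samples or rates near 0% or 100%, a Wilson interval is more appropriate):

```python
import math

def success_rate_margin(successes: int, n: int, z: float = 1.96) -> float:
    """95% margin of error for a task success rate (normal approximation)."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# 16 of 20 participants succeed: 80% +/- roughly 18 points.
print(f"{success_rate_margin(16, 20):.2f}")  # ~0.18
```

An interval that wide is why success rates at these sample sizes are a signal to investigate, not a precise measurement.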
Maze and Lyssna both have built-in participant panels. You specify demographics, screen for relevant criteria, and the platform recruits participants for you. This costs $2-5 per response depending on the audience.
You can also recruit from your own user base. Send the test link to a segment of existing users via email. The results will be more relevant since they already know your product, but you lose the fresh-eyes perspective of new users.
Avoid testing with colleagues or friends. They know too much about the product and will navigate differently than real users.
Setting up the test
Maze is the best tool for unmoderated testing of Figma prototypes. You import your Figma prototype directly, define tasks and success screens, and Maze generates click heatmaps, misclick rates, and task completion metrics.
Lyssna is better for testing static designs, first-click tests, and preference tests. It's simpler than Maze but covers a different set of testing scenarios.
Hotjar is useful for testing live websites rather than prototypes. Its session recordings and heatmaps show how real users interact with your production site.
For prototype tests in Maze, the setup process is: import your Figma prototype, define the starting screen, write the task instruction, mark the success screen, and set the expected path. Maze will track whether users followed the expected path or deviated.
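Maze computes direct vs. indirect success for you, but the underlying logic is simple to sketch. Assuming you have each participant's recorded screen sequence from an export (the data shapes here are hypothetical, not Maze's actual export format), classification might look like this:

```python
def classify_attempt(recorded: list[str], expected: list[str]) -> str:
    """Classify one participant's attempt against the expected path.

    direct   -- followed the expected path exactly
    indirect -- reached the success screen, but with detours
    failure  -- never reached the success screen
    """
    success_screen = expected[-1]
    if success_screen not in recorded:
        return "failure"
    if recorded[: len(expected)] == expected:
        return "direct"
    return "indirect"

# Expected path: home -> settings -> notifications
expected = ["home", "settings", "notifications"]
print(classify_attempt(["home", "settings", "notifications"], expected))            # direct
print(classify_attempt(["home", "profile", "settings", "notifications"], expected)) # indirect
print(classify_attempt(["home", "profile", "home"], expected))                      # failure
```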
Analyzing results
Focus on these metrics:
Task success rate. The percentage of participants who completed each task. Below 80% signals a usability problem that needs fixing.
Direct success vs. indirect success. Did they get there on the first try, or did they wander? High indirect success (people eventually finding it after wrong turns) suggests your information architecture is confusing even when people ultimately succeed.
Misclick rate. Where did people click that wasn't the right target? Misclick heatmaps reveal UI elements that look clickable but aren't, or CTAs that are too small or poorly labeled.
Time on task. How long did each task take? Compare this across participants. High variance means some people found it immediately while others struggled.
Don't just look at the numbers. Watch 3-5 session recordings to see the actual behavior. The quantitative data tells you what happened. The recordings tell you why.
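If your tool exports raw session results, the headline metrics are straightforward to compute yourself. A sketch assuming a hypothetical export with one row per participant per task (column names are made up for illustration):

```python
import statistics

# Hypothetical export: one record per participant per task.
sessions = [
    {"task": "reduce-notifications", "outcome": "direct",   "seconds": 22},
    {"task": "reduce-notifications", "outcome": "indirect", "seconds": 71},
    {"task": "reduce-notifications", "outcome": "failure",  "seconds": 90},
    {"task": "reduce-notifications", "outcome": "direct",   "seconds": 19},
]

def task_metrics(rows: list[dict]) -> dict:
    n = len(rows)
    direct = sum(r["outcome"] == "direct" for r in rows)
    indirect = sum(r["outcome"] == "indirect" for r in rows)
    times = [r["seconds"] for r in rows]
    return {
        "n": n,
        "success_rate": (direct + indirect) / n,   # flag tasks below 0.8
        "direct_rate": direct / n,                 # low => confusing IA
        "median_seconds": statistics.median(times),
        "time_stdev": statistics.stdev(times),     # high => uneven experience
    }

print(task_metrics(sessions))
```

Medians are used for time on task because a few long, struggling sessions skew the mean; the standard deviation is the variance signal described above.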
Recommended tools
Maze for prototype usability testing with Figma integration. Lyssna for first-click tests, preference tests, and design surveys. Hotjar for analyzing real user behavior on live sites.
Run unmoderated tests early and often. A quick 5-task test with 15 participants gives you more actionable data than weeks of internal debate about which design is better.