Maze Review 2026: Prototype Testing That Gives You Real Numbers
Maze connects to Figma prototypes, lets you define tasks, and collects task completion rates, time-on-task, and click maps. Free plan (1 study/month); Starter at $99/month.
Prototype testing has a credibility problem in design teams. You show a Figma prototype to three colleagues, call it user testing, and present the findings in a meeting. Maze doesn't let you get away with that. It requires you to define tasks, recruit real participants, and measure what they actually do — task completion rates, time-on-task, where they click.
That rigor is Maze's value. It turns prototype testing into something measurable.
How Maze works
You connect your Figma prototype to Maze, define the tasks you want to test ("Find and purchase the blue jacket in size medium"), set up your success criteria, and configure the flow. Then you share a link with participants and collect responses.
Maze records whether each participant completed the task successfully, how long they took, and where they clicked at each step. You get a click map showing where participants tapped on each screen — including misclicks, which reveal assumptions baked into your design that users don't share.
The task completion rate is the headline metric: what percentage of participants finished the task without getting stuck or giving up. Combined with time-on-task, you get a quantitative measure of how usable your design is. That number means something in a stakeholder meeting in a way that "three people seemed confused" doesn't.
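The arithmetic behind these headline metrics is simple. A minimal sketch — the field names and sample data below are hypothetical, not Maze's actual export format:

```python
from statistics import median

# Hypothetical participant responses; Maze's real export will differ.
responses = [
    {"completed": True,  "seconds": 34.2},
    {"completed": True,  "seconds": 51.8},
    {"completed": False, "seconds": 120.0},  # gave up / timed out
    {"completed": True,  "seconds": 40.5},
    {"completed": False, "seconds": 95.3},
]

# Completion rate: share of participants who finished the task.
completion_rate = sum(r["completed"] for r in responses) / len(responses)

# Time-on-task is usually summarized over successful attempts only,
# since failed attempts are often timeouts that skew the average.
success_times = [r["seconds"] for r in responses if r["completed"]]
median_time = median(success_times)

print(f"Task completion rate: {completion_rate:.0%}")  # 60%
print(f"Median time-on-task: {median_time:.1f}s")      # 40.5s
```

Using the median rather than the mean for time-on-task is a common choice because one participant who wanders for two minutes would otherwise dominate the average.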
The Figma integration
The integration is straightforward. You import your prototype directly from Figma — Maze reads the frames and interactions. If you update the prototype in Figma, you can re-import without rebuilding your study.
The integration works with Figma's standard prototyping. Complex interactions that require Figma's advanced animation features may not translate perfectly, but basic click-through prototypes work cleanly.
Other study types
Maze isn't only task completion testing. You can run card sorting studies (where participants group items into categories — useful for information architecture research), tree tests (where participants navigate a text-based hierarchy — useful for testing nav structure), and open question surveys alongside usability tasks.
This means you can run a multi-part study: a usability task, followed by a preference question, followed by an open feedback prompt, all in one session. Combined analysis across study types is available in the reporting.
Pricing
This is where an honest assessment gets complicated. The free plan gives you one published study per month and limited responses. It's enough to run a single test occasionally — barely enough for a team that wants to test regularly.
Starter is $99/month. That's a significant jump.
For teams that run usability tests on every major design, $99/month is probably justifiable. You're getting structured, measurable testing with real participants, and the cost compares favorably to a moderated research session with a recruiter. For teams that run occasional tests or are just starting a research practice, $99/month is hard to swallow.
If your team runs more than one or two studies per month, Starter pays for itself. If you're testing that rarely, the free plan might actually be enough.
Participant recruitment
Maze has a built-in participant panel — Maze Panel — where you can recruit participants directly from the platform. Panel participants aren't free: you pay per participant beyond what your plan includes.
This is an important distinction. The $99/month Starter plan covers the study infrastructure. Recruiting participants through Maze's panel is a separate cost. If you have your own pool of participants — existing users, email subscribers, social followers — you can use your own links and avoid that cost.
For teams without an existing participant pool, factor in recruitment costs when evaluating Maze. The total cost per study depends heavily on how many participants you recruit and whether you use Maze Panel or your own audience.
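A back-of-envelope way to think about that total. This is a sketch, not Maze's pricing: the per-participant panel rate below is a placeholder assumption, and only the $99/month Starter fee comes from the review above.

```python
def monthly_cost(studies_per_month: int, participants_per_study: int,
                 plan_fee: float = 99.0,
                 panel_rate_per_participant: float = 5.0,
                 own_audience: bool = False) -> float:
    """Rough total monthly cost of running studies on the Starter plan.

    panel_rate_per_participant is a placeholder assumption — check
    Maze's current panel pricing before relying on this number.
    """
    recruitment = 0.0 if own_audience else (
        studies_per_month * participants_per_study * panel_rate_per_participant
    )
    return plan_fee + recruitment

# Two studies a month, 20 panel participants each:
print(monthly_cost(2, 20))                      # 99 + 2*20*5 = 299.0
# Same cadence, recruiting from your own audience:
print(monthly_cost(2, 20, own_audience=True))   # 99.0
```

The point of the sketch is the shape of the cost, not the exact figures: with panel recruitment, the variable cost can quickly exceed the subscription itself, while a team with its own participant pool pays only the flat fee.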
What makes a good Maze study
The quality of the data depends entirely on how well you design the study. Vague task descriptions produce meaningless results. "Explore the checkout process" is not a task — it's an invitation to wander. "You have three items in your cart and want to complete your purchase" is a task.
Maze produces numbers, but the numbers only mean something if the study is well-designed. Teams new to usability testing can get false confidence from Maze data if they don't think carefully about task design, participant selection, and what the success criteria actually measure.
Hotjar vs. Maze
These tools aren't competitors — they're complementary. Hotjar tells you what users do on your live product. Maze tells you whether users can complete tasks in your prototype before you ship.
Use Maze before you build, to validate designs. Use Hotjar after you ship, to understand actual behavior. If your question is "will this work?" use Maze. If your question is "did this work?" use Hotjar.
What's good
- Quantitative metrics: task completion rates, time-on-task, and click maps (including misclicks)
- Straightforward Figma integration, with re-import after prototype updates
- Multiple study types — usability tasks, card sorting, tree tests, surveys — combinable in one session
What's not
- $99/month Starter is a steep jump from the free plan's one study per month
- Panel recruitment is a separate per-participant cost on top of the subscription
- The numbers are only as good as your task design; vague tasks produce meaningless data
The verdict
Maze earns a 7.5/10. The core capability — structured, quantitative prototype testing — is well executed and genuinely useful. The $99/month price is the main barrier. Teams that run regular usability tests will find it worth the cost. Teams that test occasionally will struggle to justify it.
If you're building a design research practice and want to test prototypes systematically, Maze is the right infrastructure. If you're just getting started, the free plan gives you enough to see whether the workflow fits how your team operates.
Related
Maze vs UserTesting: Which User Research Tool Should You Use?
Maze is built for designers and PMs running quick unmoderated tests. UserTesting is enterprise-grade. Here's how to pick the right one for your team size and budget.
Hotjar vs Maze: Different Tools, Different Questions
Hotjar shows how real users behave on your live site. Maze tests your prototypes before launch. Most teams need Hotjar more — here's why.
Best Tools for User Research in 2026
The best user research tools ranked — from live-site behavior analytics to prototype testing to moderated sessions. Exact pricing and when to use each.