UIGuides

How to Do Card Sorting


Learn how to run card sorting sessions to improve your site's IA and navigation. Covers open vs closed sorting, participant counts, and analyzing results.

Card sorting is one of the most underused research methods in product design. It takes about two hours to set up, gives you concrete data about how users think about your content, and can save you from building a navigation structure that nobody understands.

Here's how to do it properly.

What card sorting actually is

You give users a set of cards, each labeled with a piece of content or feature. You ask them to sort those cards into groups that make sense to them. That's it.

The insight comes from the patterns. When 15 users independently put "Billing" and "Account Settings" in the same group, that's a signal. When half put "Help Center" under "Resources" and half put it under "Support," you have a decision to make.

Card sorting tells you how users mentally model your content — which is exactly what you need when designing navigation, information architecture, or any feature that organizes multiple items.

Open vs closed sorting

Open sorting: Users create their own groups and name them. Use this when you're designing from scratch or questioning your existing structure. You get richer qualitative data but messier results to analyze.

Closed sorting: You give users pre-defined categories and ask them to place cards into them. Use this when you already have a navigation structure and want to test whether it makes sense. Results are easier to analyze but you won't discover unexpected groupings.

Hybrid sorting: Start with open sorting, then ask users to label their groups. Gives you the best of both approaches if you have the time.

For most navigation redesigns, start with open sorting. For validating a structure you've already designed, use closed.

How many participants you need

15 to 20 participants are enough to see meaningful patterns. Beyond 20, you get diminishing returns — the patterns have usually stabilized by then.

For remote unmoderated sessions (the most common approach), aim for at least 15 completed sorts. Remote moderated sessions are better for complex sorting tasks where you want to ask "why" questions, but they take much longer to schedule and run.

Don't over-recruit. Card sorting is one of those methods where the quality of your card labels matters more than the size of your sample.

Writing good cards

Your cards should represent real content, features, or pages — not vague concepts. "Analytics Dashboard" is better than "Data." "Invoice History" is better than "Billing."

Keep it to 30-50 cards per session. More than that and participants get tired, which skews your results. If you have more content to test, run multiple sessions with different card sets.

Use plain language. If your card labels require domain knowledge to understand, you're testing vocabulary comprehension, not mental models.

Running a session

In-person: Use physical index cards. Write one item per card in large, legible text. Give participants a flat surface, watch silently, take notes on any hesitations or comments, and debrief afterward with a few open questions.

Remote (unmoderated): Maze supports online card sorting directly. You set up the study, share a link, and participants complete it on their own time. You get automatic similarity matrices and dendrograms without manual analysis.

Remote (moderated): Run it over video call. Miro has a card sorting template that works well, or you can use sticky notes in FigJam. Screen-share so you can watch in real time and ask follow-up questions.

For most teams, remote unmoderated via Maze is the right default. It's faster to run, cheaper, and gives you more participants in less time.


Analyzing results

After collecting sorts, you have two main analysis tools:

Similarity matrix: Shows how often each pair of cards was placed in the same group. High similarity (80%+) means users consistently see those items as related. Low similarity means ambiguity.
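Computing a similarity matrix by hand is straightforward. Here's a minimal sketch, assuming each participant's sort is stored as a dict mapping card labels to group names (the cards and group names below are invented for illustration):

```python
from itertools import combinations
from collections import defaultdict

def similarity_matrix(sorts):
    """Fraction of participants who placed each card pair in the same group.

    `sorts` is a list with one dict per participant, mapping card -> group name.
    """
    cards = sorted({card for sort in sorts for card in sort})
    pair_counts = defaultdict(int)
    for sort in sorts:
        for a, b in combinations(cards, 2):
            if sort[a] == sort[b]:
                pair_counts[(a, b)] += 1
    n = len(sorts)
    return {pair: count / n for pair, count in pair_counts.items()}

# Hypothetical sorts from three participants; group names are each
# participant's own and don't need to match across sorts.
sorts = [
    {"Billing": "Money", "Account Settings": "Money", "Help Center": "Support"},
    {"Billing": "Account", "Account Settings": "Account", "Help Center": "Account"},
    {"Billing": "Admin", "Account Settings": "Admin", "Help Center": "Docs"},
]
matrix = similarity_matrix(sorts)
print(matrix[("Account Settings", "Billing")])  # 1.0 (all three grouped them together)
```

With real data you'd feed this 15+ sorts, but the logic is the same: every pair that lands in the same group gets a tick, and the final score is the fraction of participants who agreed.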

Dendrogram: A tree diagram that clusters cards based on sorting patterns. It visualizes which cards are most frequently grouped together and at what level of agreement.
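If you want to build a dendrogram yourself, hierarchical clustering on the similarity data is one common way to do it. A sketch using SciPy, treating dissimilarity as 1 minus the pairwise agreement (the matrix values here are invented for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

cards = ["Billing", "Account Settings", "Invoice History", "Help Center"]

# Hypothetical pairwise agreement: fraction of participants who grouped each pair.
sim = np.array([
    [1.0, 0.8, 0.9, 0.2],
    [0.8, 1.0, 0.7, 0.3],
    [0.9, 0.7, 1.0, 0.1],
    [0.2, 0.3, 0.1, 1.0],
])

dist = squareform(1.0 - sim)            # condensed distance vector: dissimilarity = 1 - agreement
tree = linkage(dist, method="average")  # average-linkage hierarchical clustering

# no_plot=True returns the tree structure without needing matplotlib;
# drop it (and add matplotlib) to render the actual diagram.
info = dendrogram(tree, labels=cards, no_plot=True)
print(info["ivl"])  # leaf order: frequently co-grouped cards end up adjacent
```

In this made-up data, "Billing" and "Invoice History" (0.9 agreement) merge first, "Account Settings" joins them, and "Help Center" sits alone — exactly the kind of structure the diagram makes visible at a glance.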

Maze generates both automatically. If you ran in-person sessions, you'll need to enter data into a spreadsheet or use a tool like OptimalSort for analysis.

Look for:

  • Cards that were grouped together by 70%+ of participants — those belong together
  • Cards that were split almost evenly — those need clearer labeling or placement decisions
  • Surprising groupings — users put two things together you'd never have combined
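Those thresholds can be applied mechanically. A sketch that triages a similarity matrix (stored as a dict of card pairs to agreement fractions; the pairs and cutoffs below are illustrative, not canonical):

```python
def triage(matrix, strong=0.70, split_low=0.40, split_high=0.60):
    """Bucket card pairs by how consistently participants grouped them.

    Pairs at or above `strong` belong together; pairs in the
    [split_low, split_high] band were split roughly evenly and
    need a labeling or placement decision.
    """
    strong_pairs = [pair for pair, score in matrix.items() if score >= strong]
    split_pairs = [pair for pair, score in matrix.items()
                   if split_low <= score <= split_high]
    return strong_pairs, split_pairs

# Hypothetical agreement scores:
matrix = {
    ("Billing", "Invoice History"): 0.9,
    ("Billing", "Help Center"): 0.5,
    ("Analytics Dashboard", "Help Center"): 0.1,
}
strong, split = triage(matrix)
print(strong)  # [('Billing', 'Invoice History')]
print(split)   # [('Billing', 'Help Center')]
```

The surprising groupings are the one bucket you can't automate — those you find by reading the strong pairs and asking which ones you wouldn't have predicted.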

What to do with the data

Card sorting data gives you input, not answers. You still have to make decisions.

Document your findings in Notion — what patterns emerged, what was ambiguous, what surprised you. Create a proposed navigation structure based on the dominant groupings. Then validate it with a tree test (the inverse of card sorting — you show the structure and ask users to find things in it).

Don't skip that validation step. Card sorting tells you how users organize content mentally. Tree testing tells you whether your navigation structure actually helps them find things.

Build your Figma wireframes for the navigation after you've validated the structure, not before.


Common mistakes

Too many cards. 60+ cards overwhelm participants and produce garbage data. Cut ruthlessly.

Vague card labels. If you have to explain what a card means, rewrite it.

Treating results as absolute. A majority grouping pattern is a strong signal, not a mandate. Use judgment.

Skipping the debrief. The most useful insights often come from asking "Was anything hard to place?" at the end of a session. Don't skip it.

Card sorting is fast, cheap, and gives you real data. There's no good reason to design navigation without it.