How to Test Information Architecture

Guide to testing information architecture with card sorting and tree testing. Covers open vs. closed card sorts, analyzing dendrograms, and iterating on navigation.

Information architecture (IA) is how you organize and label content so people can find what they need. Bad IA means users can't find features that exist. Good IA means navigation feels invisible because everything is where you'd expect it.

The problem: designers and product managers are too close to their own product to judge IA objectively. You know where everything is because you built it. Your users don't. That's why you test.

Card sorting: let users organize for you

Card sorting gives participants a set of content items (written on cards) and asks them to group the cards into categories that make sense to them. It reveals how your users think about your content, which is often very different from how your team thinks about it.

Open card sort. Participants create their own groups and name them. Use this early in the design process when you're figuring out the basic structure. You'll discover natural groupings you hadn't considered.

Closed card sort. You provide the category names, and participants sort cards into those predefined groups. Use this to validate a structure you've already drafted. It answers: "Does our proposed navigation make sense to users?"

Hybrid card sort. Participants sort into predefined categories but can also create new ones if nothing fits. A good middle ground when you're fairly confident in your structure but want to catch gaps.

For any card sort, you need 15-30 participants for reliable patterns. Fewer than 15 and individual quirks dominate the data.

Running a card sort

Optimal Workshop is the gold standard for card sorting. Its OptimalSort tool handles open, closed, and hybrid sorts with built-in analysis tools. Plans start at $107/month.

Miro works for quick, informal card sorts during workshops. Create sticky notes for each item, share the board, and have participants drag them into groups. It's less structured but faster to set up and great for in-person or synchronous sessions.

Run a card sort with Optimal Workshop

To set up your card sort:

  1. Write your cards. List every content item, feature, or page in your product. Use plain language, not internal jargon. "Payment history" not "Transaction ledger module."
  2. Keep it under 60 cards. More than that and participants fatigue. If you have 100+ items, split into multiple sorts by product area.
  3. Randomize the order. Don't present cards in your current navigation order. That biases participants toward your existing structure.
  4. Include a few "tricky" items. Content that could logically belong in multiple places. These are the items that reveal the most about mental models.
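Steps 1 through 3 are easy to script before loading cards into your tool. A minimal Python sketch (the card labels and the 60-card cap are taken from the advice above; the helper name is my own):

```python
import random

# Illustrative card labels -- replace with your product's content items.
cards = [
    "Payment history",
    "Change billing plan",
    "Download invoices",
    "Team permissions",
    "API keys",
]

def prepare_cards(cards, seed=None, limit=60):
    """Dedupe, enforce the fatigue limit, and shuffle card order."""
    unique = list(dict.fromkeys(cards))  # dedupe, preserving first occurrence
    if len(unique) > limit:
        raise ValueError(f"{len(unique)} cards; split into multiple sorts")
    rng = random.Random(seed)  # seed only so a pilot run is reproducible
    rng.shuffle(unique)
    return unique

shuffled = prepare_cards(cards, seed=42)
```

Shuffling per participant (a fresh seed each session) is what actually removes ordering bias; a single fixed shuffle just swaps one ordering bias for another.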

Analyzing card sort results

The key output is a similarity matrix or dendrogram. Both visualize how often participants grouped items together.

A dendrogram is a tree diagram. Items that were frequently grouped together appear on nearby branches. Clusters that form at low merge distances (meaning most participants agreed) represent strong, natural groupings. Clusters that only form at high distances are weak groupings where people disagreed.

Look for:

  • Strong clusters. Items that 70%+ of participants grouped together. These are your navigation categories.
  • Orphan items. Items that don't cluster consistently with anything. These are the things users struggle to categorize, and they'll struggle to find them in your nav too.
  • Split items. Items that roughly half of participants put in one group and half in another. These might need to appear in multiple places (cross-links) or need better labeling.

In Optimal Workshop, the analysis tools generate these visualizations automatically. You can also export the raw data for custom analysis.
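If you do export the raw data, the similarity matrix and dendrogram can be rebuilt in a few lines of Python with scipy. The sort data below is illustrative, and the export format is an assumption (each participant's sort as a list of groups); real exports will need reshaping first:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Each participant's sort: a list of groups of card labels (made-up data).
sorts = [
    [["Invoices", "Payment history"], ["API keys", "Webhooks"]],
    [["Invoices", "Payment history", "API keys"], ["Webhooks"]],
    [["Invoices", "Payment history"], ["API keys", "Webhooks"]],
]

cards = sorted({c for sort in sorts for group in sort for c in group})
idx = {c: i for i, c in enumerate(cards)}
n = len(cards)

# Similarity: fraction of participants who put each pair in the same group.
sim = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for a in group:
            for b in group:
                sim[idx[a], idx[b]] += 1
sim /= len(sorts)

# Convert similarity to distance and cluster hierarchically.
# Pairs that merge at low heights in the tree are your strong groupings.
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist), method="average")
# dendrogram(Z, labels=cards)  # renders the tree (requires matplotlib)
```

The same matrix flags the patterns listed above: rows where no pairwise similarity clears roughly 70% are your orphan items, and pairs sitting near 50% are split items that may need cross-links.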

Tree testing: validate your structure

After card sorting tells you how to organize content, tree testing validates whether people can find things in that structure.

A tree test presents participants with a text-only version of your navigation hierarchy (no visual design, just labels and levels) and gives them tasks: "Where would you find information about changing your billing plan?"

Participants click through the tree to find the answer. The tool tracks their path, whether they succeeded, and how long it took.

Optimal Workshop's Treejack is the standard tool for tree testing. Maze also supports tree testing within its broader usability testing platform.

Run tree tests with 30-50 participants. You need enough data to calculate meaningful success rates per task.
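The sample-size advice falls out of simple binomial statistics. A sketch of the 95% margin of error on a per-task success rate, using the normal approximation (the 0.7 success rate is just the threshold mentioned below):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed success proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# At a 70% observed success rate, the uncertainty shrinks with n:
for n in (10, 30, 50):
    print(f"n={n}: 70% ± {margin_of_error(0.7, n):.0%}")
```

At n=10 the margin is roughly ±28 points, so a "70% success" task could plausibly be anywhere from failing to fine; by n=50 it narrows to about ±13 points, tight enough to act on.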

Interpreting tree test results

The key metrics:

  • Task success rate. What percentage found the correct answer? Below 70% means that part of your IA needs work.
  • Directness. Did they go straight to the right place, or did they backtrack? High directness means your labels are clear. Low directness with eventual success means the structure works but the labels are confusing.
  • First click. Where did participants click first? If most people's first click was in the right section, the top-level IA is sound even if they struggled deeper in the tree.
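All three metrics can be computed directly from exported path logs. A sketch with made-up data; the log structure and field names here are assumptions, not a real Treejack or Maze export format:

```python
# Per-participant logs for one task: the sequence of nodes clicked,
# plus the correct destination (illustrative data).
logs = [
    {"path": ["Account", "Billing", "Change plan"], "correct": "Change plan"},
    {"path": ["Settings", "Account", "Billing", "Change plan"], "correct": "Change plan"},
    {"path": ["Settings", "Profile"], "correct": "Change plan"},
]

CORRECT_FIRST = "Account"  # top-level section that contains the answer
DIRECT_LEN = 3             # length of the shortest path to the answer

n = len(logs)
success = sum(log["path"][-1] == log["correct"] for log in logs) / n
# Direct success: reached the answer without backtracking (shortest path).
direct = sum(
    log["path"][-1] == log["correct"] and len(log["path"]) <= DIRECT_LEN
    for log in logs
) / n
first_click = sum(log["path"][0] == CORRECT_FIRST for log in logs) / n

print(f"success={success:.0%} directness={direct:.0%} first_click={first_click:.0%}")
```

Reading the three numbers together is the point: high success with low directness points at labels, while low success with a correct first click points at the deeper levels of the tree.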

Iterating on your IA

IA testing is iterative. The process goes: card sort to discover structure, draft the navigation, tree test to validate, revise based on results, tree test again.

Common fixes:

  • Rename categories when directness is low but eventual success is high. The structure is right but the label is wrong.
  • Flatten deep hierarchies. If users consistently fail at the third or fourth level, your structure is too deep. Bring important items closer to the surface.
  • Add cross-links. When items consistently split between two categories, put them in both places rather than forcing a single location.

Recommended tools

Optimal Workshop for professional card sorting and tree testing with built-in analysis. Maze for tree testing integrated with broader usability testing. Miro for quick collaborative card sorts during team workshops.

Test your IA before building the UI. Fixing navigation labels in a tree test takes minutes. Fixing them after the site is designed and built takes weeks.