How to Run a Heuristic Evaluation
Run a heuristic evaluation using Nielsen's 10 principles. Covers each heuristic with UI examples, severity ratings, solo vs team approach, and how to document findings.
A heuristic evaluation is an expert review of a UI against a set of established usability principles. You don't need users — you need knowledge of the principles and a structured approach.
It's not a replacement for usability testing, but it's fast, inexpensive, and catches a lot of real problems. A 2-hour heuristic evaluation can surface 20-30 issues that would take multiple usability sessions to find.
What you need before you start
Define the scope. Are you evaluating the full product or a specific flow? Trying to cover everything in one session leads to shallow coverage. Pick your highest-priority area — the sign-up flow, the main dashboard, the checkout, whatever is most critical to the business.
Choose your evaluators. Solo evaluations find fewer issues than team evaluations. If possible, get 3-5 evaluators who aren't the designer who built the product. Evaluators who built something have blind spots they can't see past.
Give each evaluator an independent session before any group discussion. Group discussion at the start biases individual findings.
Nielsen's 10 heuristics — with UI examples
1. Visibility of system status
Users should always know what's happening. Loading states, progress indicators, and success confirmations are all examples of this.
Violation example: A form submits with no loading state, no confirmation, and no visible change. The user clicks again, submitting twice.
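The fix for this violation is a small amount of state. A minimal sketch in TypeScript, assuming a hypothetical `saveForm` API call — the pattern, not the specific names, is what matters: track request status, ignore repeat clicks while saving, and surface a confirmation or error when the request settles.

```typescript
// Guard against double submission by tracking request status.
// `saveForm` is a hypothetical API call standing in for your real one.

type Status = "idle" | "saving" | "saved" | "error";

async function handleSubmit(
  saveForm: () => Promise<void>,
  setStatus: (s: Status) => void,
  getStatus: () => Status
): Promise<void> {
  if (getStatus() === "saving") return; // ignore repeat clicks in flight
  setStatus("saving");                  // drives spinner + disabled button
  try {
    await saveForm();
    setStatus("saved");                 // visible success confirmation
  } catch {
    setStatus("error");                 // tell the user it failed
  }
}
```

In a UI framework, `setStatus` would update component state that renders the spinner and disables the button; the guard clause is what prevents the duplicate submission described above.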
2. Match between system and real world
Use language and concepts users recognize. Avoid technical or internal terminology.
Violation example: An error that says "Error 422: Unprocessable Entity" instead of "We couldn't save your changes. Check that all required fields are filled out."
3. User control and freedom
Provide undo, redo, cancel, and back. Don't trap users.
Violation example: A multi-step modal with no close button and no way to exit except completing the flow.
4. Consistency and standards
Use conventions. If your primary action button is blue, make all primary action buttons blue. If "Settings" is in the top right, keep it there.
Violation example: Inconsistent button styles for the same action type across different sections of an app.
5. Error prevention
Design to prevent errors before they happen. Confirm destructive actions. Disable buttons when inputs are incomplete.
Violation example: A "Delete account" button with no confirmation dialog.
6. Recognition rather than recall
Don't make users remember information from one screen to use on another. Show relevant context inline.
Violation example: A checkout that asks users to enter shipping information they've already saved but doesn't surface the saved address.
7. Flexibility and efficiency of use
Provide shortcuts for expert users without hiding them from novices: keyboard shortcuts, bulk actions, saved templates.
Violation example: An app where experienced users have to click through 4 screens to do a common action with no shortcut available.
8. Aesthetic and minimalist design
Remove information that competes with what's important. Every element on screen that isn't necessary increases cognitive load.
Violation example: A dashboard with 12 stat cards, 3 charts, 2 notification panels, and a recent activity feed, none of which are prioritized.
9. Help users recognize, diagnose, and recover from errors
Error messages should explain what went wrong and how to fix it — specifically.
Violation example: "Invalid input" on a form field, with no indication of which field failed or what's wrong with it.
10. Help and documentation
Make help findable. Even products with excellent UX need it.
Violation example: A help link in the footer that opens a PDF manual last updated three years ago.
Severity ratings
Rate each issue you find on a 0-4 scale:
- 0: Not a usability problem at all
- 1: Cosmetic issue only — fix if time permits
- 2: Minor issue — low priority, fix if easy
- 3: Major usability problem — important to fix, high priority
- 4: Usability catastrophe — must fix before launch
Be honest with your ratings. The temptation is to rate everything 3 or 4. Reserve 4s for issues that will genuinely prevent users from completing critical tasks.
Running it solo vs with a team
Solo: Go through each flow you're evaluating while systematically checking against each of the 10 heuristics. Don't just scroll and react — be deliberate. For each screen, ask yourself: does this violate any of the heuristics? Document issues as you find them.
Two passes work well: one free-form walkthrough to get familiar, then a structured second pass checking each heuristic.
Team: Have each evaluator do an independent session first. Then combine findings. Use a severity vote where everyone rates each issue independently, then discuss significant disagreements. This reduces the influence of any one evaluator's opinion.
A team of 3-5 evaluators finds 60-75% of usability issues. A single evaluator finds 35-40%. The team approach is meaningfully better.
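The severity vote described above can be combined mechanically. A sketch in TypeScript, with illustrative field names: take the median of each issue's independent ratings (robust to one outlier) and flag issues with a wide spread for group discussion.

```typescript
// Combine independent severity votes (0-4) per issue.
// Field names and the spread threshold are illustrative choices.

interface Vote { issueId: string; severity: number } // 0-4 scale

function combineVotes(
  votes: Vote[]
): Map<string, { median: number; discuss: boolean }> {
  // Group ratings by issue
  const byIssue = new Map<string, number[]>();
  for (const v of votes) {
    const list = byIssue.get(v.issueId) ?? [];
    list.push(v.severity);
    byIssue.set(v.issueId, list);
  }
  // Median rating per issue; wide spread means evaluators disagree
  const result = new Map<string, { median: number; discuss: boolean }>();
  for (const [id, ratings] of byIssue) {
    ratings.sort((a, b) => a - b);
    const mid = Math.floor(ratings.length / 2);
    const median =
      ratings.length % 2 ? ratings[mid] : (ratings[mid - 1] + ratings[mid]) / 2;
    const spread = ratings[ratings.length - 1] - ratings[0];
    result.set(id, { median, discuss: spread >= 2 }); // disagreement → discuss
  }
  return result;
}
```

A spreadsheet does the same job; the point is that votes stay independent until aggregation, and only genuine disagreements consume discussion time.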
Documenting findings in Figma and Notion
Take annotated screenshots for each issue. Figma is good for this — create a page called "Heuristic Evaluation" in your project file, paste screenshots, and add sticky-note annotations with the heuristic violated, the issue description, and the severity rating.
Compile the full findings list in Notion as a table with columns for: Screen/Flow, Issue Description, Heuristic Violated, Severity, Recommended Fix.
Sort by severity descending. The top of the list becomes your prioritized fix list.
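If you keep the findings as structured data instead of (or alongside) the Notion table, the prioritized fix list is one sort away. A sketch in TypeScript whose fields mirror the columns above; all values are illustrative.

```typescript
// Findings table as data, sorted severity-descending to produce
// the prioritized fix list. Fields mirror the columns described above.

interface Finding {
  screen: string;
  issue: string;
  heuristic: string;   // which of the 10 heuristics is violated
  severity: number;    // 0-4 scale
  fix: string;
}

function prioritize(findings: Finding[]): Finding[] {
  // Highest severity first; copy so the source list stays untouched
  return [...findings].sort((a, b) => b.severity - a.severity);
}
```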
Heuristic evaluation vs usability testing
These are different and complementary.
A heuristic evaluation is fast, cheap, and done by experts. It finds issues with established design principles. It's best for catching clear violations and getting quick input early in a project.
Usability testing involves real users completing real tasks. It finds issues you'd never predict — the ones that emerge from actual behavior, mental models, and real-world context. It takes longer and costs more.
Use heuristic evaluation for quick audits, early-stage reviews, and stakeholder conversations. Use usability testing to validate design decisions with real behavior.
Running one doesn't make the other unnecessary. The strongest design process uses both.
Related
How to Do a UX Audit: A Practical Step-by-Step Process
Learn how to run a UX audit using Nielsen's heuristics, document issues with severity ratings, and present findings to stakeholders using Hotjar, Figma, and Notion.
How to Do Usability Testing
A practical usability testing guide — moderated vs unmoderated, writing task scenarios, recruiting participants, and using Maze and Hotjar.
What Is UX Design? A Plain-English Explanation
What UX design actually means, the UX process from research to testing, how UX relates to UI, what UX designers do day-to-day, and how to get started.