A heuristic evaluation is a usability evaluation method in which one or more evaluators compare an interactive system to a previously agreed list of heuristics and identify where the system does not follow those heuristics.
Overview of this page
- Related pages on this website
- Frequent mistakes in heuristic evaluation
- Arguments for heuristic evaluation
- Arguments against heuristic evaluation
- Myths about heuristic evaluation
Related pages on this website
- How to do a heuristic evaluation
- About heuristics
- Heuristic evaluation versus usability inspection
- Molich and Nielsen’s heuristics (1990)
- Nielsen’s heuristics (1994)
- ISO’s dialogue principles (2019)
Usability inspection: A usability evaluation based on the judgment of one or more evaluators who examine or use an interactive system to identify potential usability problems and deviations from established dialogue principles, heuristics, user interface guidelines and user requirements.
Heuristic evaluation: A usability inspection in which one or more evaluators compare an interactive system to a previously agreed list of heuristics and identify where the interactive system does not follow those heuristics.
Heuristic: A generally recognized rule of thumb that helps to achieve usability.
Evaluator: A user experience professional or a person with knowledge of the subject matter. Usability inspections and heuristic evaluations can also be carried out by people with little usability knowledge, for example users who are knowledgeable about the subject matter (“subject matter experts”).
Frequent mistakes in heuristic evaluation
1. The evaluation is based on gut feeling rather than heuristics
The evaluators first identify usability findings based on their gut feeling and only afterwards assign each finding to one or more heuristics. The correct approach is to let the heuristics drive the evaluation.
2. Evaluators do not fully understand the heuristics
Many heuristics look deceptively simple. They are compact, and some experience is required to interpret them correctly.
3. The evaluation is not based solely on the chosen heuristics
Some user experience professionals who claim that they do heuristic evaluations actually do usability inspections, because they report usability findings that could not possibly be found using the heuristics. For example, error messages that are not noticed by users are not covered by Nielsen’s heuristics.
4. Home-made rather than generally recognized heuristics are used
The evaluators use heuristics made up by themselves instead of generally recognized heuristics. In other words: The judge (the evaluator) writes the law (the heuristics).
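Mistakes 1, 3 and 4 all stem from not anchoring findings in a list of heuristics that was agreed on before the evaluation started. As a purely illustrative sketch (the function and variable names are hypothetical, not part of any standard tool; the heuristic names abbreviate Nielsen's 1994 set), a finding log could enforce that every reported finding cites an agreed heuristic:

```python
# Hypothetical sketch: a finding log that rejects findings not tied to an
# agreed heuristic, illustrating mistakes 1, 3 and 4 above.

# The agreed list is fixed BEFORE the evaluation starts (mistake 4: no
# home-made heuristics added along the way). Names abbreviate Nielsen's
# 1994 heuristics.
AGREED_HEURISTICS = {
    "visibility of system status",
    "match between system and the real world",
    "user control and freedom",
    "consistency and standards",
    "error prevention",
    "recognition rather than recall",
    "flexibility and efficiency of use",
    "aesthetic and minimalist design",
    "help users recognize, diagnose, and recover from errors",
    "help and documentation",
}

findings = []

def report_finding(description: str, heuristic: str) -> None:
    """Record a finding only if it is driven by an agreed heuristic."""
    if heuristic not in AGREED_HEURISTICS:
        # Mistakes 1 and 3: a finding that cannot be tied to the agreed
        # list belongs in a broader usability inspection, not in a
        # heuristic evaluation.
        raise ValueError(f"Not an agreed heuristic: {heuristic!r}")
    findings.append({"description": description, "heuristic": heuristic})

# A finding driven by an agreed heuristic is accepted:
report_finding("No progress indicator during file upload",
               "visibility of system status")
```

A real evaluation record would of course hold more detail (severity, location, evaluator), but the constraint is the point: the heuristic comes first, the finding second.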
Arguments for heuristic evaluation
1. Fast and cost-effective
A heuristic evaluation can be completed quickly, often within one or two days. There is no need to recruit and schedule test participants.
2. Relies on proven knowledge in usability
The use of heuristics that have stood the test of time, for example Nielsen’s heuristics and ISO’s dialogue principles, ensures that usability findings are reliable.
Arguments against heuristic evaluation
1. Limited number of heuristics
Heuristic evaluation requires the evaluators to make judgments by comparing an interactive system to a limited set of heuristics. Usability is much too complex to be expressed in just 10 or even 50 heuristics.
2. Overlooks problems
Heuristic evaluation overlooks problems that are not covered by the applied set of heuristics.
Myths about heuristic evaluation
Over the years I have heard a number of incorrect statements about heuristic evaluation:
1. Coverage of usability problems
Incorrect: If users have a problem that can’t be attributed to one of Nielsen’s ten heuristics, then it isn’t a usability problem after all
Correct: Only 10-30% of all usability problems can be easily attributed to one or more heuristics. Nielsen’s heuristics are great for preventing and finding well-known usability problems. An example of an important heuristic that is missing in Nielsen’s set of heuristics is, “Suitability for the users’ tasks.”
2. Scientific research
Incorrect: A lot of highly scientific research went into creating Nielsen’s and my lists of heuristics.
Correct: Nielsen and I did some studies, mostly on small systems.