The seven steps in a successful heuristic evaluation are:
- Select evaluators
- Prepare heuristic evaluation
- Present the interactive system to be evaluated to the evaluators
- Evaluate the system in solitude
- Build consensus
- Present findings and discuss them with stakeholders
- Write heuristic evaluation report
The process for doing a usability inspection is quite similar to the process for doing a heuristic evaluation, except that a usability inspection does not have to be driven by a set of heuristics.
1. Select evaluators
Evaluators can be user experience professionals or people with knowledge of the subject matter, for example users and other stakeholders. Two to four evaluators provide the optimal cost/benefit ratio, but one evaluator will do (if there is just one evaluator, step 5, Build consensus, is not relevant).
Also select the set of heuristics that will be used for the heuristic evaluation. Often, Nielsen’s 10 heuristics are used, but there are alternatives. Do not use home-made heuristics; use a set of heuristics that is generally recognized and has stood the test of time.
2. Prepare heuristic evaluation
Determine the goal of the heuristic evaluation in cooperation with stakeholders. Examples of goals are: “Is there any friction when an ordinary citizen books a flight?”, “Is the help system usable?”, “Will a journalist be able to find the information they need on the media center page?”
Write a brief evaluation plan that contains: the goal of the evaluation; who will be involved; the set of heuristics chosen for the evaluation; how the results will be reported; the time plan; and the resources required.
Ask stakeholders to review and approve the evaluation plan.
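As a minimal sketch, the plan can also be captured as structured data so that none of the items listed above are forgotten. The field names and example values below are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationPlan:
    """Illustrative container for the contents of a brief evaluation plan."""
    goal: str              # e.g. "Is the help system usable?"
    evaluators: List[str]  # who will be involved
    heuristics: List[str]  # the chosen set, e.g. Nielsen's 10 heuristics
    reporting: str         # how the results will be reported
    time_plan: str         # key dates and milestones
    resources: List[str] = field(default_factory=list)  # resources required

plan = EvaluationPlan(
    goal="Is there any friction when an ordinary citizen books a flight?",
    evaluators=["Evaluator A", "Evaluator B", "Evaluator C"],
    heuristics=["Visibility of system status", "Error prevention"],
    reporting="Written report plus a findings meeting with stakeholders",
    time_plan="Solitary evaluation in week 12, consensus meeting in week 13",
)
```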
3. Present the interactive system to be evaluated to the evaluators
The person responsible for the interactive system, for example a developer or the author of a help system, presents the system to the evaluators and answers their questions about it.
4. Evaluate the system in solitude
Each evaluator evaluates the interactive system in solitude based solely on the heuristics.
The heuristic evaluation should be driven by the heuristics. Start by getting an overview of the interactive system; then go through the heuristics one by one and consider whether each heuristic is followed for each key workflow and each important page.
Evaluators should report only usability findings that can be clearly related to a heuristic. If evaluators report usability findings that cannot be clearly related to a heuristic, the heuristic evaluation turns into a usability inspection, which can, of course, also be valuable.
Remember to report positive findings, that is, things that work well. Otherwise, they could be accidentally removed because no one told the developers or the author that these things were appreciated.
Each evaluator writes down their usability findings.
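As an illustration (not part of the method itself), each finding can be written down in a consistent shape so that the later consensus meeting can compare like with like. The fields below are assumptions about what is worth recording.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsabilityFinding:
    """One finding recorded by one evaluator during the solitary evaluation."""
    heuristic: str    # the heuristic the finding relates to
    location: str     # key workflow or page where it was observed
    description: str  # what was observed
    positive: bool    # True for things that work well
    severity: int = 0 # e.g. 0 (cosmetic) to 4 (catastrophic); unused for positives

finding = UsabilityFinding(
    heuristic="Error prevention",
    location="Flight booking: payment page",
    description="The card number field silently truncates input to 15 digits.",
    positive=False,
    severity=3,
)
```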
5. Build consensus
The evaluators meet and try to reach consensus on highlights and lowlights. Stakeholders do not participate in this meeting.
This consensus building is of particular importance. It weeds out any usability findings that are peculiar to a specific evaluator. Only usability findings that are supported by a majority of the evaluators are reported to the stakeholders. Evaluators must be ready to accept findings that they have not found themselves.
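A minimal sketch of the majority rule, assuming each evaluator's findings have been reduced to short, comparable labels. The real consensus is built in discussion; this only illustrates the counting.

```python
from collections import Counter
from typing import Dict, List

def majority_findings(findings_per_evaluator: Dict[str, List[str]]) -> List[str]:
    """Keep only findings reported independently by more than half of the evaluators."""
    n_evaluators = len(findings_per_evaluator)
    counts: Counter = Counter()
    for findings in findings_per_evaluator.values():
        counts.update(set(findings))  # count each evaluator at most once per finding
    return [finding for finding, n in counts.items() if n > n_evaluators / 2]

reported = majority_findings({
    "Evaluator A": ["unclear error message on payment page", "good undo support"],
    "Evaluator B": ["unclear error message on payment page"],
    "Evaluator C": ["good undo support", "inconsistent terminology in help system"],
})
print(reported)  # only the findings supported by 2 of the 3 evaluators
```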
6. Present findings and discuss them with stakeholders
The evaluators present to interested stakeholders the usability findings on which a majority of the evaluators agree.
The stakeholders and the evaluators discuss the findings.
Any major disagreements that remain after the discussion can be resolved through usability testing.
7. Write heuristic evaluation report
One of the evaluators writes the heuristic evaluation report, whose format and content are similar to those of a usability test report.