DialogDesign by Rolf Molich
DialogDesign, Skovkrogen 3, 3660 Stenløse, Denmark, firstname.lastname@example.org, +45 4717 1731
CUE - Comparative Usability Evaluation
CUE stands for Comparative Usability Evaluation.
In each CUE study, a considerable number of professional usability teams independently and simultaneously evaluate the same website, web application, or Windows program.
The main purpose is to collect data on a series of questions:
• What's common practice?
What usability evaluation methods and techniques do professionals actually use? Do experienced professionals ever shun methods or techniques, even those that receive a lot of coverage?
• Are usability evaluation results reproducible?
• What is a "serious" or "critical" usability problem?
• How many usability problems are there?
What's the order of magnitude of the total number of usability problems that you can expect to find on a typical, nontrivial website?
• How many test participants are needed?
How many test participants are required to find most of the critical problems?
• Quality differences.
Are there important quality differences between the results the teams obtained?
• What's the return on investment?
If you invest more time in a usability evaluation - for example, 100 hours instead of 25 - will you get substantially better results?
• Usability test versus expert review.
How do professional usability testing and expert reviews compare?
DialogDesign and other usability experts use the answers to these questions to advise the usability community on sound approaches to usability evaluation.
DialogDesign conceived and managed the CUE studies described in more detail below.
The Nine CUE Studies
CUE-1 - Four teams usability tested the same Windows program, Task Timer for Windows
CUE-7 - Nine professional teams provided recommendations for six nontrivial usability problems from previous CUE studies
CUE-9 - A number of experienced usability professionals independently observed five usability test videos, reported their observations and then discussed similarities and differences in their observations (the "Evaluator Effect")
The results are discussed in more detail in the refereed CUE papers.
Questions or comments about CUE? Write to email@example.com