Early and continual focus on users, empirical measurement of usage, and iterative design are core principles for designing technology systems that are not only easy to learn, remember, and use, but are also useful [1]. Research to date indicates that health information technology (HIT) systems work best when they are user centered, taking the needs, barriers, and preferences of target users into account [2].

HIT systems, including those that display patient-reported outcomes (PRO) data, are increasingly being utilized across clinical specialties. As electronic PRO systems become more widespread, it is necessary to focus on how clinical teams can use these data visualization systems to support individual patient decision-making and communicate changes in clinical status. Human-centered design (HCD) is one approach to optimizing the design of an interactive PRO system.

Heuristic evaluation is a valuable component of the HCD process that efficiently identifies usability problems that may impede user accessibility and satisfaction with a technology [3]. Prior studies in other domains (e.g., e-commerce) show that both user testing and heuristic evaluation are useful for identifying usability problem areas, and each approach has been shown to identify problems missed by the other. “Therefore, to obtain a comprehensive identification of usability problems, user testing and heuristic evaluation methods should be used together to complement each other regarding the types of problem identified by them” [4]. Yet few PRO design studies have incorporated heuristic evaluation into their development methods. A noted exception is work by Turner-Bowker et al. [3], which reports the value of a complementary design process combining heuristic evaluation with usability testing that engages potential users.

Effective visualizations are particularly important for PRO systems because they support clinical decision-making and engage patients in the shared decision-making process, all within the context of an already busy clinical workflow. These uses and their context demand visualizations that are accurate, easily understood, and quickly absorbed.

Therefore, the purpose of this commentary is (1) to discuss the use and benefits of heuristic evaluation as a complementary component of HCD for PRO systems to augment data collection from users; and (2) to call for additional work toward the development of PRO-specific heuristics to guide the development, implementation, and use of PRO systems.

The commentary draws from the example of the Comparative Effectiveness Research Translation Network (CERTAIN) Hub, developed by a team at the University of Washington. CERTAIN is a network of over 60 health care organizations that participate in over 20 projects, studies, and initiatives. These program activities aim to improve the care of patients at partner organizations in Washington State through the continuous evaluation of health care delivery and the generation of new evidence through research. One of the central projects is the CERTAIN Hub, a web-based portal for facilitating and improving patient data collection. The Hub offers a patient portal for completing a baseline survey about current health status, automatic enrollment in follow-up surveys, and a clinician dashboard for reviewing aggregate PRO data. This commentary describes the development of the clinician dashboard.

Human-Centered Design

Human-centered design is a multistage and iterative problem-solving process that includes ascertaining and analyzing the needs, desires, and limitations of users [5]. The goal of HCD is to connect designers and developers to the users’ context, workflow, and cognitive processes to understand and meet users’ needs. In ideal situations, HCD methods are deployed throughout the technology development life cycle [6], beginning in the analysis and design stages when the users’ needs, preferences, and capabilities, as well as creative possibilities, are explored (Figure 1).

Figure 1 

Human-Centered Design Process

Human-centered design activities are used in several disciplines—information systems, computer science, marketing, manufacturing—to inform the design, development, and selection of technology and products (see [6, 7] for examples of HCD). However, HCD is not consistently applied [8]. Recent calls exist in the popular press and the academic world for visionaries, developers, and decision-makers to extend the sensibilities of a human-centered focus to the design and adaptation process of health technologies [9, 10].

Failing to consider PRO system needs from the perspective of users (patients, providers, and administrators), including their context, workflow, and cognitive processes, may limit potential solutions for improving the functionality and effective use of PRO in clinical decision support, shared decision-making, and quality improvement. It is imperative that PRO systems include the right visualizations and be available at the right time (before and during a patient encounter) and place (in clinicians’ offices and exam rooms).

Heuristic Evaluation

HCD employs many data collection methods and tools that may be used singly or in complementary ways. Although users are the central focus in HCD, not all HCD methods involve direct user engagement—heuristic evaluation is one such method. Heuristic evaluation is an important and efficient foundation for, or complement to, other HCD methods [12, 13, 14, 15, 16].

Heuristic evaluation involves an expert examination of the user interface to assess compliance with recognized principles, the heuristics. Heuristic principles can inform the design process or be used to evaluate a draft, prototype, or product [17]. Heuristic evaluations can discover concerns that may not otherwise be identified, especially in areas where designers, developers, and users may be unaware of optimum design practices. Research shows that an expert evaluation can identify many of a system’s usability problems. As additional evaluators are added, the percentage of usability problems that are identified increases [12, 13, 14, 15, 16]. In some studies, heuristic evaluation has been shown to uncover design problems missed by usability testing [4].
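
One widely cited way to formalize this diminishing-returns pattern is the problem-discovery model of Nielsen and Landauer; invoking it here is our illustration, not a claim about the studies cited above. A minimal Python sketch, assuming their commonly quoted average single-evaluator detection rate of about 31 percent:

```python
# Nielsen and Landauer's problem-discovery model: the share of usability
# problems found by i independent evaluators is 1 - (1 - L)**i, where L
# is the proportion a single evaluator finds (about 0.31 in their data).
L = 0.31

for i in (1, 2, 3, 5, 10):
    found = 1 - (1 - L) ** i
    print(f"{i:>2} evaluator(s): ~{found:.0%} of problems identified")

# Approximate output: 1 -> 31%, 2 -> 52%, 3 -> 67%, 5 -> 84%, 10 -> 98%
```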

Although various heuristic lists exist, such as Nielsen’s 10 usability heuristics for user interface design [18], few focus on visualizations or provide enough detail to be of help to nonexperts. Principles for the design and evaluation of visualizations of quantitative information are scattered across disciplines such as cognitive psychology, data visualization, and human-computer interaction.

For example, the work by William Cleveland and Robert McGill [19] in cognitive psychology gave rise to the idea of graphical perception—how a user’s visual system decodes the information in a graph—and an understanding of which graphical perception tasks are performed with greater accuracy than others. Colin Ware, a data visualization expert, uses his knowledge of visual perception to create guidelines for the use of color, texture, and Gestalt Principles in visualizations, and for harnessing the power of preattentive processing to present information “at a glance.” Ben Shneiderman [20], through his work in human-computer interaction, has published a visual design guideline called the Visual Information Seeking Mantra: “Overview first, zoom and filter, then details-on-demand.”
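
To make the “at a glance” idea concrete, consider a small, hypothetical sketch (ours, not Ware’s) in Python with matplotlib: a single column rendered in a contrasting hue is detected preattentively, without serial search, whereas the same outlier encoded by height alone takes longer to find.

```python
import matplotlib.pyplot as plt

# Hypothetical PRO scores for eight visits; values are illustrative only.
visits = [f"Visit {i}" for i in range(1, 9)]
scores = [42, 38, 35, 55, 33, 30, 28, 26]

# Render every column in a muted gray except any that cross a
# hypothetical alert threshold; the contrasting hue is processed
# preattentively, so the outlier is seen "at a glance."
threshold = 50
colors = ["#b0b0b0" if s < threshold else "#d62728" for s in scores]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(visits, scores, color=colors)
ax.set_ylabel("Pain score (0-100)")
ax.set_title("Preattentive pop-out: one hue difference flags the outlier")
plt.tight_layout()
plt.show()
```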

Perhaps because excellence in graph design draws from work across disciplines, or perhaps because of an overreliance on graphing software—whose default settings are often unfortunate and sometimes wrong—the construction of medically related graphs is often suboptimal [21].

PRO visualizations, often in the form of tables and graphs, are the core of a functioning PRO system in a clinical context. Misinterpreted, overwhelming, or unused visualizations can severely compromise the usability and usefulness of PRO data, undermining the integration of this systematic expression of the patient’s voice into the care process.

To date, little work has been done to draw attention to the value of heuristic evaluation in the development or testing of PRO systems. The potential need for heuristics tailored for the PRO context—particularly heuristics related to PRO visualizations, which may be used by patients, providers, and administrators—has not been explored.

Furthermore, few heuristic lists recognize context of use. As emphasized by Berinato [22, 23], designing visualizations simply to be attractive is not enough: visualizations must be designed for specified, decision-making contexts that reflect the nature and purpose of the visualization.

This fragmentation of guidelines across disciplines, together with the scant attention paid to context of use, creates a void in the toolbox for developing PRO systems, where the users’ experience with the system depends primarily on the design of appropriate and well-constructed visuals suited to the workflow of a busy clinical setting.

Insights from Using Heuristic Evaluation: CERTAIN Example

The web-based clinician dashboard on the CERTAIN Hub was developed by investigators at the University of Washington and is designed to report PRO pain and disability measures for patients undergoing cervical or lumbar spine surgery. The dashboard displays aggregated, de-identified PRO data for use by clinicians and administrative staff.

Figure 2 depicts the stakeholder engagement process that was used to develop the first version of the CERTAIN Hub. The process facilitated the understanding of users’ preferences, context, workflow, and general capabilities [24] and produced a system that was generally accepted by users. Based on the users’ preference for simple graphs, the dashboard included column and bar graphs, line graphs, and pie charts as well as some data in tabular form.

Figure 2 

Process for Engaging Stakeholders in the Human-Centered Design (HCD) of PRO Dashboards [24]

Following the implementation of the CERTAIN Hub dashboard, we conducted a heuristic evaluation to identify opportunities to optimize the visual display of PRO data. The primary goal of this evaluation was to reduce errors in interpretation and accommodate rapid comprehension, which is critical for using PRO data for clinical and shared decision-making.

This heuristic evaluation, conducted by an individual with both medical and visual design expertise, was our initial step toward the development of PRO-visualization-specific heuristics. The evaluator drew from heuristic principles related to visual and graphical perception and best practices in graph design as well as years of experience in clinical practice and quality improvement. The use of evaluators who are experts in visual design and understand the analytic intent of the visualizations is important [25].

We provide a complete list of the findings and heuristic recommendations in Appendix A. General categories of visual findings included those related to graphical perception, color, Gestalt Principles, visual efficiency and prioritization, text elements, and graphical convention. Subtle changes, such as those presented in Appendix A, may not be readily anticipated by developers or articulated by users, but they can improve the efficiency of comprehending the message presented by the visualizations.

Comparisons are a key component of PRO systems. For providers and patients, comparisons show change over time or juxtapose a single patient against a cohort. For administrators, comparisons may be made between providers, practices, or national repositories. Selected findings, particularly those related to nuances in comparing information that benefit from explanation, are discussed further below.
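
As an illustration of the first two comparison types, a minimal matplotlib sketch (with fabricated scores and labels that are ours, not the dashboard’s) plots one patient’s trajectory against the cohort mean:

```python
import matplotlib.pyplot as plt

# Fabricated disability scores at four time points: a single patient's
# trajectory juxtaposed against the mean of the CERTAIN cohort.
timepoints = ["Baseline", "3 months", "6 months", "12 months"]
patient = [62, 48, 35, 28]
cohort_mean = [58, 44, 38, 33]

fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(timepoints, patient, marker="o", color="#ff7f0e", label="Patient")
ax.plot(timepoints, cohort_mean, marker="o", linestyle="--",
        color="#1f77b4", label="CERTAIN cohort")
ax.set_ylabel("Disability score (lower is better)")
ax.legend(frameon=True)  # a framed legend keeps the key grouped
plt.tight_layout()
plt.show()
```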

Visual Decoding of Quantitative Information

The column graphs showing longitudinal PRO data included data labels. Their addition made the visual decoding of the graphs less accurate. Figure 3 illustrates this problem. Users visually decode a column graph by judging the height of each column. Numerals placed on top of columns add height to the columns—proportionately more height to a short column than a tall column. Because the software behind the dashboard placed some numerals within the column and other numerals on top of the column, the perception problem associated with the addition of data labels was magnified.

Figure 3 

Columns with Value Labels: as Presented, and as Visualized by the User

In Figure 3, the value (35) encoded by the green column is 70 percent of the value (50) encoded by the blue column. As visualized by the user, however, the green column is about 85 percent as tall as the blue column. A clinician review of PRO data may occur “at a glance” before or during a patient exam. Such a visual distortion could affect interpretation of longitudinal or comparative PRO data.
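
The arithmetic behind the distortion can be made explicit with illustrative numbers (ours, not measured in the evaluation): if the label above the green column adds roughly 7.5 units of apparent height while the blue column’s label sits inside the column and adds none, the judged comparison becomes (35 + 7.5) / 50 ≈ 0.85, that is, 85 percent in place of the true 70 percent. Labels that add height unevenly across columns therefore bias the very ratio judgment the graph exists to support.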

The reviewer recommended removing value labels from the column graphs. If users need exact values, a one-row table could be placed immediately below the graph.
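
A minimal matplotlib sketch of this recommendation (the data and styling are hypothetical, not taken from the dashboard): the columns carry no value labels, and a single-row table directly beneath the axis supplies the exact values on demand.

```python
import matplotlib.pyplot as plt

# Hypothetical pain scores; columns are left unlabeled so their heights
# can be decoded accurately, and exact values go in a one-row table.
visits = ["Baseline", "3 mo", "6 mo", "12 mo"]
scores = [50, 42, 38, 35]

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.bar(range(len(visits)), scores, color="#1f77b4")
ax.set_ylabel("Pain score")
ax.set_xticks([])  # the table below carries the category labels

# One-row table immediately below the graph for users who need numbers.
ax.table(cellText=[[str(s) for s in scores]],
         rowLabels=["Score"],
         colLabels=visits,
         loc="bottom")
plt.subplots_adjust(bottom=0.2)
plt.show()
```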

Perceptual Organization

Unified colors, proximity, and bounding boxes are means of facilitating the perceptual organization (i.e., “grouping”) of connected information.

The dashboard used color inconsistently to encode and group categorical items within its charts and graphs. The color blue is a key element of the CERTAIN brand identity (e.g., the CERTAIN logo is blue). However, PRO data for the entire CERTAIN cohort were encoded in green, while PRO data from an individual provider were encoded in blue.

Items that share basic visual characteristics, such as color, appear to belong together. For this reason, the reviewer recommended encoding PRO data for the entire CERTAIN cohort in blue, consistent with CERTAIN’s brand identity.

The two elements of the dashboard key (“You” and “CERTAIN”) were not near each other, nor was there a bounding box around the key to visually group its elements, leading to errors in interpretation and in the grouping of page elements.

Figure 4 illustrates these perceptual organization issues and the recommended changes.

Figure 4 

A Dashboard Webpage: as Presented, and with Recommended Changes for Color Coding and Proximity
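
A minimal matplotlib sketch of the grouping recommendations (our hypothetical data and layout, not the production dashboard): the CERTAIN cohort is encoded in the brand blue, the individual provider’s data in a single contrasting color, and the two key entries sit side by side inside one bounding box.

```python
import matplotlib.pyplot as plt

# Hypothetical mean scores for two PRO measures, comparing an individual
# provider ("You") against the full CERTAIN cohort.
measures = ["Pain", "Disability"]
you = [38, 41]
certain = [44, 47]

x = range(len(measures))
width = 0.35

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar([i - width / 2 for i in x], you, width,
       color="#ff7f0e", label="You")
ax.bar([i + width / 2 for i in x], certain, width,
       color="#1f77b4", label="CERTAIN")  # brand blue for the cohort
ax.set_xticks(list(x))
ax.set_xticklabels(measures)
ax.set_ylabel("Mean score")
# A framed legend places both key entries near each other inside one
# bounding box, so they are perceived as a single group.
ax.legend(frameon=True, edgecolor="black")
plt.tight_layout()
plt.show()
```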


In the case of CERTAIN, heuristic evaluation revealed issues missed by the developers, an occurrence observed by others [26, 27]. The team implemented most, but not all, of the recommendations from the heuristic evaluation. Some recommendations could not be implemented because of limitations of the web-based platform used to publish the CERTAIN clinician dashboard.

Feedback was collected during a CERTAIN Hub user group meeting. Users were shown the original and revised visualizations for individual components as well as for the entire dashboard (see Figures 3 and 4) and, when polled, all expressed a clear preference for the revised visualizations.


Congruent with the work of Turner-Bowker et al. [3], our feedback to date indicates that heuristic evaluation has merit in PRO development. A key benefit of heuristic evaluation is that evaluators can inspect most, or even all, areas of a product. In contrast, target users can perform only a few high-priority tasks with a system during a usability test session.

Although our heuristic evaluation occurred after initial usability testing, sequencing heuristic evaluation before initial usability testing may have benefits. When heuristic evaluation is performed first, test participants will not struggle with problems that the heuristic expert would have identified and that would then have been corrected. Consequently, other important usability problems, particularly those related to workflow and task flow that are best identified by intended users, are more likely to be unearthed. Such preemptive work is best employed when domain-accepted heuristics exist.

We are not aware of a published, validated, comprehensive checklist of heuristic principles related to PRO visualizations. The CERTAIN heuristic evaluator used heuristic principles generally applicable to visualizations of quantitative information and applied them to the PRO context. The heuristic evaluators in the study by Turner-Bowker et al. appear to have taken a similar approach: they identified the need for improved instructions and text formatting, increased font size, page setups that avoid scrolling, and simplified presentation of feedback reports [3]. We caution PRO developers and researchers not to interpret heuristic recommendations derived from our experience, or from other early PRO heuristic evaluation work, as a definitive or comprehensive set of PRO heuristics, even in the aggregate.

We believe that efforts such as ours are just a first step toward delineating accepted PRO-visualization-specific heuristics. HCI studies invite the development of visualization heuristics that recognize visual principles as well as usability guidelines [25]. Indeed, our initial work in this area revealed many questions that need to be explored, including the following:

  • Would clinicians prefer to look at one PRO visualization at a time, selected from a menu? (Like the system used by Turner-Bowker et al. [3], the CERTAIN Hub provides a dashboard to clinicians.)
  • Will busy clinicians take the time to filter the data and look for detailed information, or are their needs limited to an overview?
  • Is it helpful or distracting to provide supplemental clinical information about individual patients on the same screen as their individual PRO scores when PRO systems are integrated into an electronic medical record (EMR)?
  • Is it helpful to show the minimum clinically important difference (MCID) for a change in PRO score?
  • What types of graphs are appropriate to share with patients during a clinical encounter?

We call for efforts to extend this initial work to develop heuristic principles for PRO visualizations—both as stand-alone systems and as systems integrated with EMRs—to benefit the design of these systems and ultimately improve PRO acceptance, use, and diffusion.


To improve clinician experiences and patient outcomes, PRO systems must be effectively designed and adapted for use within existing clinical workflows. While heuristic evaluations are regularly used in many contexts to benefit usability, their use to evaluate PRO data visualizations is surprisingly uncommon. Though the timing and application of the heuristic evaluations differed, we, like Turner-Bowker et al. [3], found heuristic evaluation to be beneficial and an efficient complement to other HCD methods. Furthermore, incorporating heuristic evaluation into PRO system development may be particularly valuable in cases where budgets will not accommodate extensive usability testing. Future work regarding the use of heuristics in the PRO domain may formally measure the additive value of heuristic evaluation in identifying user experience issues (e.g., akin to work by Hasan et al. in the context of e-commerce sites [4]).

Our commentary is, at its core, a call to action. Specifically, to achieve the goals of improved clinician experience and improved patient outcomes, we assert the need for PRO-specific heuristics. Given the importance of visualizations to the clinical use of PROs and the nuances of the users and context, the study of PRO heuristics merits attention. The delineation of PRO-visualization-specific heuristics would enhance the power of a heuristic evaluation. In addition, developers and researchers may leverage emerging PRO-visualization-specific heuristics as a foundation in HCD studies, user experience testing, and PRO system evaluation. Underlying our message is the notion that PRO-visualization-specific heuristics would not become a rule book, but would serve as a guide to the way we think about the increasingly crucial discipline of visual communication of quantitative data within the clinical context.

Suboptimal design will simply not work in a busy health care setting where the use of PRO data is not yet mainstreamed into patient care. Efforts extending the experience of using heuristic evaluation for the CERTAIN Hub dashboard are underway to develop PRO heuristics from the clinician perspective. We call on others to join this effort to yield development efficiencies, better design, and more accurate interpretations of PRO information within the context of clinical use.