Keywords

Catheter-associated urinary tract infection (CAUTI), Clinical dashboard, Fall risk, Hospital-acquired pressure ulcers (HAPUs), Participatory design

 

Authors

  1. Reeder, Blaine PhD
  2. Makic, Mary Beth Flynn PhD, RN
  3. Morrow, Cynthia PhD
  4. Ouellet, Judith PhD
  5. Sutcliffe, Britney BSN
  6. Rodrick, David PhD
  7. Gritz, Mark PhD
  8. Wald, Heidi MD, MPH

Abstract

Hospital-acquired conditions such as catheter-associated urinary tract infection, stage 3 or 4 hospital-acquired pressure injury, and falls with injury are common, costly, and largely preventable. This study used participatory design methods to design and evaluate low-fidelity prototypes of clinical dashboards to inform high-fidelity prototype designs that visualize integrated risks based on patient profiles. Five low-fidelity prototypes were developed through literature review and by engaging nurses, nurse managers, and providers as participants (N = 23) from two hospitals in different healthcare systems using focus groups and interviews. Five themes were identified from participatory design sessions: Need for Integrated Hospital-Acquired Condition Risk Tool, Information Needs, Sources of Information, Trustworthiness of Information, and Performance Tracking Perspectives. Participants preferred visual displays that represented patient comparative risks for hospital-acquired conditions using the familiar design metaphor of a gauge and a green, yellow, and red "traffic light" color scheme. Findings from this study were used to design a high-fidelity prototype to be tested in the next phase of the project. Visual displays of hospital-acquired condition risk that are familiar and simplify complex information, such as the green, yellow, and red dashboard, are needed to assist clinicians in fast-paced clinical environments and should be designed to prevent alert fatigue.

 

Article Content

Hospital-acquired conditions (HACs), such as catheter-associated urinary tract infection (CAUTI), stage 3 or 4 hospital-acquired pressure injury (HAPI), and falls with injury, are common, costly, and largely preventable.1 These three conditions are part of a group of high-impact HACs targeted by the US Centers for Medicare and Medicaid Services (CMS) for elimination using financial incentives at both the episode level (CMS HAC Policy) and facility level (CMS HAC Reduction Policy), in addition to public reporting.2,3 These HACs are interrelated, nursing-sensitive conditions among adult inpatients.4 Functional incontinence (leading to catheters and CAUTI), HAPI, and falls share at least four common risk factors: advanced age, mobility constraints, and impairments in physical and cognitive function.5 Thus, many adult inpatients are at high risk for developing all three HACs,5,6 and at-risk patients will be subject to concurrent implementation of multiple HAC prevention strategies. In addition, these conditions and their prevention strategies are interdependent with potential adverse risk tradeoffs related to focusing on prevention of one condition without considering the entire clinical context for a given patient.6,7 As part of a larger project to develop and implement predictive risk models in a clinical decision support tool for these three interrelated HACs, the aim of this study was to use participatory design methods in the design and evaluation of low-fidelity prototypes to inform the design of a high-fidelity prototype dashboard that visualizes integrated risks based on patient profiles.

 

DESIGN BACKGROUND

The foundation of participatory design is the engagement of targeted end users in the design process for systems intended for their use.8,9 This study used participatory design efforts to develop low-fidelity and high-fidelity prototypes10,11 of clinical dashboard interfaces. Clinical dashboards use individual patient data retrieved from the electronic health record (EHR) to guide possible clinical decisions. A low-fidelity prototype illustrates the general look of an interface or display, whereas a high-fidelity prototype incorporates greater functionality and interactivity.10,11 Prototype fidelity can vary along a continuum of four dimensions: aesthetic refinement (eg, colors and graphic design), breadth of features (eg, number of features), degree of functionality (eg, completeness of features), and similarity of interaction (eg, input modality).11 This study's low-fidelity prototypes focused on rudimentary mockups of visual displays for evaluation of aesthetic refinement. High-fidelity prototypes planned for later stages of the design project will focus on interactive features informed by low-fidelity prototype evaluation results. Our process was informed by prior visual design efforts with clinicians and other stakeholders.12,13

 

METHODS

An iterative participatory design process was employed to develop a high-fidelity visual display prototype by engaging hospital-based clinical staff participants from two different healthcare systems. The design process involved review of the clinical dashboard literature, predesign focus groups to understand design context (n = 2), design of low-fidelity visual displays, and evaluation focus groups (n = 3) to inform the design of high-fidelity dashboard prototypes. Figure 1 shows the design process flow for the study. Interview guides used at each data collection session were iteratively updated at weekly team meetings based on responses from each prior data collection session. Members of the design team were a human-centered design methodologist, a nurse scientist, a health economist, a geriatrician, two research assistants, and a graphic design research assistant. Three members of the research team had applied experience and formal training in qualitative research methods. All study procedures were approved by the Colorado Multiple Institutional Review Board. Study participants provided verbal informed consent prior to enrollment and received a $50 gift card incentive for participation.

  
FIGURE 1. Process flow for design and evaluation of visual displays.

Participants and Setting

Participants were selected based on their roles as intended end users of the clinical dashboard, with a focus on nurses, by recruiting from two hospitals in different healthcare systems (predesign and evaluation sites) in the Western Mountain Region of the United States. Inclusion criteria were (1) current or former work responsibilities with patients in the hospital study sites; (2) employment as a nurse, nurse manager, physician, nurse practitioner, or physician assistant; and (3) willingness to participate in the study. Both sites employed a version of the same EHR system.

 

Design Process

Literature Review

We searched the health sciences literature for relevant clinical decision support dashboard studies and identified three key publications to inform project design efforts.14-16 These articles were reviewed and summarized by the graphic design research assistant, who presented this information to the design team in weekly meetings (Figure 1, step 1). Briefly, Dowding et al14 conducted a review of the literature and found that nine of 11 clinical dashboard studies used green, yellow, and red "traffic light" color coding to indicate alert status; only one study included nurse managers and charge nurses. In the design of five dashboard displays to support situation awareness, Franklin and colleagues15 identified the need to present clinical dashboard information in a ready-to-consume format in a complex and busy environment. In a review of nursing dashboard characteristics, Wilbanks and Langford16 also observed the green, yellow, and red motif as a common color-coding scheme and the need for ready-to-consume information, in addition to identifying three different types of dashboard "time-focus": retrospective, real-time, and predictive.

 

Predesign Sessions

Two focus groups, stratified by practice role, were conducted with a convenience sample of nurses (n = 2) and nurse managers (n = 3) from the predesign site in November 2017 (Figure 1, step 2). The objective of these focus groups was to rapidly identify information needs and broad design recommendations and to understand the general design context for a clinical dashboard that represents multiple competing HACs. After verbal informed consent was obtained, focus group questions addressed contextual work factors related to care delivery for patients with multiple competing risk factors, the role of information in patient care, and general visual display preferences. Focus groups were recorded using a digital audio-recorder. Predesign site focus group findings were summarized and communicated to the design team.

 

Low-Fidelity Prototype Design

Five low-fidelity visual displays of risk for CAUTI, HAPI, and falls were designed for evaluation by nurses, nurse managers, and providers (eg, physicians, nurse practitioners, and physician assistants) at the evaluation site (Figure 1, step 3 in the design process) in December 2017. Low-fidelity visual displays were designed based on literature review, predesign site focus group notes, and clinical and design expertise from the project team. To prompt discussion and solicit a range of participant perceptions for comparison, a mix of familiar and unfamiliar design metaphors and color schemes were employed in low-fidelity visual display design with minimal labeling. This approach is similar to the use of extreme characters from the Human Computer Interaction design literature17,18 or extreme case studies from the case study literature19,20 to highlight differences between examples that vary widely to enable comparative understanding of a particular domain. Figure 2 shows the five low-fidelity visual displays representing hypothetical patient data with gauge metaphors, bullseye metaphors, bar graphs, and line graphs.

  
FIGURE 2. Five low-fidelity visual display prototypes evaluated in this study.

Low-Fidelity Prototype Evaluation Sessions

Evaluation of visual display prototypes consisted of three focus groups and one individual participant interview (Figure 1, step 4). An individual interview was scheduled because one participant could not participate on the day of the focus group. Evaluation sessions were stratified by practice role, with a convenience sample of nurses (n = 6 total; focus group n = 5; interview n = 1), nurse managers (n = 8), and physicians and a physician assistant (n = 4) from the evaluation site hospital in November and December 2017. The objective of these evaluation sessions was to solicit participant perceptions of the five low-fidelity visual prototypes (see Figure 2) to inform the design of high-fidelity prototypes for a clinical dashboard for CAUTI, HAPI, and falls. Sample sizes were informed by guidance from a synthesis of focus group methods21 and prior design research.8,12,22,23 After verbal informed consent was obtained, the purpose of the study was explained as an attempt to understand how clinicians think about the relative risks of HACs and to identify preferred ways of visualizing competing HACs for decision making. Participants were given individual hard copies of each of the five visual displays shown in Figure 2 and were encouraged to write suggested changes or ideas on the printed designs. For each visual display, participants were asked to discuss preferences, what display elements were perceived as useful and usable, what design recommendations would improve individual displays, where a dashboard for multiple HACs would fit within existing information systems and workflows, how such a dashboard might be used, contextual work factors, and the role of the type of information displayed in patient care. Evaluation sessions were conducted by the first and last authors and the graphic design research assistant. The individual participant interview was conducted by the first and third authors. All data collection sessions were recorded using a digital audio-recorder. One researcher served as observer and notetaker during each session. Results were used to inform an interactive high-fidelity prototype with multiple healthcare systems in the next phase of the design-evaluation cycle of the project.

 

Data Analysis

Participatory design data were analyzed in two ways:

 

First, combined predesign and evaluation session data from both hospital sites were analyzed to characterize themes for context of use and general design requirements for an integrated HAC risk tool. Second, evaluation session data were analyzed to characterize general visual display preferences as well as specific preferences for each low-fidelity prototype that emerged during the focus group conversations. All recordings were transcribed verbatim. Three members of the research team who participated in the data collection sessions analyzed all transcripts using the framework method of qualitative analysis.24 Members of the coding team read all transcripts to familiarize themselves with the data. For predesign transcripts, the first author coded each transcript to identify general information needs, barriers, and design factors. Coded results were reconciled with concepts independently identified by the second author. Final coded results were reviewed by a research assistant and discussed for clarification. For the evaluation site transcripts, the first author and the research assistant independently coded each transcript and met to reconcile disagreements to identify visual display preferences for low-fidelity prototypes, information needs, barriers to information use, design factors, and potential usefulness and uses of a HAC dashboard. The second author independently documented a short list of concepts from each transcript, and these were reconciled against the independently coded results. All coded results for predesign and evaluation data were placed into a spreadsheet-based matrix for visualization and thematic analysis.

 

RESULTS

The themes for context of use and general design requirements identified from the analysis of combined data from predesign and evaluation sessions are presented in the following paragraphs, along with general visual display preferences (Table 1) and specific preferences for each of the five low-fidelity prototype visual displays (Table 2).

  
TABLE 1. General Visual Display Preferences From Low-Fidelity Prototype Evaluations
 
TABLE 2. Specific Preferences for Individual Low-Fidelity Prototype Visual Displays

Context of Use and General Design Requirements

Five themes were identified from analysis of the combined predesign and evaluation session data: Need for Integrated HAC Risk Tool, Information Needs, Sources of Information, Trustworthiness of Information, and Performance Tracking Perspectives. These themes are described below and followed by a list of general design requirements.

 

Need for an Integrated HAC Risk Tool

Participants at both sites acknowledged that there is currently no tool in the EHR to understand the interaction of risks for CAUTI, HAPI, and falls. There was general agreement by participants across sessions that such a tool would be useful. Potential uses of the tool were to support novice nurses, to facilitate interprofessional communication, and to serve as a training tool.

 

Information Needs

Information needs for all roles pertained to patient risk and health status as the basis for decision making about patient care. These included risk for falls, CAUTI, and pressure ulcers, in addition to risk for sepsis and other infections such as central line-associated bloodstream infection. Additional information needs were catheter duration, tethers, medications, and physician orders.

 

Sources of Information

Information needs for all roles were primarily served by EHR documentation and verbal communication with colleagues. In addition, nurses' information needs were served by color coding in the environment, such as indicator lights outside the patient room or yellow socks to indicate fall risk. While all roles participated in documentation and communication activities, nurses' assessment and EHR documentation activities play a major role in serving the information needs of nurse managers and providers. Physical therapist and occupational therapist assessment and documentation of patient risk and status were also noted as sources of information.

 

Trustworthiness of Information

There were notable differences between predesign site and evaluation site participants in statements about trustworthiness of information. Predesign site nurses and nurse managers were concerned about information trustworthiness in their practice environment. In contrast, evaluation site nurses and nurse managers did not raise the issue, while evaluation site providers were confident in the trustworthiness of available information. The predesign site had recently moved to a new EHR, which may have affected the participants' perceptions of trustworthiness compared to the evaluation site, which had a well-established EHR.

 

Performance Tracking Perspectives

Perspectives varied by site and role on the potential for personal performance tracking and tracking of others' performance. Nurses from the evaluation site were uncomfortable with the notion of individual performance tracking. They noted complexity of patients and membership in care teams as reasons for this discomfort. Predesign site nurses noted the lack of unit-level tracking ability and suggested it might help improve performance. Provider participants took this thinking one step further with the idea of self-tracking in relation to unit-level performance. Nurse managers from both the predesign and evaluation sites did not raise this issue.

 

General Design Requirements

 

* Accommodate different types of users

 

There is a need to design for different users based on age, work experience, and practice style, especially given that three generations are represented in the workforce

 

* Overcome alert fatigue

 

Participants at both sites identified too many alerts as a limiting factor for use of the EHR in decision-making

 

* Overcome change fatigue

 

Desensitization to change due to frequent changes in information systems was noted by nurses at both sites. Collectively, nurse participants' focus on this topic may be due to their higher levels of EHR use for documentation compared with other participant roles

 

* Incorporate simplicity for delivery of relevant information to improve usefulness

 

Maintain simplicity in designs due to time constraints

 

Identify a single place to access visualizations

 

Display only relevant information

 

Use existing data without creating additional work

 

Low-Fidelity Prototype Evaluation Results

Table 1 presents general visual display preferences, and Table 2 presents specific preferences for each of the five low-fidelity prototype visual displays (Figure 2, displays 1-5), from analysis of evaluation session data.

 

General Visual Display Preferences

Evaluation site nurse, nurse manager, and provider participants reported their visual display preferences based on interactions with low-fidelity prototypes of visual displays. These preferences were grouped as themes of Familiarity, Interface Recommendations, Customizability, Time Focus of Data Views, and Linked Displays. Table 1 shows general visual display preferences with supporting quotes from participants.

 

Familiarity

Participant discussions related to the use of familiar objects, experiences, or knowledge to inform visual display designs.

 

Interface Recommendations

Suggested changes to interfaces after viewing low-fidelity prototype designs included concerns about some labels, fonts, boundary placement, interactions, and information presentation.

 

Customizability and Time Focus of Data Views

Participants also wanted to be able to customize views and filter data to reduce information overload. Participants expressed a desire for features that enabled Time Focus of Data Views for retrospective, real-time, and predictive data views.

 

Linked Displays

Another suggestion was to link displays to enable navigation between different views of data. In particular, the notion of linking gauges (visual display 1, Figure 2) to line graphs (visual display 5, Figure 2) was widely supported.

 

Specific Preferences for Individual Low-Fidelity Prototype Visual Displays

The gauge representation with the red, yellow, and green color scheme (visual display 1, Figure 2) was widely seen as the most useful visual display of information. Participants offered specific feedback to improve the gauge display. These design recommendations were as follows:

 

* Add numeric labels

 

* Change arrow color and size to improve visibility and contrast

 

* Differentiate risk displays with increased font size, different background colors, or use of blank space

 

* Use different background color to indicate falls
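The traffic-light banding behind the preferred gauge display, together with the participants' recommendation to add numeric labels, can be illustrated with a minimal sketch. The cut points (40 and 70) and the 0-100 scale are hypothetical; the study does not report the thresholds used in the prototypes.

```python
# Illustrative traffic-light banding for a HAC risk gauge.
# Thresholds are hypothetical examples, not the prototypes' actual values.

def risk_band(score: float, yellow_at: float = 40.0, red_at: float = 70.0) -> str:
    """Map a 0-100 risk score to a green/yellow/red gauge band."""
    if not 0.0 <= score <= 100.0:
        raise ValueError("risk score must be in [0, 100]")
    if score >= red_at:
        return "red"
    if score >= yellow_at:
        return "yellow"
    return "green"

def gauge_label(condition: str, score: float) -> str:
    """Numeric label alongside the band, reflecting the recommendation
    to add numeric labels to the gauge display."""
    return f"{condition}: {score:.0f}/100 ({risk_band(score)})"

if __name__ == "__main__":
    # Hypothetical patient profile across the three HACs.
    for condition, score in [("CAUTI", 25), ("HAPI", 55), ("Falls", 80)]:
        print(gauge_label(condition, score))
```

A real dashboard would render these bands as colored gauge arcs; the point here is only that a single familiar mapping from score to color keeps the display consumable at a glance.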

 

 

Line graphs (visual display 5, Figure 2) were seen as useful by most participants because they were both familiar and represented change over time. Participants offered no specific design recommendations to improve the line graph visual displays. Bar graphs (visual display 3, Figure 2) were noted as familiar to participants, although there were mixed views regarding their usefulness for displays of HAC risk. Participants offered no design recommendations to improve the bar graph display.

 

The bullseye metaphor (visual display 4, Figure 2) was also generally perceived as not useful. It was seen as confusing because it was unfamiliar, requiring effort to learn, and because it potentially conflicted with display features in existing information systems used in support of specific programs (eg, "target zero for patient harm"). The gauge showing a yellow color range for CAUTI, red for HAPI, and blue for falls (visual display 2, Figure 2) was widely viewed as confusing and not useful, with the exception of one participant. Table 2 shows specific preferences with supporting quotes for each of the five low-fidelity prototype visual displays from Figure 2.

 

High-Fidelity Prototype Design

By March 2018, high-fidelity prototypes were designed based on low-fidelity prototype results for evaluation in the next phase of the project with participants from other hospital sites. Given the popularity of the gauge metaphor with the familiar red, yellow, and green color scheme, this design template was used to implement participant-recommended interface improvements and a predictive feature, within feasibility constraints and the availability of project resources. Figure 3 shows a screen capture of a first iteration of a high-fidelity prototype visual display.

  
FIGURE 3. Screen capture of first iteration interactive high-fidelity prototype for next-phase evaluation at three healthcare systems.

This high-fidelity prototype shows a patient scenario using hypothetical patient characteristics and a cumulative risk projection drawn from a risk model developed as a different aim of the larger project. Six different patient scenarios with corresponding risk projections were developed for implementation in high-fidelity visual display prototypes and evaluation with nurses, nurse managers, and healthcare providers at four different healthcare systems across the Midwest and Western states in the next phase of the project. Findings from low-fidelity prototype evaluations that were not implemented as features in the current high-fidelity prototype, such as interactivity to drill down from gauges to line graphs and multiple arrows on individual gauges to show current and projected risks, are reserved for future design iterations subject to availability of resources and project scope.
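The actual risk model belongs to a different aim of the larger project and is not described here. As a purely illustrative sketch of what a cumulative risk projection over a hospital stay might look like, one simple (assumed) model form treats each day's event probability as independent and accumulates the chance of at least one event:

```python
from math import prod

# Illustrative only: assumes independent per-day event probabilities,
# which is a simplifying assumption, not the project's actual risk model.

def cumulative_risk(daily_risks: list[float]) -> float:
    """Probability of at least one event over the stay so far,
    given hypothetical per-day event probabilities."""
    if any(not 0.0 <= p <= 1.0 for p in daily_risks):
        raise ValueError("daily risks must be probabilities in [0, 1]")
    return 1.0 - prod(1.0 - p for p in daily_risks)

# Hypothetical patient scenario: rising daily fall risk over a 4-day stay.
daily = [0.02, 0.03, 0.05, 0.08]
projection = [cumulative_risk(daily[: d + 1]) for d in range(len(daily))]
```

Under this sketch, the projection is monotonically nondecreasing over the stay, which is why a gauge arrow showing projected risk would only move toward the red band as days accumulate.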

 

DISCUSSION

Reducing multiple HAC events continues to be a focus and challenge in healthcare. Efforts to reduce HACs and comply with regulatory policies require interprofessional understanding of patient risk data and interpretation of data as presented within EHR systems. The aim of this study was to engage diverse healthcare professions in the evaluation of low-fidelity prototypes that provide visualizations of CAUTI, HAPI, and fall risks. Current studies continue to focus on individual HAC event reduction and system process checks such as checklists, daily surveillance, quality metric displays, and EHR alerts.25,26 This study sought to explore and understand acceptable ways to visualize the intersection of multiple HACs to facilitate patient assessment through participatory design with different clinical roles. Through multiple design phases, the findings confirmed the need for an integrated HAC risk tool and identified general design requirements. This novel approach to HAC visualization design elicited constructive feedback from study participants who interacted with low-fidelity decision support prototypes, resulting in a high-fidelity prototype for further evaluation.

 

Consistent with the clinical dashboard literature,14-16 participants found the red, yellow, and green dashboard colors to be most appealing and to provide a meaningful understanding of HAC risk. Similarly, the gauge presentation (visual display) of the data indicating risk was preferred by all participants because it provided quick understanding of HAC risks at a glance. Also consistent with the literature, participants requested retrospective, real-time, and predictive features for viewing data. While visual displays 2 (yellow/red/blue gauges) and 4 (bullseye) were found not useful by almost all participants, understanding why these displays were not helpful informed development of the high-fidelity dashboards. Participant comments regarding color contrasts in visual display 2 (Figure 2) and lack of familiarity with how to interpret visual display 4 (Figure 2) provided important information about dashboard design usability. Regarding the bullseye display (visual display 4, Figure 2) as it relates to prior studies: while participants in a document retrieval design study understood the concept of bullseye visualization of clusters of search results by document similarity and ranking, less experienced users were confused by the bullseye metaphor during test sessions.27 In addition, most participants in a big data visualization study considered bullseye displays of relaxed functional dependencies between data sets generally less useful than network displays.28

 

A growing body of evidence suggests that nonactionable alerts create substantial fatigue in practice and have the potential to reduce patient safety.29,30 Indeed, the concept of alert fatigue was raised by nearly all participants, with the exception of nurse participants from the predesign site. Lack of attention to the alert fatigue topic by these participants may be attributed to broad design discussions to understand the general context early in the design process. However, participants from all subsequent sessions were quick to discuss alarm fatigue and the adverse impact of too many "alerts" or "hard stops" in the EHR during the course of patient care.

 

Several themes related to context of dashboard use were identified from analysis of focus group data, which provided meaningful guidance in the development of the high-fidelity dashboard tool used in the next phase of the study. Using this approach to explore healthcare professionals' understanding of the interaction of multiple HACs, and the meaning they assign to visual displays of patient risks, enabled the research team to solicit feedback about participant needs to guide the iterative design of visual displays. Low-fidelity prototypes using familiar and unfamiliar design metaphors and color schemes enabled a clear understanding of participant preferences through direct comparison with visual displays that participants strongly perceived as confusing or not useful.

 

CONCLUSION

Multiple HACs are a challenge for healthcare professionals to address given rising patient acuities and shorter hospital lengths of stay. Clinicians in this study, regardless of their specific role in patient care, all had positive reactions to the notion of a visual display or dashboard that could dynamically represent risks for multiple HACs, namely, CAUTIs, HAPIs, and falls, to inform clinical decision making about care. Participants uniformly preferred familiar designs and color schemes to reduce cognitive load within the fast pace of clinical settings. Participants were open to these types of tools to improve practice and patient outcomes, noting that visual displays should be driven by existing data sourced from the EHR for more patient-specific assessment of HAC risk. Conversely, they were explicitly averse to visual displays that could increase alarm fatigue or that failed to quickly communicate meaningful information at a glance. The gauge presentation for representing data, coupled with the familiar green, yellow, and red color-coding system, was most easily understood by participants in this study. Findings suggest a clinical dashboard based on this design guidance to visualize multiple HAC data as risks for individual patients would be acceptable and potentially useful in practice to reduce unintended patient harm. Next steps are to evaluate a high-fidelity prototype based on these design findings with clinical end users.

 

References

 

1. Rajaram R, Chung JW, Kinnier CV, et al. Hospital characteristics associated with penalties in the Centers for Medicare & Medicaid Services hospital-acquired condition reduction program. Journal of the American Medical Association. 2015;314(4): 375-383.

2. Forrest GN, Van Schooneveld TC, Kullar R, Schulz LT, Duong P, Postelnick M. Use of electronic health records and clinical decision support systems for antimicrobial stewardship. Clinical Infectious Diseases. 2014;59(Suppl 3): S122-S133.

3. Haig KM, Sutton S, Whittington J. SBAR: a shared mental model for improving communication between clinicians. The Joint Commission Journal on Quality and Patient Safety. 2006;32(3): 167-175.

4. Burston S, Chaboyer W, Gillespie B. Nurse-sensitive indicators suitable to reflect nursing care quality: a review and discussion of issues. Journal of Clinical Nursing. 2014;23(13-14): 1785-1795.

5. Inouye SK, Studenski S, Tinetti ME, Kuchel GA. Geriatric syndromes: clinical, research, and policy implications of a core geriatric concept. Journal of the American Geriatrics Society. 2007;55(5): 780-791.

6. Sourdet S, Lafont C, Rolland Y, Nourhashemi F, Andrieu S, Vellas B. Preventable iatrogenic disability in elderly patients during hospitalization. Journal of the American Medical Directors Association. 2015;16(8): 674-681.

7. Manojlovich M, Lee S, Lauseng D. A systematic review of the unintended consequences of clinical interventions to reduce adverse outcomes. Journal of Patient Safety. 2016;12(4): 173-179.

8. Reeder B, Hills RA, Turner AM, Demiris G. Participatory design of an integrated information system design to support public health nurses and nurse managers. Public Health Nursing. 2014;31(2): 183-192.

9. Jeffery AD, Novak LL, Kennedy B, Dietrich MS, Mion LC. Participatory design of probability-based decision support tools for in-hospital nurses. Journal of the American Medical Informatics Association: JAMIA. 2017;24(6): 1102-1110.

10. Rudd J, Stern K, Isensee S. Low vs. high-fidelity prototyping debate. Interactions. 1996;3(1): 76-85.

11. Virzi RA, Sokolov JL, Karis D. Usability problem identification using both low- and high-fidelity prototypes. Paper presented at: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '96); April 13-18, 1996; Vancouver, BC.

12. Le T, Reeder B, Thompson HJ, Demiris G. Health providers' perceptions of novel approaches to visualizing integrated health information. Methods of Information in Medicine. 2013;52(3): 250-258.

13. Le T, Reeder B, Yoo D, Aziz R, Thompson HJ, Demiris G. An evaluation of wellness assessment visualizations for older adults. Telemedicine and e-Health. 2015;21(1): 9-15.

14. Dowding D, Randell R, Gardner P, et al. Dashboards for improving patient care: review of the literature. International Journal of Medical Informatics. 2015;84(2): 87-100.

15. Franklin A, Gantela S, Shifarraw S, et al. Dashboard visualizations: supporting real-time throughput decision-making. Journal of Biomedical Informatics. 2017;71: 211-221.

16. Wilbanks BA, Langford PA. A review of dashboards for data analytics in nursing. Computers, Informatics, Nursing: CIN. 2014;32(11): 545-549.

17. Djajadiningrat JP, Gaver WW, Frens JW. Interaction relabelling and extreme characters: methods for exploring aesthetic interactions. Paper presented at: Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (DIS '00); August 17-19, 2000; New York, New York.

18. Mose Biskjaer M, Dalsgaard P, Halskov K. Understanding creativity methods in design. Paper presented at: Proceedings of the 2017 Conference on Designing Interactive Systems (DIS '17); June 10-14, 2017; Edinburgh, UK.

19. Eisenhardt KM, Graebner ME. Theory building from cases: opportunities and challenges. The Academy of Management Journal. 2007;50(1): 25-32.

20. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health. 2015;42(5): 533-544.

21. Carlsen B, Glenton C. What about N? A methodological study of sample-size reporting in focus group studies. BMC Medical Research Methodology. 2011;11(1): 26.

22. Reeder B, Le T, Thompson HJ, Demiris G. Comparing information needs of health care providers and older adults: findings from a wellness study. Studies in Health Technology and Informatics. 2013;192: 18-22.

23. Reeder B, Revere D, Olson DR, Lober WB. Perceived usefulness of a distributed community-based syndromic surveillance system: a pilot qualitative evaluation study. BMC Research Notes. 2011;4: 187.

24. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Medical Research Methodology. 2013;13: 117.

25. Koenig L, Soltoff SA, Demiralp B, et al. Complication rates, hospital size, and bias in the CMS hospital-acquired condition reduction program. American Journal of Medical Quality. 2017;32(6): 611-616.

26. Saint S, Fowler KE, Sermak K, et al. Introducing the no preventable harms campaign: creating the safest health care system in the world, starting with catheter-associated urinary tract infection prevention. American Journal of Infection Control. 2015;43(3): 254-259.

27. Sutcliffe AG, Ennis M, Hu J. Evaluating the effectiveness of visual user interfaces for information retrieval. International Journal of Human-Computer Studies. 2000;53(5): 741-763.

28. Caruccio L, Deufemia V, Polese G. Visualization of (multimedia) dependencies from big data. Multimedia Tools and Applications. 2019;78(23): 33151-33167.

29. Winters BD. Effective approaches to control non-actionable alarms and alarm fatigue. Journal of Electrocardiology. 2018;51(6S): S49-S51.

30. Paine CW, Goel VV, Ely E, et al. Systematic review of physiologic monitor alarm characteristics and pragmatic interventions to reduce alarm frequency. Journal of Hospital Medicine. 2016;11(2): 136-144.