Wardenburg Medical Record Study

This project developed out of conversations with a small group of healthcare providers at our on-campus health center at the University of Colorado Boulder. The providers had noticed that the rate at which they were seeing patients dropped significantly after adopting a new electronic medical record (EMR) system. Even a year after adoption, their productivity remained noticeably reduced. This inspired a partnership between our HCI research lab and the health center.

I joined this project at its inception, which gave me an opportunity to greatly influence the approach we used. Over the course of designing the study, I took on a leadership role and eventually became principal investigator, accountable for the design and conduct of the study with the support of my adviser, Dr. Katie Siek. Although I was an undergraduate, I also oversaw the contributions of two graduate students during the project.

Approach

In conversations with the health center staff during the project ideation phase, we found that they had limited insight into why they were having such difficulties with the system. All they knew was that the system was cumbersome and that completing their tasks with it took a lot of extra time.

Early in the project we recognized that we were studying a system deeply integrated into the workflows of a complex organization. Furthermore, the users suggested that, in many cases, the system was driving their behaviors. These considerations suggested we would have a hard time separating the people and their work from the technology. In a situation like this, we felt it would be beneficial to study the problem from different perspectives, so we used methodological triangulation: we came at the problem from three different approaches that each yielded a different form of insight.

First, because there was an existing system and because the staff highlighted usability as a concern, we conducted a usability evaluation using both the heuristic evaluation and cognitive walkthrough methods.

Second, we conducted an ethnographic, direct observation study. We chose this second method because we wanted to observe, in situ, how the system was being used and how it affected the work practices of the users. In total, I shadowed and took detailed field notes on the work of 10 healthcare providers (Nurse Practitioners and Physicians) for a total of 52 hours.

Facility
The layout of the clinic in which I conducted the ethnographic observations. The small unlabeled rooms are exam rooms where the providers met with their patients. The pods are the shared office spaces for the providers where most interactions with the EMR system took place. Certain exam rooms were assigned to be used by providers from different pods, which is represented by the color-coding. During the shadowing, I followed providers as they moved between spaces in the clinic and documented detailed field notes of their activities.

Third, we chose to conduct contextual interviews with the healthcare providers we observed. This allowed us to dig deeper into the reasons behind the behaviors we saw during our ethnographic observations. It also gave us an opportunity to discuss the technology with users in the context of their work and to ask probing questions about why they used the system in certain ways. Furthermore, we could explore the providers’ experiences with the system from their own perspectives.

In addition to the triangulation of these three methods, we also conducted a confirmatory focus group with four providers to get their feedback on our initial findings and ensure that these findings were consistent with their experiences.

Outcome

Our study uncovered a great deal about the challenges providers faced in using the electronic medical record system and the factors affecting their productivity. Some of these findings, specifically those from our ethnographic work and interviews, are summarized in our publication at PervasiveHealth.

In addition to those findings, we also discovered a significant number of usability issues (over 143 distinct issues from the cognitive walkthrough alone). Some were basic usability problems, like scroll windows embedded within scroll windows that were, in turn, embedded in more scroll windows. Others were more complex, relating to a mismatch between the way healthcare providers work and think about problems and the way the system conceptualized those activities.

One example of this mismatch related to a common pattern in how healthcare providers approach chronic illness: tracking key metrics over time and in relation to other metrics. The system had no concept of relating measures of patient health to one another or viewing them over time (i.e., values over time). Instead, providers kept separate paper records, transcribing individual measurements from the system into a single place so they could view those values in relation to each other.

lab-interface
The interface used to view lab results in the system. For patients with chronic conditions that needed more intensive monitoring, many providers would write information from this interface into paper sheets to see multiple different values. Since they could only view a single value at a time through this interface, they would have to open each result, one by one, and transcribe the information onto paper.
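To make the flowsheet idea concrete, here is a minimal sketch of the underlying transformation: pivoting single-value lab results into a test-by-date grid, much like the paper sheets providers kept by hand. The records, field names, and `build_flowsheet` helper are all hypothetical illustrations, not the vendor's implementation.

```python
from collections import defaultdict

# Hypothetical lab results as the system stored them: one value per record.
results = [
    {"test": "HbA1c", "date": "2010-01-05", "value": 7.9},
    {"test": "HbA1c", "date": "2010-04-12", "value": 7.2},
    {"test": "LDL",   "date": "2010-01-05", "value": 131},
    {"test": "LDL",   "date": "2010-04-12", "value": 118},
]

def build_flowsheet(records):
    """Pivot individual results into a test-by-date grid so related
    values can be read side by side, and over time."""
    dates = sorted({r["date"] for r in records})
    grid = defaultdict(dict)
    for r in records:
        grid[r["test"]][r["date"]] = r["value"]
    # Each row: the test's value on each date (None where no result exists).
    rows = {test: [row.get(d) for d in dates] for test, row in grid.items()}
    return dates, rows

dates, flowsheet = build_flowsheet(results)
# flowsheet["HbA1c"] is [7.9, 7.2] across the two dates
```

The point of the sketch is the shape of the view, not the code: once values are keyed by test and date, a single screen can show trends that previously required opening each result one by one.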

We worked with the health center to propose to the vendor an electronic flowchart: a view within the system that would present these values in relation to each other and over time. Our research informed the first implementation of this feature, which was originally designed for this health center but has since been deployed more broadly.

In addition to sharing the idea of flowcharts with the system’s developer, we also presented the broader findings of our research to the company. This became a dialogue about specific changes they could make to improve the system’s usability. For example, here is one usability concern we identified and presented to the vendor:

The cognitive walkthrough revealed a major issue with “match to intent” and “labeling” related to fields in certain electronic forms. These fields offered no search function and no way to select a value from a list of options; they relied on the user having a priori knowledge of the exact text of the value they wanted. For example, to set a patient’s diagnosis as “esophageal reflux,” the user had to type one of those two words for the value to populate the field. A provider might instead try typing synonyms such as “GERD” or “acid reflux,” which would not match.

Even more problematic, the field offered no guidance on how to interact with it, so a user might not even know to start typing to reveal possible values. The only guidance was the label, “dx group.” We suggested that the developer implement a more robust, flexible mechanism for matching user input to field values. We also suggested that they make the field labels descriptive and clear, and that they add help text indicating that the fields require the user to start typing to reveal possible matches.
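A synonym-tolerant typeahead, the kind of flexible matching we had in mind, could be sketched roughly as follows. The synonym table and the `suggest` function are illustrative assumptions; a real clinical system would draw aliases from a medical vocabulary rather than a hand-written dictionary.

```python
# Hypothetical synonym table mapping a canonical diagnosis to common aliases.
SYNONYMS = {
    "esophageal reflux": ["gerd", "acid reflux", "heartburn"],
}

def suggest(query, options, synonyms=SYNONYMS):
    """Return the canonical options whose label or known synonyms
    match the typed prefix, so 'gerd' still finds 'Esophageal Reflux'."""
    q = query.strip().lower()
    if not q:
        return sorted(options)  # empty field: show every option
    matches = set()
    for opt in options:
        # Direct prefix match against any word of the canonical label.
        if any(word.startswith(q) for word in opt.lower().split()):
            matches.add(opt)
        # Prefix match through the synonym table.
        for alias in synonyms.get(opt.lower(), []):
            if any(word.startswith(q) for word in alias.split()):
                matches.add(opt)
    return sorted(matches)
```

Paired with a visible dropdown of suggestions, a mechanism like this addresses both problems at once: users no longer need the exact text in advance, and the appearing list signals that typing reveals possible values.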