Checklist: “A list used to ensure that no tasks are omitted, no important aspects are forgotten, and all key functions are checked”
Advantages of usability checklists include:
- “visible to clients and evaluators in terms of purpose”
- “quick to administer”
- “relatively comprehensive”
- “flexible and undemanding in analysis”
I like checklists, especially for pilots and kickbikers. However, my perspective on EMR usability checklists, as someone who’s taken a few courses in human factors and cognitive science, is a bit skeptical. But I’m even skeptical about my skepticism, so at least I’m willing to be convinced otherwise.
To understand my concerns you need to realize the degree to which usability engineering is applied cognitive science.
Cognitive Science is the Iceberg
Why do I bring up cognitive science? Because EMR usability checklists are just the tip of a cognitive-science-applied-to-EMRs iceberg. If all you know or care about is the part of the iceberg that is visible and above the water, well, you don’t understand the complete picture.
Let’s start with the part of the iceberg that’s beneath the waterline. On my shelf, from one of my courses, is the first edition of Cognitive Science: An Introduction. Published by MIT Press in 1987, its introductory description of cognitive science holds up well.
One of the most important intellectual developments of the past few decades has been the birth of an exciting new interdisciplinary field called cognitive science. Researchers in psychology, linguistics, computer science, philosophy, and neuroscience realized that they were asking many of the same questions about the nature of the human mind and that they had developed complementary and potentially synergistic methods of investigation. The word cognitive refers to perceiving and knowing. Thus, cognitive science is the science of mind. Cognitive scientists seek to understand perceiving, thinking, remembering, understanding language, learning, and other mental phenomena. Their research is remarkably diverse, ranging from observing children, through programming computers to do complex problems, to analyzing the nature of meaning.
Since the above description was written, anthropology has been admitted to the fold. While I haven’t taken any actual courses in it, I’ve followed anthropology’s contributions to cognitive science and medical informatics, including theories of distributed and team cognition. An aside: My interest in anthropology began when I read Spradley’s Participant Observation, published just before I started medical school. My intention was to keep a set of field notes about my experiences. Swamped, I shelved the project. However, it turned out that one of my anatomy lab mates was studying me! (Segal, Daniel. 1988. “A Patient So Dead: American Medical Students and Their Cadavers.” Anthropological Quarterly 61:17-25.) It happened again during my Intelligent Systems degree at Pitt. My interest remains piqued.
By the way, in a previous post I distinguished between traditional EMRs, based on declarative representations of medical knowledge and patient data, and EMR workflow systems in which procedural knowledge about workflows and processes is represented. The declarative/procedural distinction is a classic topic in cognitive science.
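To make that distinction concrete, here is a minimal Python sketch; all of the names and workflow steps are hypothetical illustrations, not drawn from any actual EMR. A declarative representation stores facts about a patient with no notion of sequence, while a procedural representation encodes the ordered steps of a workflow that can be executed and tracked.

```python
# Illustrative sketch of the declarative/procedural distinction.
# All names and steps are hypothetical; no real EMR is implied.

# Declarative: facts about a patient, with no notion of "what happens next".
patient_facts = {
    "name": "Jane Doe",
    "age": 6,
    "allergies": ["penicillin"],
    "diagnoses": ["otitis media"],
}

# Procedural: an ordered, executable representation of a visit workflow.
well_visit_workflow = [
    "rooming",    # nurse: vitals, chief complaint
    "history",    # physician: history of present illness
    "exam",       # physician: physical exam
    "orders",     # physician: labs, prescriptions
    "checkout",   # staff: billing, follow-up scheduling
]

def next_step(completed_steps, workflow=well_visit_workflow):
    """Return the next pending step, or None when the workflow is done."""
    for step in workflow:
        if step not in completed_steps:
            return step
    return None

print(next_step(["rooming", "history"]))  # -> exam
```

The point of the sketch: the declarative dictionary can answer "what is true of this patient?", but only the procedural representation can answer "what should happen next?", which is exactly the knowledge a workflow system carries and a traditional EMR does not.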
Usability is the Tip of the Iceberg
The most frequently cited definition of usability is from the International Organization for Standardization:
The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
In a three-hour tutorial I used to give at TEPR (2004-2006), called EHR Workflow Management Systems: The Key to Usability, I would critique definitions of usability ranging from “user friendly” to the ISO 9241 definition.
(One of these days I’m going to update and publish those slides. Stay tuned. A $300 value, I believe; that was about what TEPR charged about a hundred folks to attend.)
I tweaked the ISO definition (am I in trouble?) to emphasize the relevance of EMR usability to the collaborative performance of teams of users:
- user-to-users,
- goal-to-goals, and
- environment-to-environments.
That last tweak is due to my engineering background. It is the entire system of patients, parents, guardians, specialists, subspecialists, primary care physicians, physician assistants, nurses, and staff, acute and subacute participants in all the workflows and processes of patient health, that needs to be optimized. Even if EMR usability checklists work with respect to a single user, goal, and environment, there is no guarantee that optimizing single-user usability won’t suboptimize higher-level global system goals. So I prefer a definition of usability that emphasizes team, rather than individual, performance.
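The suboptimization worry can be made concrete with a toy example; the roles are real enough, but every number below is invented purely for illustration. Suppose a physician minimizes personal charting time by deferring structured documentation, which saves the physician two minutes but forces downstream staff to reconstruct the encounter:

```python
# Toy illustration of suboptimization (all numbers are invented).
# Each entry: minutes spent per encounter, by role.

# Physician minimizes personal time by deferring structured documentation;
# nurse and biller must reconstruct the encounter downstream.
locally_optimal = {"physician": 8, "nurse": 12, "biller": 10}

# Physician spends 2 extra minutes documenting; downstream work shrinks.
globally_optimal = {"physician": 10, "nurse": 7, "biller": 5}

def team_time(minutes_by_role):
    """Total team minutes per encounter."""
    return sum(minutes_by_role.values())

print(team_time(locally_optimal))   # -> 30
print(team_time(globally_optimal))  # -> 22
```

The configuration that is best for one user (8 physician minutes versus 10) is worse for the team (30 total minutes versus 22). A checklist that scores only the single-user screen would reward the first configuration.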
As soon as one begins to think about usability in terms of cognition distributed across teams of humans embedded in an EMR workflow matrix, what I call the “three multis” come to the fore:
- multi-encounter,
- multi-site, and
- multi-specialty.
The three multis–spanning time, space, and specialty–are relevant to the pediatric medical home model, in a systems engineering sort of way. I mentioned the three multis in my 2003 white paper, “EMR Workflow Management: The Workflow of Workflow” (page 6), but I’ll more systematically highlight their relevance to the goals of the medical home model in a future post.
Why Am I Skeptical?
Since I clearly like usability as a topic, why am I skeptical about the use of checklists to measure EMR usability?
- Most checklists I have examined are not based on sophisticated notions of EMR workflow management. There is a deep and profound connection between workflow and usability (previous post: “Pediatric EMR Usability: Natural, Consistent, Relevant, Supportive, Flexible Workflow”). Since most EMRs are not workflow systems, the checklists I have seen don’t do this connection justice.
- If usability is relative to a specified user (such as pediatrician) and goal (managing a pediatric patient) in a particular environment (a real pediatric practice), how can someone–who is not the specified user (usability expert, not pediatrician), does not have the same specified goal (measuring usability, not managing a pediatric patient), and does not operate in the user’s particular environment (simulated, not real)–accurately estimate usability? There are ways around this, such as participant observation and other techniques to study cognition in the wild. But they do not lend themselves to checklists.
- Folks underestimate the long-term strategic cost of discouraging new, different, innovative, and improved EMR user interfaces, when they argue that current EMR user interfaces should be standardized to maximize positive transfer of learning between them. Recall the resistance from some DOS users to adopting the graphical operating systems from Apple and Microsoft? (By the way, the Xerox Star 8010 Dandelion, the first graphical user interface I used, predated the Apple Lisa by two years and Windows 1.0 by four years: Video 1. Video 2. Look familiar?). Usability checklists developed for DOS applications would have retarded, not encouraged, long-term OS UI usability. Let’s not make that mistake with EMRs.
Apply cognitive science to improve the human-computer interface; you get usability engineering. Apply usability engineering to improve the physician-EMR interface; you get EMR usability checklists (among other things). These checklists are the distilled residue of a tremendous amount of theoretical and experimental investigation. To adopt any of these checklists without understanding the cognitive science behind the usability, or the systems engineering behind the engineering, is to mistake the tip of the iceberg for what keeps it afloat.
Come to think about it, I’m not skeptical about EMR usability checklists, just their unskeptical use.
OK. That was me playing devil’s advocate.
5:48 AM, April 1, 2007
- Nuts (for squirrels)
- Extra shirts (if summer)
- Allen wrench for stems (black tape on it)
- Silver multitool thingy
- Two ball end wrench for lever (on Allen wrench)