This post is prompted by Colin Hung’s excellent introduction to healthcare trade-offs setting the stage for this week’s #HCLDR tweetchat.
What difficult health decision have you had to make? How did you decide path forward?
— Colin Hung
When I tweet or write or talk about workflow, I tend to be quite emphatic. I’ve been thinking about healthcare workflow and workflow technology for decades. I’ve had lots of practical experience, both successful and unsuccessful. So I’m very opinionated on the topic.
I’m not that way about healthcare trade-offs. There are many different perspectives and practices, and what I know, or think I know, is literally several decades old! However, sharing it may be of some use, some useful grist to chew on, in a larger multi-disciplinary and heterogeneous discussion and community. Plus, I’d love someone to update me on where I can go next.
I first became aware of, and interested in, the very idea of “trade-off” during my freshman year in engineering at the University of Illinois. I believe the course was Engineering Economics. Here is its definition from Wikipedia.
“Engineering economics … is a subset of economics for application to engineering projects. Engineers seek solutions to problems, and the economic viability of each potential solution is normally considered along with the technical aspects.”
It was so interesting that after a couple of years I switched majors, out of engineering into Accountancy (yep, I was a pre-med accounting major, the only one I’ve ever heard of). I considered economics, but much of it seemed, well, removed from reality! 🙂 My advisor mentioned I should consider “Financial Engineering,” i.e. Accountancy. At the time healthcare cost inflation was constantly in the news, so that was when I began thinking about healthcare costs and benefits and decision-making. Even though I was an “Accy” major, I got my fill of Econ, around a half-dozen courses, mostly requirements for the Accountancy degree.
Why am I going on about these courses? Because economics is frequently touted as the “science of trade-offs.” Colin’s post and upcoming #HCLDR tweetchat subject reminded me of that phrase, which is why I start here, regarding my interest in healthcare trade-offs.
" is the science of trade-offs: every economic decision entails a trade-off between costs and benefits" http://t.co/h4VmQlUOLu
— Charles Webster MD
I say “start here” because I’ve meandered far and wide from purely economic models of trade-off: into psychological models of decision making, into behavioral economics, into computer models of decision making, and even into anthropology, neuroscience, and medical ethics.
It was in my medical school medical ethics class that I first encountered the idea of shared decision-making (though it wasn’t called that yet). I was fascinated to see that some of these models were based on the economic models of decision-making I’d been exposed to as an undergrad and then as a grad student in Industrial Engineering (yep, I sort of returned to the engineering fold).
By the way, if you are an economist or an expert on shared decision making, the following is what my wife calls the “Cat, Dog, Tree” version: the simplest and smallest number of ideas that can only go together in one way and achieve a goal. (If you put a cat and a dog and a tree together, the only way the story can end is with the cat up the tree and the dog barking at its base.)
A decision tree is a flowchart (hey! That sounds like workflow!) leading from a decision-requiring situation, such as whether or not to have surgery, through decision alternatives (have surgery, don’t have surgery), and finally to possible outcomes (surgery is successful, surgery is unsuccessful, condition resolves, condition does not resolve). Each of these possible outcomes has a probability of happening (given that the decision that flows into it was made) and a utility, or value, sometimes measured in monetary terms. The following is an example of a decision tree from Wikipedia.
By following every possible path, and multiplying probabilities times utilities, the expected utility for each possible decision is estimated. The “correct” decision is the decision with the largest expected utility. If you’re uncertain about some of the numbers you plug in, then you’ll play around with them to see whether it matters or not (sensitivity analysis). With respect to shared decision making about the potential benefits and costs of a contemplated medical course of action, there are several important points to be made (for my purposes).
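The expected-utility calculation and sensitivity analysis described above can be sketched in a few lines of code. This is a minimal illustration, not a clinical tool: the probabilities and utilities below are invented for a generic “surgery vs. no surgery” choice and do not come from any real evidence.

```python
def expected_utility(outcomes):
    """Sum probability-weighted utilities over one decision branch."""
    return sum(p * u for p, u in outcomes)

# Each decision maps to (probability, utility) pairs.
# Probabilities within a branch sum to 1; all numbers are hypothetical.
decisions = {
    "surgery": [(0.7, 0.9),    # surgery succeeds, near-full recovery
                (0.3, 0.3)],   # surgery fails, poor outcome
    "no surgery": [(0.4, 0.8), # condition resolves on its own
                   (0.6, 0.5)], # condition persists
}

for name, outcomes in decisions.items():
    print(name, expected_utility(outcomes))

best = max(decisions, key=lambda d: expected_utility(decisions[d]))
print("highest expected utility:", best)

# Sensitivity analysis: vary the probability of surgical success
# to see where the "correct" decision flips.
eu_no_surgery = expected_utility(decisions["no surgery"])
for p_success in (0.5, 0.6, 0.7):
    eu_surgery = expected_utility([(p_success, 0.9), (1 - p_success, 0.3)])
    print(p_success, "surgery" if eu_surgery > eu_no_surgery else "no surgery")
```

With these made-up numbers, surgery wins (0.72 vs. 0.62), but the last loop shows the recommendation flipping to “no surgery” once the assumed success probability drops to 0.5, which is exactly what sensitivity analysis is for.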
First, the probabilities should come from evidence-based medicine and/or estimated from clinical expertise and experience (I’ll not get into that debate, remember, Cat, Dog, Tree?) but the utilities/values should come from the patient. This is the Vulcan “cognitive system” mind-meld I mention in the following tweet (from a previous #HCLDR tweetchat).
Combine patient goals & values w/professional knowledge & expertise into single "cognitive system": virtual Vulcan mindmeld
— Charles Webster MD
Second: It’s complicated. Waaaaaay more complicated. Above is just about the simplest possible decision tree. I’ve seen decision trees with many more layers (decision, probability, … decision, probability, utility) and way more branching (instead of just two possible decisions, many more). The decision trees necessary to represent even slightly complicated clinical situations (say, two interacting co-morbidities) “blow up” and are basically almost impossible to explain to a patient.
Where do the numbers come from? The probabilities? Perhaps we’ll mine them from interoperable EHRs, eventually. The utilities? It turns out you can’t just ask a patient what their utility or dis-utility (negative utility) is for a particular state of affairs. Well, you can, but when you plug in the numbers and check them for consistency and validity, there are all sorts of problems. There have been many proposed solutions and workarounds for both problems. Meta-analysis from the literature. Delphi techniques. Indirect ways to estimate patient attitudes toward risk. And more. I am in favor of these and related important projects. My point is merely that formal approaches, such as I’m automatically inclined to support due to my training, are works in progress. And I look forward to hearing the reports of more progress I am sure are coming down the shared decision making pike.
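One classic indirect elicitation method is the “standard gamble”: the patient compares living in some health state for certain against a gamble offering probability p of full health and probability 1 − p of the worst outcome; the p at which they are indifferent is taken as their utility for that state. Here is a hedged sketch, where `prefers_gamble` is a stand-in for real patient responses (the simulated patient and the 0.65 threshold are purely hypothetical):

```python
def elicit_utility(prefers_gamble, lo=0.0, hi=1.0, steps=20):
    """Bisect on the gamble probability p until we bracket the
    patient's indifference point, which estimates their utility."""
    for _ in range(steps):
        p = (lo + hi) / 2
        if prefers_gamble(p):
            hi = p  # gamble still preferred: indifference point is lower
        else:
            lo = p  # sure thing preferred: indifference point is higher
    return (lo + hi) / 2

# Simulated patient whose true (hidden) utility for the state is 0.65:
# they take the gamble whenever its chance of full health exceeds 0.65.
simulated_patient = lambda p: p > 0.65
print(round(elicit_utility(simulated_patient), 2))  # converges toward 0.65
```

The real difficulty, of course, is that actual patients are not tidy threshold functions; their answers can be inconsistent across framings, which is exactly the consistency-and-validity problem noted above.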
(Hey, Jimmie, thank you for the following tweet!)
I like the idea of the
— James Legan MD
(“Option Grids are brief easy-to-read tools made to help patients and providers compare alternative treatment options.”)
I can recall when I first began to re-understand the role of formal models of decision making. My graduate advisor had a Ph.D. in Applied Mathematics from Johns Hopkins. I was struggling with some mathematical model of healthcare decision making. It’s hazy, but it might have been trying to figure out how to optimally pre-position air ambulances in the State of Illinois. I said something like, “I put in all the numbers and turned the crank, but I disagree with what the decision model is telling me!” To which my advisor replied: that’s not how it works. If you disagree with the decision model, then go back and understand the problem better. Do this over and over, until you understand the problem. The purpose of the decision model is not to tell anyone what to do, but rather to help you better understand the real question you should be asking.
Anyway, back to the present. Perhaps in response to intrinsic limitations of formal decision models, or perhaps simply as part of getting to a progressively better understanding of what the real problems and questions are, I see many informal but also more practical models being put forward. I’ve been researching those today, in preparation for the #HCLDR tweetchat, and tweeting some (see below). I am not qualified to offer a summary of the state of the art and science of shared decision making. I’m just qualified to tell you how I came to be interested in it, that it continues to fascinate me, and that I look forward to tonight’s discussion of healthcare trade-offs, as well as future #HCLDR discussions.
I will end on a metaphorical note: marriage! Everything I’ve been talking about so far reflects my economic and engineering training. So, naturally, I like mathematical models and computer simulations (which I’ve not discussed here, but which are pretty interesting, to me that is).
Compare the following tweet to my previous tweet. It’s really the same tweet! Goals, values, and preferences of the patient combined with knowledge, competence, and experience of the clinician. The only thing different is the metaphorical means of combination, coordination, and harmonization. The marriage metaphor is such a rich source of ideas. Think about the give-and-take, the trade-offs, and the evolution of married “cognitive systems.” I am sure there are many other potentially rich and useful metaphors out there.
half of clinical expertise is marrying patient goals, values & preferences to clinical knowledge, competence & experience
— Charles Webster MD
The only real point I’m making is that the formal mathematical engineering models I favor (though I am admittedly rusty) are just a small corner of a rich tapestry. I acknowledge this. I believe they have a role to play in understanding and managing healthcare trade-offs. But they also have much to learn from other traditions and schools of thought. If they don’t “feel” right, we need to understand the problem better.
P.S. The following are some recent papers I stumbled across while preparing to write this blog post. What others do you suggest?
Shared Decision Making: Model 4 Clinical Practice based on choice, option & decision talk full pdf
— Charles Webster MD
The clinical decision analysis using decision tree relevant to shared decision making
— Charles Webster MD
Regret theory approach 2 decision curve analysis: Eliciting decision maker preference & decision-making
— Charles Webster MD
The Connection Between Evidence-Based Medicine and Shared Decision Making abstract only, paywall
— Charles Webster MD