My “Fresh” Perspective on an Old Complaint
A few months ago, I read a very thoughtful and detailed article by Atul Gawande about why doctors hate their computers.
In this article, Dr. Gawande gave many examples of how computer systems have hampered doctors’ workflows. It began with an anecdote about the three-year nightmare of adopting an EHR at his hospital: clunky interfaces and glitchy software, billions of dollars lost to the “learning curve” as the tech kept clinicians from seeing patients, redundant information overwhelming caregivers, and increased demands on clinicians to enter “required fields” that were arguably not essential. The article then transitions to related discussions: other fields wrestling with tech, the socio-technological ways computers have restructured doctor-doctor interactions, the benefits EHRs could provide at their best, and how systems need both mutation and selection in order to improve.
To be honest, it was a lot to process, and it felt like 2-3 articles smashed together. But the through-line of the piece was this: the main problem with computers is that they place too many requirements on doctors, focusing their attention toward screens and away from patients. This de-humanization of clinical care is bad, he argued, because in healthcare, patients aren’t just sick; they’re also scared. He argued that when doctors used scribes to take notes and navigate the awful interfaces, both doctor and patient satisfaction seemed to improve. Essentially, I read his argument as saying “I’m not opposed to technology, but I am opposed to bad, inappropriately designed technology.”
After first reading Dr. Gawande’s article, I started to see a connection to other areas of my life where technology and society were colliding unceremoniously. Technology gives people the power to do new things that can be amazing, terrible, or both. It can scale up operations we’ve never been able to do before, enabling mass surveillance at the push of a button or perpetuating bias in algorithmic decision-making even more explicitly than unconscious human bias. And these technological “disruptions” usually happen faster than most other aspects of society change, which means we usually don’t get the chance to sit down and rationally decide who will benefit, who will be harmed, and whether that is a good tradeoff for society.
In the case of EHRs, it seemed to me that the issue wasn’t necessarily that doctors didn’t like interacting with bad systems, but rather that they were uncomfortable with how technology was forcing them to comply with the procedures that administrators, insurers, etc. demanded of them. In the old days, the doctor was in charge of what they decided to write about a patient (despite passive-aggressive reminders/trainings to remember to write “pneumosepsis” to be able to bill more than they would for just “pneumonia”). But now, technology has given power to other players, taking power away from the doctors as a result (now they must enter all of the information that the administrator wants to collect). I shared my thoughts on facebook to discuss it with my friends, and one of them suggested I should consider blogging some of these thoughts.
Sometimes I’m Wrong ¯\_(ツ)_/¯
Hopefully you only skimmed or skipped the facebook post, because it wasn’t particularly well-argued or cohesive. I wanted to do a better job for this first blog entry, so I decided to read more voices and get more opinions on the issue to make for a better discussion. I came across a fantastic (and rather short) book all about doctors and the growing pains of technology, “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age” by Dr. Robert Wachter. This book changed my mind and convinced me that I was wrong.
The Digital Doctor was written by an M.D. to explore healthcare at the dawn of the computer age. It tells the story of the current state of EHR systems, how we got here, the tangible harms this sometimes causes, and where we could hopefully end up one day with Health IT. The author’s argument is that although computers have certainly made healthcare safer, there are still a lot of problems that need to be addressed, and many of the people in power are reluctant to touch issues in these systems when they arise (sometimes because of irrational risk aversion, sometimes because government policies are misguided or coercive, sometimes because any criticism of Health IT can get you labeled a technophobe and a luddite).
The entire first part of the book is dedicated to the current version of Health IT: the context in which it was created, the problems it mostly solves, and the failures that arise specifically because of the interventions used to get computers adopted by hospitals. One example is the medical note, which used to be how doctors communicated the patient’s narrative of care from one clinician to the next. Policies that mandate comprehensive notes for legal and billing purposes now incentivize doctors to copy and paste pages of notes every time they see a patient, because they won’t get paid as much if the note is too short. At best this leads to irrelevant parts of the patient’s state being recorded “just in case,” and at worst it harms care by carrying forward stale information, such as a patient’s abnormal heart rate from weeks ago. One particularly funny anecdote Dr. Wachter cites is a patient whose daily notes all mention taking his temperature in his foot and getting a reading of 98.5 degrees F, which wouldn’t necessarily be weird if that patient’s legs hadn’t been amputated earlier that month.
After reading The Digital Doctor, I realized I’d been far too dismissive of the usual gripes about the design of EHRs. My facebook post had only briefly acknowledged clunky design before quickly moving on to what I thought was the “real” issue:
“So everyone gets mad at the EMR (which is admittedly very clunky and tbh worse designed than it needs to be) when in reality, the EMR is just the medium for competing values and interests. The tools make these conflicts more direct, and it makes the balancing act a lot harder by giving everyone more power to fight with one another.”
Dr. Wachter persuaded me that I wasn’t focusing on the right problem. While it’s important for people outside a system to think for themselves with a fresh perspective, it’s also important to take seriously those affected by an issue when they tell you what they think the problems are. Sometimes they’re right, sometimes they’re wrong, and usually it’s a little bit of both. In my case, I hadn’t appreciated the impact a poor design can have on how caregivers practice. Poor design leads to alarm fatigue: because a given machine that a patient is hooked up to might beep or raise warnings 150+ times per day, nurses learn to ignore the beeps (because the machine often treats “these two drugs have occasionally been shown to interact poorly” the same as “you just ordered a 38x overdose and will poison the patient”). Poor design leads to interruptions and distractions: unlike EHRs, airplanes have the “sterile cockpit” rule at critical junctures, such as below 10,000 feet. The FAA recognizes that pilots need to devote their undivided attention to flying (rather than documenting takeoff time or fuel levels) because otherwise their performance will degrade and people will die. These design principles have not been applied to doctors working in the ICU.
The Future of Health IT
Technology should be improving care, and if the current system isn’t just inconvenient but also harmful, then the design is more than just “admittedly very clunky.” But we’re still just at the beginning of the information age. Experts agree that machines are not meant to replace doctors but rather to change the landscape of care. Sometimes this is good (e.g. telemedicine, which could eliminate many unnecessary trips to the doctor’s office) and sometimes it creates losers (e.g. radiology has been decontextualized from the patient’s care ecosystem, sometimes even offloaded to radiologists in India or, maybe one day, image-to-text computer algorithms).
As I thought more about how technology could change the landscape of care, I was reminded of an old episode of my favorite podcast The Weeds from May 2016. In the first 33 minutes of this episode, they discuss The Productivity Paradox and Robert Solow’s 1987 quip “You can see the computer age everywhere but in the productivity statistics.” They spend most of their time discussing Health IT specifically.
It’s like how the advent of electricity initially led to only a slight increase in productivity, until engineers realized that they shouldn’t just put electric motors where the steam engines used to be, but should instead redesign their factories around these new small motors in more efficient ways than big, clunky steam-powered tech could allow. Even seeming success stories in IT are not yet showing the gains we might expect: search engines and Google Maps undoubtedly give me access to information at levels previous generations never had, but we still are not seeing growth as wildly rapid as what we saw in the 1940s and 1950s. We still seem to be tinkering with our metaphorical engine placement rather than redesigning the whole factory. As Henry Ford reportedly said, “If I’d asked people what they wanted, they would have said, ‘faster horses.'”
One thing I will note, however, is that The Digital Doctor and The Weeds both reached very similar conclusions about the promise of technology and innovation for healthcare, and they saw similar regulatory and societal issues that need to be solved before we can really unlock that potential. Perhaps that means these sources have identified the “little bit of column A, little bit of column B” for how to solve most problems, or perhaps they share biases/assumptions. If anyone has any reading/listening recommendations for other wildly different interpretations of what we should do for the future of Health IT, I’d love to hear them! It’s always a little concerning when everyone agrees because that usually means something is being overlooked or undersold.
Now What Do I Do?
When a psychiatric patient is discharged from the hospital, their caregivers write a discharge summary about their course of treatment and why they believe the patient is stable enough to leave. Unfortunately, this assessment isn’t always correct. Some studies have found that 40-50% of patients discharged with depression and schizophrenia are readmitted within a year. Readmissions (especially quick ones, like within 30 days) are bad for patient care and bad for the hospital. I’m working on a research project that tries to use Machine Learning on discharge summaries to improve risk assessments of whether a patient is low-risk enough to go home.
Reading through notes is both physically and mentally exhausting for humans. It’s distressing to spend hours reading about patients suffering from psychiatric disorders and difficult lives involving homelessness, suicide attempts, alcoholism, and more. And no note is “easy”: a single note might mention both beneficial and harmful indicators, such as a loving family, a history of substance abuse, a stable job, and an abusive relationship. Not to mention, the patient is being discharged because their caregivers do think the patient is healthy enough to leave. Based on preliminary results, it usually takes a human 3-6 minutes of reading per note to decide readmission risk one way or the other.
Despite AI’s many successes, some tasks remain hard, and natural language processing is (for the most part) one of them. Based on preliminary (not-yet-published) experiments, humans are still able to outperform the models we have tried so far.
But this situation seems like the perfect opportunity for an AND rather than an OR. If the goal is to identify high-risk patients, then it doesn’t need to be purely computer or purely human. I want to explore whether machine-generated explanations can help humans make better-informed risk assessments. The experiments are still in their early stages, but I’m especially interested in questions such as:
– Would human performance improve if shown ML-derived risk scores?
– Can we decrease the amount of time it takes to read a note while maintaining (or improving) human performance?
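To make the “AND rather than OR” idea concrete, here is a minimal, self-contained sketch of the kind of pipeline I have in mind: a toy bag-of-words logistic regression that produces both a risk score and a crude “explanation” (the terms pushing that score up). The note snippets and labels below are invented purely for illustration, and this stand-in is far simpler than the actual models in our experiments:

```python
import math
import re

def tokenize(text):
    """Lowercase a note and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

class BagOfWordsLogReg:
    """Tiny logistic-regression risk scorer over bag-of-words counts."""

    def __init__(self, lr=0.5, epochs=200):
        self.lr = lr            # gradient-descent step size
        self.epochs = epochs    # passes over the training notes
        self.vocab = {}         # token -> feature index
        self.weights = []
        self.bias = 0.0

    def _vectorize(self, tokens):
        vec = [0.0] * len(self.vocab)
        for t in tokens:
            i = self.vocab.get(t)
            if i is not None:   # ignore tokens unseen in training
                vec[i] += 1.0
        return vec

    def _proba(self, x):
        z = self.bias + sum(w * xi for w, xi in zip(self.weights, x))
        return 1.0 / (1.0 + math.exp(-z))

    def fit(self, notes, labels):
        for note in notes:
            for t in tokenize(note):
                self.vocab.setdefault(t, len(self.vocab))
        self.weights = [0.0] * len(self.vocab)
        X = [self._vectorize(tokenize(n)) for n in notes]
        for _ in range(self.epochs):
            for x, y in zip(X, labels):
                err = self._proba(x) - y       # logistic-loss gradient term
                self.bias -= self.lr * err
                for j, xj in enumerate(x):
                    if xj:
                        self.weights[j] -= self.lr * err * xj

    def predict_proba(self, note):
        """Estimated probability of readmission for one note."""
        return self._proba(self._vectorize(tokenize(note)))

    def top_terms(self, note, k=3):
        """The note's tokens that push the risk score up the most --
        a crude stand-in for a machine-generated explanation."""
        toks = set(tokenize(note)) & set(self.vocab)
        return sorted(toks, key=lambda t: -self.weights[self.vocab[t]])[:k]

# Invented, purely illustrative snippets -- NOT real clinical text.
notes = [
    "history of substance abuse and a prior suicide attempt",    # readmitted
    "currently homeless with ongoing alcoholism and self harm",  # readmitted
    "supportive family, a stable job, adherent to medications",  # not readmitted
    "strong social support and engaged with outpatient care",    # not readmitted
]
labels = [1, 1, 0, 0]

model = BagOfWordsLogReg()
model.fit(notes, labels)
```

The idea is that `model.predict_proba(note)` gives a risk score while `model.top_terms(note)` surfaces the risky-looking phrases, which is the sort of output one could show alongside the note to (hopefully) speed up and sharpen a human reviewer’s judgment rather than replace it.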
I can’t say for sure whether I’ll continue working on questions specifically like this, but I am absolutely fascinated by how technology changes the way we interact with our jobs and with one another. I’d love to hear any more suggestions for what to read. Perhaps I’ll realize once again that I’m focusing on the wrong thing!