May 9, 2019
Doctors making life-and-death decisions about cancer treatments, heart surgeries, or organ transplants usually do not think about how artificial intelligence might assist them.
Researchers at Carnegie Mellon University say that is exactly how clinical AI tools should be designed: so that physicians scarcely have to think about them.
John Zimmerman, the Tang Family Professor of Artificial Intelligence and Human-Computer Interaction in CMU’s Human-Computer Interaction Institute (HCII), said a surgeon may never feel the need to consult an AI, much less allow it to make a clinical decision for them. But an AI could help guide decisions if it were embedded in the decision-making routines the clinical team already uses, offering AI-generated predictions and evaluations as part of the overall mix of information.
Zimmerman and his team refer to this method as “Unremarkable AI.”
The idea is that AI should be unremarkable in the sense that you don’t have to think about it and it doesn’t get in the way. Electricity is completely unremarkable until you don’t have it.
John Zimmerman, Tang Family Professor of Artificial Intelligence and Human-Computer Interaction, Human-Computer Interaction Institute, CMU
Qian Yang, a PhD student in the HCII, will discuss how the Unremarkable AI method guided the design of a clinical decision support tool (DST) at CHI 2019, the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, May 4-9 in Glasgow, Scotland.
Yang, together with Zimmerman and Aaron Steinfeld, associate research professor in the HCII and the Robotics Institute, is working with biomedical scientists at Cornell University and CMU’s Language Technologies Institute on a DST to help doctors assess heart patients for treatment with a ventricular assist device (VAD). Although this implantable pump can assist diseased hearts in patients who are unable to receive heart transplants, many recipients die soon after the implant. The DST, still in development, uses machine learning to analyze thousands of past cases and estimate the probability that the implant will help an individual patient.
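As a rough illustration of that kind of estimate, the sketch below fits a classifier to synthetic "historical" cases and reports a probability of benefit for a new candidate. It is not the team’s actual model; the features, labels, and data are hypothetical stand-ins.

```python
# Minimal, hypothetical sketch (not the CMU/Cornell team's model):
# fit a classifier to synthetic historical VAD cases and report the
# estimated probability that an implant benefits a new candidate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Stand-in for thousands of past cases: rows are patients, columns are
# pre-implant measurements (all hypothetical); labels mark survival.
X_history = rng.normal(size=(5000, 4))
y_survived = (X_history @ np.array([0.8, 1.2, -0.9, 0.3])
              + rng.normal(size=5000)) > 0

model = LogisticRegression().fit(X_history, y_survived)

# The DST reports a probability, not a recommendation; the clinical team
# weighs it alongside everything else discussed in the evaluation meeting.
new_patient = rng.normal(size=(1, 4))
p_benefit = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of benefit: {p_benefit:.0%}")
```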
DSTs have been designed to assist in diagnosis or treatment planning for a variety of medical conditions and surgical procedures, but most never make the leap from the lab to clinical practice and end up unused.
“They all assume you know you need help,” Zimmerman said. DSTs usually face opposition from physicians, most of whom don’t think they need help or who see the tools as technology designed to replace them.
Yang applied the Unremarkable AI principles to design how the clinical team would interact with the DST for VADs. These teams include mid-level clinicians, such as social workers, nurse practitioners, and VAD coordinators, who use computers routinely, as well as surgeons and cardiologists, who favor their teammates’ advice over computational support.
Yang found that the natural time to introduce the DST’s prognostications is during multidisciplinary patient evaluation meetings. Although physicians make the final decision about when or whether to implant a VAD, the entire team typically attends these meetings, and computers are already in use.
Yang’s design automatically embeds the DST prognostications into the slides prepared for each patient. Steinfeld noted that in most cases the DST information will be unremarkable; for certain patients, however, or at critical points in a patient’s care, the DST might surface information that demands attention.
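A minimal sketch of that presentation logic, assuming a hypothetical attention threshold and slide-annotation format (not the study’s implementation): the estimate appears on every patient’s slide, but only outlying cases are flagged.

```python
# Illustrative sketch of the "unremarkable" presentation logic described
# above. The threshold and annotation format are hypothetical.
from dataclasses import dataclass


@dataclass
class SlideAnnotation:
    text: str
    needs_attention: bool


def annotate_slide(p_benefit: float,
                   attention_threshold: float = 0.4) -> SlideAnnotation:
    """Return the DST line to embed in a patient's evaluation slide."""
    flagged = p_benefit < attention_threshold
    text = f"DST: estimated {p_benefit:.0%} probability of benefit from VAD"
    if flagged:
        text += "  [review]"  # subtle marker; most slides carry none
    return SlideAnnotation(text, flagged)


# Most patients pass through unremarked; only outliers draw the eye.
for pid, p in [("patient-A", 0.82), ("patient-B", 0.31)]:
    print(pid, annotate_slide(p).text)
```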
The DST itself is still under development, but the researchers tested this interaction design at three hospitals that perform VAD surgery, presenting DST-enhanced slides for simulated patients.
“The mid-levels—the support staff—loved this,” Yang said, because the design elevated their input and helped them take part in the discussion more actively. Physicians were less enthusiastic, voicing doubts about DSTs and the belief that the interaction could not be fully assessed without a working system and real patients.
However, Yang noted that physicians did not display the defensiveness or fear of being replaced by technology usually associated with DSTs, and they acknowledged that the DST might inform their decisions.
“Prior systems were all about telling you what to do,” Zimmerman said. “We’re not replacing human judgment. We’re trying to give humans inhuman abilities.”
And to do that we need to maintain the human decision-making process.
Aaron Steinfeld, Associate Research Professor, HCII
This work was supported by the National Heart, Lung, and Blood Institute and the CMU Center for Machine Learning and Health.