Researchers from the University of Texas at Austin developed an explainable artificial intelligence (XAI) model that predicts ICU patient discharge times while providing insights into the factors influencing these predictions, enhancing trust and usability among healthcare providers. The study was published in the journal Information Systems Research.

During the peak of the COVID-19 pandemic, hospitals often experienced a shortage of critical care unit beds. However, ICUs struggled to maintain an adequate bed supply even before the pandemic: in the aging US population, ICU stays account for 11% of all hospital stays.
According to Indranil Bardhan, a Professor of Information, Risk, and Operations Management at Texas McCombs and the Charles and Elizabeth Prothro Regents Chair in Health Care Management, artificial intelligence presents a potential remedy. Hospitals can better manage their beds and, ideally, reduce costs by using AI models to forecast how long patients will stay in the intensive care unit.
According to Bardhan, while AI is effective at forecasting the length of stay, it struggles to explain why; because of this, physicians are less likely to accept and trust it.
People were mostly focused on the accuracy of prediction, and that is an important thing. The prediction is good, but can you explain your prediction?
Indranil Bardhan, Professor, University of Texas at Austin
Explaining AI Results
In the new study, Bardhan uses a technique known as explainable artificial intelligence (XAI) to make the AI's outputs easier for intensive care unit physicians to comprehend and use.
In collaboration with Shichang Zhang of Harvard University, Ying Ding of UT's School of Information, and McCombs doctoral student Tianjian Guo, Bardhan created a model and trained it on a dataset of 22,243 medical records spanning 2001 to 2012.
At admission, the model analyzes 47 distinct patient characteristics, including age, gender, vital signs, prescription drugs, and diagnosis. It then generates graphs that display the likelihood that a patient will be discharged within seven days, along with which characteristics affect that outcome most and how they interact.
In one example, the model estimates an 8.5% chance of discharge within seven days and indicates that a respiratory-system diagnosis is the primary factor, with age and medication as secondary contributors.
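The study's actual model is a graph-learning approach, but the general idea of pairing a probability with per-feature contributions can be sketched with a much simpler stand-in. In the sketch below, the feature names, weights, and patient values are all invented for illustration; for a linear (logistic) model, each feature's contribution to the log-odds relative to an average patient is exactly weight × (value − mean).

```python
import numpy as np

# Hypothetical features and logistic-regression weights (illustrative only;
# the study's real model uses graph learning over 47 patient characteristics).
features = ["respiratory_dx", "age_years", "num_medications"]
weights = np.array([-1.8, -0.02, -0.05])  # negative -> lowers discharge odds
bias = 1.0
means = np.array([0.2, 62.0, 8.0])        # assumed population-average values

def predict_and_explain(x):
    """Return P(discharge within 7 days) and per-feature contributions.

    For a linear model, weight * (value - mean) is each feature's exact
    contribution to the log-odds, relative to an average patient.
    """
    x = np.asarray(x, dtype=float)
    logit = bias + weights @ x
    prob = 1.0 / (1.0 + np.exp(-logit))
    contributions = dict(zip(features, weights * (x - means)))
    return prob, contributions

# Example patient: respiratory diagnosis, age 75, on 12 medications.
prob, contrib = predict_and_explain([1.0, 75.0, 12.0])
print(f"P(discharge <= 7 days) = {prob:.1%}")
for name, c in sorted(contrib.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {c:+.2f} log-odds")
```

With these made-up weights, the respiratory diagnosis dominates the explanation, mirroring the kind of ranked attribution the article describes; a graph-based model adds interactions between features on top of such per-feature scores.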
When the researchers compared their model to other XAI models, they discovered that while its explanations were more thorough, its predictions were equally accurate.
Useful Beyond ICUs
To gauge how practical the model might be, the team asked six intensive care physicians in the Austin area to read and assess samples of its explanations. Four of the six said the model could help them better plan patient scheduling by improving their staffing and resource management.
Bardhan points out that the model's main drawback is the data's age. In 2014, the medical coding system used by the industry changed from ICD-9-CM to ICD-10-CM, which added a great deal of detail to diagnosis coding and classification.
If we were able to get access to more recent data, we would have loved to extend our models using that data.
Indranil Bardhan, Professor, University of Texas at Austin
The model need not be limited to adult ICUs, however.
You could extend it to pediatric ICUs and neonatal ICUs. You could use this model for emergency room settings. Even if you are talking about a regular hospital unit, if you want to know how much or how long a patient is likely to need a hospital bed, we can easily extend our model to that setting.
Indranil Bardhan, Professor, University of Texas at Austin
Journal Reference:
Guo, T., et al. (2024) An Explainable Artificial Intelligence Approach Using Graph Learning to Predict Intensive Care Unit Length of Stay. Information Systems Research. doi.org/10.1287/isre.2023.0029.