
Large Language Models Improve Efficiency of Hospital Quality Measures

Researchers from the University of California San Diego School of Medicine have shown that large language models could streamline hospital quality reporting, making it faster and more efficient while maintaining high accuracy, a change that may improve the overall delivery of health care.

Image: Hospital doctor using a spreadsheet for billing codes. Image Credit: Andrey_Popov/Shutterstock.com

The study revealed that a system utilizing large language models (LLMs) achieved 90% agreement with manual hospital quality reporting methods. This result could pave the way for more efficient, reliable health care reporting processes.

The research, conducted in collaboration with the Joan and Irwin Jacobs Center for Health Innovation at UC San Diego Health, found that LLMs can accurately interpret complex quality measures, particularly the Centers for Medicare & Medicaid Services (CMS) SEP-1 measure, which addresses severe sepsis and septic shock.

The integration of LLMs into hospital workflows holds the promise of transforming health care delivery by making the process more real-time, which can enhance personalized care and improve patient access to quality data. As we advance this research, we envision a future where quality reporting is not just efficient but also improves the overall patient experience.

Aaron Boussina, Postdoctoral Scholar and Study Lead Author, School of Medicine, University of California San Diego

Currently, abstracting data for the SEP-1 measure involves a labor-intensive, 63-step process that requires extensive chart reviews spanning several weeks. The study demonstrated that LLMs could dramatically reduce both the time and resources required by analyzing patient charts and generating the needed determinations almost instantly.

The researchers believe this technology could significantly address the complexities of quality measurement and help build a more efficient, responsive health care system.

We remain diligent on our path to leverage technologies to help reduce the administrative burden of health care and, in turn, enable our quality improvement specialists to spend more time supporting the exceptional care our medical teams provide.

Chad VanDenBerg, Study Co-Author, Chief Quality and Patient Safety Officer, University of California San Diego

The study also found that LLMs can improve efficiency by correcting errors, speeding up processes, reducing administrative costs through automation, and enabling near-real-time quality assessments. The scalability of this approach across different health care settings is another promising benefit.

The research team plans to validate these results further and to work toward integrating LLM-based reporting into ongoing efforts to improve health care reporting and data reliability.

The study's co-authors include UC San Diego researchers Shamim Nemati, Rishivardhan Krishnamoorthy, Kimberly Quintero, and several others. The research was funded by the National Institute of Allergy and Infectious Diseases, the National Library of Medicine, the National Institute of General Medical Sciences, and the Jacobs Center for Health Innovation.

Journal Reference:

Boussina, A., et al. (2024). Large Language Models for More Efficient Reporting of Hospital Quality Measures. NEJM AI. https://doi.org/10.1056/aics2400420

