
AI Training Method Mirrors Physician Education for Image Analysis

Mark Yatskar, Chris Callison-Burch, and Yue Yang have created neural networks for medical image recognition by emulating the training pathways of human physicians.


Yue Yang, a doctoral student in Computer and Information Science, is developing new ways to train AI to analyze medical images, by emulating the training pathway of human physicians. Image Credit: Sylvia Zhang

In a new study to be presented at NeurIPS 2024, Mark Yatskar, Assistant Professor of Computer and Information Science (CIS), Chris Callison-Burch, Professor of CIS, and Yue Yang, a doctoral student advised by Callison-Burch and Yatskar, present a novel approach to building neural networks for medical image recognition: simulating the training pathway of human physicians.

Human radiologists analyze scans through the lens of decades of training. The path to a physician interpreting an X-ray entails thousands of hours of academic and practical education, from studying for licensing exams to spending years as a resident.

The training pathway for artificial intelligence (AI) to interpret medical images is currently much simpler: Show the AI medical images labeled with features of interest, such as cancerous lesions, in large enough quantities for the system to identify patterns that allow it to “see” those features in unlabeled images.
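For illustration, a minimal sketch of that conventional pipeline appears below: fine-tuning a pretrained image classifier on a folder of labeled scans. The folder layout and label names are hypothetical, not from the study.

```python
# Minimal sketch of the conventional pipeline: fine-tune a pretrained image
# classifier on labeled medical images. Paths and labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects folders like xrays/train/lesion and xrays/train/normal (hypothetical).
train_set = datasets.ImageFolder("xrays/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Nothing in this loop tells the model which image patterns are medically meaningful; it latches onto whatever correlations separate the labels.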

Despite more than 14,000 scholarly articles on AI and radiology published in the past 10 years, the results have been mediocre. In 2018, for instance, Stanford researchers discovered that an AI they had trained to detect skin lesions was incorrectly flagging images that contained rulers: because most images of malignant lesions also included rulers, the model had learned to associate rulers with malignancy.

Neural networks easily overfit on spurious correlations. Instead of how a human makes the decisions, it will take shortcuts.

Mark Yatskar, Assistant Professor, Computer and Information Science, University of Pennsylvania

He added, “Generally, with AI systems, the procedure is to throw a lot of data at the AI system, and it figures it out. This is actually very unlike how humans learn — a physician has a multi-step process for their education.”

The team’s new technique effectively sends AI to medical school by providing it with a defined body of medical knowledge drawn from textbooks, PubMed (the National Library of Medicine's academic database), and StatPearls, an online provider of practice exam questions for medical practitioners.

Yatskar further added, “Doctors spend years in medical school learning from textbooks and in classrooms before they begin their clinical training in earnest. We are trying to mirror that process.”
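One plausible way to distill such document collections into usable priors is to have a large language model extract diagnostic concepts from passages. The sketch below is an illustration under that assumption, not the paper's actual pipeline; the model name and prompt are placeholders.

```python
# Hedged sketch: use an LLM to extract readable diagnostic concepts from a
# medical document. Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_concepts(document_text: str) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Based on the passage below, list the visual findings a "
                "radiologist would check for on a chest X-ray, one per line.\n\n"
                + document_text
            ),
        }],
    )
    return response.choices[0].message.content.splitlines()
```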

The new approach, known as Knowledge-enhanced Bottlenecks (KnoBo), requires AI to make decisions based on established medical knowledge.

When reading an X-ray, medical students and doctors ask: Is the lung clear? Is the heart a normal size? The model will rely on similar factors to the ones humans use when making a decision.

Yue Yang, Doctoral Student, University of Pennsylvania
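In spirit, this design resembles a concept bottleneck: an image is first scored against human-readable medical concepts, and a simple, inspectable classifier decides from those scores alone. Below is a minimal sketch of that structure; the concept list and generic encoder are illustrative assumptions, not the paper's actual components.

```python
# Sketch of a concept-bottleneck classifier in the spirit of KnoBo.
# Concept names and the image encoder are illustrative assumptions.
import torch
import torch.nn as nn

CONCEPTS = ["lung fields are clear", "opacity in the lung",
            "heart size is normal", "pleural effusion present"]

class ConceptBottleneck(nn.Module):
    def __init__(self, image_encoder, embed_dim, num_classes):
        super().__init__()
        self.encoder = image_encoder                      # image -> embedding
        self.concept_head = nn.Linear(embed_dim, len(CONCEPTS))
        self.classifier = nn.Linear(len(CONCEPTS), num_classes)

    def forward(self, images):
        features = self.encoder(images)
        concepts = torch.sigmoid(self.concept_head(features))  # one score per concept
        return self.classifier(concepts), concepts  # decision uses concepts only
```

Because the final decision is a linear function of named concept scores, a clinician can read off which findings drove a given prediction.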

Models trained with KnoBo are ultimately not only more interpretable—clinicians can comprehend the reasoning behind the model's decisions—but also more accurate than state-of-the-art models at tasks like identifying COVID patients based on lung X-rays.

Yang stated, “You will know why the system predicts this X-ray is a COVID patient — because it has opacity in the lung.”

Models trained with KnoBo are also more robust and can deal with some of the complexities of real-world data. One of the most valuable characteristics of human doctors is their ability to transfer their skills to new settings, including different hospitals and patient populations. In contrast, AI systems trained on a specific set of patients from a specific hospital rarely perform well in other contexts.

To evaluate KnoBo’s ability to assist models in focusing on salient information, the researchers tested a variety of neural networks on “confounded” data sets. They essentially trained the models on one set of patients, where all sick patients were white and all healthy patients were black, and then tested the models on patients with the opposite characteristics.
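A sketch of how such a confounded evaluation might be constructed follows; the record fields are hypothetical, and this is an illustration of the idea rather than the researchers' actual data pipeline.

```python
# Illustrative confounded split: in training, the label is perfectly
# correlated with a spurious attribute; in testing, the correlation is
# reversed. Field names are hypothetical.
def confounded_split(records):
    """records: dicts like {"image": ..., "label": "sick" | "healthy", "race": ...}"""
    train, test = [], []
    for r in records:
        spurious = (r["label"] == "sick" and r["race"] == "white") or \
                   (r["label"] == "healthy" and r["race"] == "black")
        (train if spurious else test).append(r)
    return train, test
```

A model that shortcuts on the spurious attribute will look accurate on the training half and fail badly on the test half; a model constrained to reason over medical concepts should degrade far less.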

The previous methods fail catastrophically. Using our way, we constrain the model to reasoning over those knowledge priors we learn from medical documents.

Yue Yang, Doctoral Student, University of Pennsylvania

Even on confounded data, KnoBo-trained models outperformed neural networks fine-tuned on medical images by an average of 32.4%.

The researchers hope their work will facilitate the safe use of AI in medicine. The Association of American Medical Colleges predicts a shortage of 80,000 physicians in the United States alone by 2036.

Yatskar added, “You could really make an impact in terms of getting people help that otherwise they couldn’t get because there aren’t people appropriately qualified to give that help.”

The study's additional co-authors are Michael S. Yao and Professor James C. Gee of Penn Medicine, Yifan Wu of Meta AI, Yufei Wang of Penn Engineering, and Mona Gandhi of The Ohio State University.

This research was conducted at the University of Pennsylvania’s School of Engineering and Applied Science and was partially funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under the HIATUS Program contract #2022-22072200005.
