New research shows that AI can detect COVID-19 in lung ultrasound images, much as facial recognition software identifies faces.
The discoveries advance AI-powered medical diagnostics, bringing healthcare professionals closer to rapidly diagnosing patients with COVID-19 and other respiratory illnesses. Algorithms analyze ultrasound images to detect disease indicators, facilitating quicker diagnoses.
The results, published in Communications Medicine, represent the culmination of a project that began early in the pandemic, when medical professionals needed tools to quickly evaluate large numbers of patients in overcrowded emergency rooms.
We developed this automated detection tool to help doctors in emergency settings with high caseloads of patients who need to be diagnosed quickly and accurately, such as in the earlier stages of the pandemic. Potentially, we want to have wireless devices that patients can use at home to monitor the progression of COVID-19, too.
Muyinatu Bell, Associate Professor, Department of Electrical and Computer Engineering, Whiting School of Engineering, Johns Hopkins University
Beyond tracking COVID-19-related fluid buildup in patients' lungs, the tool could also be used to develop wearables for monitoring conditions such as congestive heart failure, according to co-author Tiffany Fong, an Assistant Professor of Emergency Medicine at Johns Hopkins Medicine.
What we are doing here with AI tools is the next big frontier for point of care. An ideal use case would be wearable ultrasound patches that monitor fluid buildup and let patients know when they need a medication adjustment or when they need to see a doctor.
Tiffany Fong, Assistant Professor and Study Co-Author, Emergency Medicine, Johns Hopkins Medicine
The AI detects B-lines, bright vertical abnormalities in lung ultrasound images that indicate inflammation in patients with pulmonary complications. The tool was trained on a combination of computer-generated images and real ultrasounds, some from patients treated at Johns Hopkins.
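To make the "bright vertical abnormality" idea concrete, here is a toy heuristic, not the authors' method, that flags image columns where most pixels are bright, roughly the signature of a B-line. The function name and thresholds are illustrative assumptions.

```python
import numpy as np

def flag_bline_columns(image, brightness_thresh=0.6, coverage_thresh=0.5):
    """Toy heuristic: flag columns where a large fraction of pixels are
    bright, roughly matching a B-line's vertical streak.
    `image` is a 2-D array of pixel intensities scaled to [0, 1]."""
    bright = image > brightness_thresh      # binary mask of bright pixels
    column_coverage = bright.mean(axis=0)   # fraction of bright pixels per column
    return np.where(column_coverage > coverage_thresh)[0]

# Synthetic example: dark frame with one bright vertical streak at column 5
img = np.zeros((64, 32))
img[:, 5] = 0.9
print(flag_bline_columns(img))  # -> [5]
```

A real system must cope with speckle noise, reverberation artifacts, and probe motion, which is why the researchers turned to a learned model rather than hand-tuned rules like these.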
Bell added, “We had to model the physics of ultrasound and acoustic wave propagation well enough to get believable simulated images. Then we had to take it a step further to train our computer models to use these simulated data to reliably interpret real scans from patients with affected lungs.”
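As a rough illustration of what simulated training data can look like, the sketch below generates a crude synthetic frame: speckle-like background noise plus an optional bright vertical streak standing in for a B-line. This is purely illustrative; the team's actual simulations model acoustic wave propagation, which this does not attempt.

```python
import numpy as np

def simulate_lung_ultrasound(shape=(64, 32), with_bline=True, seed=0):
    """Generate a crude synthetic 'ultrasound' frame: Rayleigh-distributed
    speckle-like noise, plus an optional bright vertical streak standing in
    for a B-line. Illustrative only -- real simulation models the physics of
    acoustic wave propagation, not just noise statistics."""
    rng = np.random.default_rng(seed)
    frame = rng.rayleigh(scale=0.2, size=shape)   # speckle-like background
    if with_bline:
        col = rng.integers(0, shape[1])           # random streak position
        frame[:, col] += 0.8                      # bright vertical artifact
    return np.clip(frame, 0.0, 1.0)

frame = simulate_lung_ultrasound()
print(frame.shape)  # (64, 32)
```

Generating labeled frames this way, positive with the streak, negative without, is one simple pattern for building a synthetic dataset when real patient images are scarce.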
According to Bell, early in the pandemic, researchers had difficulty using artificial intelligence to evaluate COVID-19 indicators in lung ultrasonography images due to a lack of patient data and a limited understanding of the disease’s physical manifestations.
Her group created software that can recognize abnormalities in ultrasound scans indicating the presence of COVID-19, trained on a combination of real and simulated data. The tool is a deep neural network, an artificial intelligence (AI) system built to mimic the brain's interconnected neurons, enabling it to recognize patterns, interpret speech, and perform other complex tasks.
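To show what "a network of artificial neurons" means in practice, here is a minimal two-layer network forward pass in plain NumPy. The layer sizes, random weights, and the idea of scoring a flattened 32x32 image patch are all illustrative assumptions, not details from the study, and a real model would be a trained convolutional network, not random weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Rectified linear unit: the standard hidden-layer nonlinearity."""
    return np.maximum(0.0, x)

def sigmoid(x):
    """Squash a score into (0, 1) so it reads as a probability."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: flattened 32x32 patch -> 64 hidden units -> 1 output
W1 = rng.normal(scale=0.01, size=(1024, 64))
b1 = np.zeros(64)
W2 = rng.normal(scale=0.01, size=(64, 1))
b2 = np.zeros(1)

def predict(patch):
    """Forward pass of a tiny two-layer network producing a
    'B-line present' probability for one flattened image patch."""
    h = relu(patch @ W1 + b1)       # hidden layer: weighted sum + nonlinearity
    return sigmoid(h @ W2 + b2)[0]  # output layer: probability in (0, 1)

p = predict(rng.normal(size=1024))
print(0.0 < p < 1.0)  # -> True
```

Training adjusts the weight matrices `W1` and `W2` so the output probability matches the labels, which is where the real and simulated ultrasound datasets come in.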
Early in the pandemic, we did not have enough ultrasound images of COVID-19 patients to develop and test our algorithms, and as a result, our deep neural networks never reached peak performance. Now, we are proving that with computer-generated datasets we still can achieve a high degree of accuracy in evaluating and detecting these COVID-19 features.
Lingyi Zhao, Study First Author and Postdoctoral Fellow, Novateur Research Solutions
Zhao developed the software while working as a postdoctoral fellow.
This study was funded by NIH Trailblazer Award Supplement R21EB025621-03S1.
Journal Reference:
Zhao, L., et al. (2024). Detection of COVID-19 features in lung ultrasound images using deep neural networks. Communications Medicine. doi.org/10.1038/s43856-024-00463-5