Medical imaging plays an important role in diagnosing and treating medical conditions. However, demand for these imaging examinations is growing faster than the supply of trained radiologists. One potential solution to this problem is the development of artificial intelligence (AI) tools that can automate the interpretation of medical images, thus reducing the workload on radiologists.
While AI has shown great promise in this area, it is essential to thoroughly evaluate its performance before implementing it in a clinical setting. A study recently published in the journal Radiology addresses this issue by showing that an AI tool can accurately identify normal and abnormal chest X-rays.
Artificial Intelligence for Interpreting Chest Radiographs
Chest radiography is a common imaging technique used to diagnose a variety of medical conditions. Due to the high volume of chest radiographs produced daily, there is a growing need for efficient and accurate ways to interpret these images. Deep learning-based AI algorithms have recently drawn attention as a potentially useful tool for classifying radiological images.
Specifically, researchers have been working on developing AI models that can differentiate between normal and abnormal chest X-rays.
These models have the potential to improve the triaging of radiographs and reduce the time it takes to interpret them. In addition, there has been growing interest in developing an AI tool that can autonomously report normal chest radiographs without human intervention. Such a tool could help alleviate the growing shortage of radiologists by automating the interpretation of normal chest radiographs.
Recent feasibility studies have yielded encouraging findings, with an AI tool accurately ruling out abnormalities with high confidence in about 15% of the chest radiographs produced.
Limitations and Considerations for AI Interpretation of Chest X-rays
Despite the great potential of employing AI to interpret chest X-rays, several limitations must be considered. This area of research is still relatively new, and existing models have not been widely adopted in clinical settings.
Moreover, the performance of AI models has yet to be well characterized in clinically relevant patient groups, with direct comparison to existing radiology reporting standards.
The use of AI tools for identifying normal and abnormal chest X-rays therefore requires further investigation; the reliability and performance of these models must be thoroughly evaluated in real-world clinical settings before they are adopted into routine practice.
Highlights of the Current Study
To determine the reliability of using a commercial AI tool to identify normal and abnormal chest X-rays, the researchers conducted a retrospective, multi-center study. They examined the chest X-rays of 1,529 patients from four hospitals in Denmark's capital region. The radiographs came from emergency department patients, in-hospital patients, and outpatients.
The AI tool autonomously classified each chest radiograph as either "high-confidence normal" or "not high-confidence normal," and these outputs were compared with a reference standard provided by two board-certified thoracic radiologists. In cases of disagreement, a third radiologist was consulted, and all three radiologists were blinded to the AI results.
The researchers sought to determine the proportion of chest radiographs that the AI tool could report autonomously, as well as its sensitivity in detecting abnormal chest radiographs. They also compared the performance of the AI tool with that of the clinical radiology report.
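As a rough illustration of the kind of comparison involved, the minimal sketch below computes a tool's sensitivity for abnormal radiographs and the fraction of examinations it could report autonomously from paired AI and reference-standard labels. This is not the study's code; the function and the toy labels are hypothetical and assume simple binary labels per radiograph.

```python
# A minimal sketch, assuming paired binary labels per radiograph.
# The function and the toy labels below are hypothetical illustrations,
# not the study's code or data.

def evaluate(ai_high_conf_normal, reference_abnormal):
    """ai_high_conf_normal[i] is True when the AI calls radiograph i
    'high-confidence normal'; reference_abnormal[i] is True when the
    radiologist reference standard calls it abnormal."""
    total_abnormal = sum(reference_abnormal)
    # Sensitivity: share of truly abnormal radiographs the AI did NOT
    # label as high-confidence normal (i.e., did not miss).
    flagged = sum(ref and not ai
                  for ai, ref in zip(ai_high_conf_normal, reference_abnormal))
    sensitivity = flagged / total_abnormal
    # Share of all radiographs the tool could report autonomously.
    autonomous_fraction = sum(ai_high_conf_normal) / len(ai_high_conf_normal)
    return sensitivity, autonomous_fraction

# Toy example: five radiographs, two abnormal by the reference standard.
ai_labels = [True, False, False, True, False]
ref_labels = [False, True, True, False, False]
print(evaluate(ai_labels, ref_labels))  # (1.0, 0.4) for these toy labels
```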
Important Findings and Prospects of the Research
The researchers found that the commercially available AI tool was highly accurate in detecting abnormalities on chest X-rays. The tool could potentially automate the reporting of 28% of all normal posteroanterior chest radiographs, translating to 7.8% of the entire chest radiograph production.
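As a quick sanity check on how those two percentages relate, using only the figures quoted above rather than any additional study data, automating 28% of normal posteroanterior radiographs while covering 7.8% of the total production implies that normal posteroanterior films make up roughly 28% of all chest radiographs produced.

```python
# Assumed inputs taken from the figures quoted above, not additional study data.
automated_share_of_normal_pa = 0.28   # 28% of normal PA radiographs reportable by the AI
automated_share_of_total = 0.078      # 7.8% of the entire radiograph production

# Implied share of the total production that consists of normal PA radiographs.
implied_normal_pa_share = automated_share_of_total / automated_share_of_normal_pa
print(f"{implied_normal_pa_share:.1%}")  # ≈ 27.9%
```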
The sensitivity of the AI tool was higher than 99%, indicating that it missed very few abnormal radiographs. Interestingly, the AI tool's sensitivity was higher than that of the clinical reports produced by board-certified radiologists.
It was particularly adept at identifying normal X-rays in the outpatient group, suggesting that it would be especially useful in outpatient settings, where normal chest X-rays are prevalent.
In the future, larger prospective studies could investigate the implementation of the AI tool, with autonomously reported chest X-rays still reviewed by radiologists. These studies could help establish the clinical value of the AI tool and its potential impact on the field of radiology.
“Chest X-rays are one of the most common imaging examinations performed worldwide,” said Dr. Plesner, a co-author of the study. “Even a small percentage of automatization can lead to saved time for radiologists, which they can prioritize on more complex matters.”
Reference
Plesner, L. L. et al. (2023). Autonomous Chest Radiograph Reporting Using AI: Estimation of Clinical Impact. Radiology. doi.org/10.1148/radiol.222268