May 5 2020
With the help of artificial intelligence (AI), researchers at Northwestern University are expediting the search for COVID-19 treatments and vaccines. The new AI-powered tool helps prioritize resources for the most promising studies, while setting aside studies that are unlikely to yield benefits.
Amid the COVID-19 pandemic situation, scientific research is ongoing at a remarkable rate. The U.S. Department of Health and Human Services and the Food and Drug Administration have declared plans to expedite clinical trials, with hundreds of researchers examining potential treatments and vaccines.
However, the pressing question is: which studies are most likely to produce real, urgently needed solutions?
To answer such questions, researchers have turned to the Defense Advanced Research Projects Agency’s Systematizing Confidence in Open Research and Evidence (DARPA SCORE) program. The program relies on scientific experts to review and rate submitted research studies on the basis of how replicable they are. This process requires, on average, about 314 days—a long wait amid a worldwide pandemic.
According to researchers, the machine model is as precise as the human scoring system at such predictions, and it can scale up to review more and more papers in a fraction of the time—minutes and not months.
The standard process is too expensive, both financially and in terms of opportunity costs. First, it takes too long to move on to the second phase of testing and second, when experts are spending their time reviewing other people’s work, it means they are not in the lab conducting their own research.
Brian Uzzi, Study Lead Author, Northwestern University
With the newly developed AI tool, Uzzi and his colleagues at the Kellogg School of Management replace the human-scoring method, which would enable policymakers and the scientific community to decide more quickly how to prioritize funding and time for the studies most likely to succeed.
Uzzi, the Richard L. Thomas Professor of Leadership at Kellogg and co-director of the Northwestern Institute on Complex Systems, is the corresponding author of the paper titled “Estimating the Deep Replicability of Scientific Findings Using Human and Artificial Intelligence,” which was recently published in PNAS.
In the midst of a public health crisis, it is essential that we focus our efforts on the most promising research. This is important not only to save lives, but also to quickly tamp down the misinformation that results from poorly conducted research.
Brian Uzzi, Study Lead Author, Northwestern University
The Northwestern team created an algorithm to estimate which studies are most likely to be replicable. Replication, meaning that a study’s results can be reproduced a second time with a new test population, is a crucial signal that the study’s conclusions are valid.
According to the researchers, the machine model’s prediction of replicability may be more precise than the conventional human-scoring prediction because it considers more of the study’s narrative, whereas expert reviewers focus mainly on the strength of a paper’s statistics.
“There is a lot of valuable information in how study authors explain their results,” added Uzzi. “The words they use reveal their own confidence in their findings, but it is hard for the average human to detect that.”
Because the algorithm analyzes the words of thousands of papers, it identifies word-choice patterns that are invisible to human readers. It draws its predictions from a considerably wider schema, making it an excellent partner for human reviewers, stated Uzzi.
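The idea that word choice signals an author’s confidence can be illustrated with a toy sketch. This is not the authors’ model, which was trained on the full text of thousands of papers; the word lists and scoring rule below are invented for illustration only.

```python
# Toy illustration (NOT the Northwestern model): scoring a paper's
# narrative by word choice. The word lists here are invented examples
# of "confident" versus "hedged" language.

CONFIDENT = {"demonstrate", "confirm", "robust", "consistent", "strong"}
HEDGED = {"may", "might", "suggest", "possibly", "appear", "tentative"}

def narrative_confidence(text: str) -> float:
    """Return a score in [0, 1]: the share of signal words that are confident."""
    words = [w.strip(".,;").lower() for w in text.split()]
    confident = sum(w in CONFIDENT for w in words)
    hedged = sum(w in HEDGED for w in words)
    total = confident + hedged
    return 0.5 if total == 0 else confident / total

abstract = "The results demonstrate a robust and consistent effect."
print(narrative_confidence(abstract))  # higher score = more confident wording
```

A real system would learn such patterns from labeled replication data rather than use hand-picked word lists, but the sketch shows how narrative text can be turned into a numeric signal.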
The model developed by the researchers can be used immediately to examine COVID-related research papers and quickly identify the ones that seem most promising.
This tool is particularly useful in this crisis situation where we can’t act fast enough. It can give us an accurate estimate of what’s going to work and not work very quickly. We’re behind the ball, and this can help us catch up.
Brian Uzzi, Study Lead Author, Northwestern University
The model itself exhibits accuracy comparable to the DARPA SCORE method. Taken together, the combined human-machine method predicts which results will be replicable with even higher precision than either method on its own.
“This tool will help us conduct the business of science with greater accuracy and efficiency,” noted Uzzi. “Now more than ever, it’s essential for the research community to operate lean, focusing only on those studies which hold real promise.”
Journal Reference:
Yang, Y., et al. (2020) Estimating the deep replicability of scientific findings using human and artificial intelligence. PNAS. https://doi.org/10.1073/pnas.1909046117.