AI has the potential to reveal biases in news reporting that would otherwise go unseen. At McGill University, researchers used an AI language model to generate news coverage of COVID-19, using the headlines of CBC articles as prompts.
They then compared the simulated coverage with the actual reporting from the same period and found that the real CBC coverage focused less on the medical emergency and more on geopolitics and personalities, and struck a more positive tone.
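The comparison step can be illustrated with a toy sketch. This is not the study's actual pipeline; the word lists and sample texts below are invented for illustration, standing in for model-generated and real coverage of the same headline. The idea is simply to quantify how much each text leans on disease-centered versus person-centered vocabulary.

```python
# Toy illustration (not the study's method): score the topical focus of a
# model-generated text vs. a real article on the same headline.
from collections import Counter
import re

# Hypothetical term lists; a real analysis would use far richer lexicons
# or topic models.
HEALTH_TERMS = {"virus", "cases", "hospital", "symptoms", "vaccine", "infection"}
PERSON_TERMS = {"minister", "premier", "official", "spokesperson"}

def topic_focus(text: str) -> dict:
    """Count health-centered vs. person-centered terms in a text."""
    words = Counter(re.findall(r"[a-z]+", text.lower()))
    return {
        "health": sum(words[w] for w in HEALTH_TERMS),
        "person": sum(words[w] for w in PERSON_TERMS),
    }

# Invented sample texts for the same headline.
generated = "The virus spread as hospital cases rose and infection rates climbed."
real = "The premier and the minister addressed reporters about new cases."

print(topic_focus(generated))  # {'health': 4, 'person': 0}
print(topic_focus(real))       # {'health': 1, 'person': 2}
```

In this toy example the "generated" text scores as disease-centered while the "real" text scores as person-centered, mirroring the kind of contrast the researchers report at scale.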
Reporting on real-world events requires complex choices, including decisions about which events and players take center stage. By comparing what was reported with what could have been reported, our study provides perspective on the editorial choices made by news agencies.
Andrew Piper, Professor, Department of Languages, Literatures, and Cultures, McGill University
According to the team, assessing such alternatives is crucial, given the close relationship between public opinion, media framing, and government policy.
The AI treated COVID-19 primarily as a health emergency and framed events in more biomedical terms, whereas the CBC coverage tended toward person-centered rather than disease-centered reporting. The CBC coverage was also more positive than expected for a major health crisis, producing a sort of rally-around-the-flag effect. This positivity works to downplay public fear.
Andrew Piper, Professor, Department of Languages, Literatures, and Cultures, McGill University
Exploring How Biases Play Out
While numerous studies attempt to understand the biases inherent in AI, there is also an opportunity to harness it as a tool to reveal the biases of human expression, the researchers say.
The goal is to help us see things we might otherwise miss.
Andrew Piper, Professor, Department of Languages, Literatures, and Cultures, McGill University
“We’re not suggesting that the AI itself is unbiased. But rather than eliminating bias, as many researchers try to do, we want to understand how and why the bias comes to be,” stated Sil Hamilton, a research assistant and student working under the supervision of Professor Piper.
Using AI to Understand the Past, and One Day to Anticipate the Future
For the team, this work is only the tip of the iceberg, setting the stage for new avenues of study in which AI could be used not only to examine past human behavior but also to anticipate future actions, for instance by predicting possible judicial or political outcomes.
At present, the research team is working on a project, led by Hamilton, that uses AI to model U.S. Supreme Court decision-making.
Hamilton stated, “Given past judicial behavior, how might justices respond to future pivotal cases or older cases that are being re-litigated? We hope new developments in AI can help.”