Artificial Intelligence Can Help Consumers Form Accurate News Assessments

Warnings about misinformation are now regularly posted on Twitter, Facebook, and other social media platforms, but not all of these cautions are created equal. New research from Rensselaer Polytechnic Institute shows that artificial intelligence can help consumers form accurate news assessments -- but only when a news story is first emerging.

These findings were recently published in Computers in Human Behavior Reports by an interdisciplinary team of Rensselaer researchers. They found that AI-driven interventions are generally ineffective when used to flag issues with stories on frequently covered topics about which people have established beliefs, such as climate change and vaccinations.

However, when a topic is so new that people have not had time to form an opinion, tailored AI-generated advice can lead readers to make better judgments regarding the legitimacy of news articles.

The guidance is most effective when it provides reasoning that aligns with a person's natural thought process, such as an evaluation of the accuracy of facts provided or the reliability of the news source.
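To make the idea of heuristic-tailored advice concrete, here is a minimal, hypothetical Python sketch (not the researchers' actual system): given an AI verdict on an article, it phrases the rationale to match either a source-reliability or a fact-accuracy heuristic. The names Assessment and tailored_advice, and the message wording, are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Assessment:
    verdict: str       # e.g. "likely unreliable" or "likely reliable"
    confidence: float  # model confidence in [0, 1]
    source: str        # publisher of the article

def tailored_advice(assessment: Assessment, heuristic: str) -> str:
    """Compose an advisory message whose rationale matches the reader's heuristic."""
    if heuristic == "source_reliability":
        rationale = (f"the publisher '{assessment.source}' has a weak track record "
                     f"for factual reporting")
    elif heuristic == "fact_accuracy":
        rationale = "several claims in the article could not be matched to verified facts"
    else:
        rationale = "the article shows patterns common in misleading stories"
    return (f"This story is {assessment.verdict} "
            f"(confidence {assessment.confidence:.0%}) because {rationale}.")

if __name__ == "__main__":
    example = Assessment(verdict="likely unreliable", confidence=0.87, source="example-news.com")
    print(tailored_advice(example, "source_reliability"))

The design point the sketch illustrates is the study's finding: the same verdict is more persuasive when its stated reason mirrors the evaluation the reader would naturally make, whether that is judging the source or checking the facts.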


"It's not enough to build a good tool that will accurately determine if a news story is fake," said Dorit Nevo, an associate professor in the Lally School of Management at Rensselaer and one of the lead authors of this paper. "People actually have to believe the explanation and advice the AI gives them, which is why we are looking at tailoring the advice to specific heuristics. If we can get to people early on when the story breaks and use specific rationales to explain why the AI is making the judgment, they're more likely to accept the advice."

This two-part study, which involved nearly 800 participants, began in late 2019. The nearly simultaneous onset of the COVID-19 pandemic offered the researchers an opportunity to collect real-time data on a major emerging news event.

"Our work with coronavirus news shows that these findings have real-life implications for practitioners," Nevo said. "If you want to stop fake news, start right away with messaging that is reasoned and direct. Don't wait for opinions to form."

This research paper, "Tailoring Heuristics and Timing AI Interventions for Supporting News Veracity Assessments," is an example of the New Polytechnic, Rensselaer's collaborative model for cross-disciplinary partnership in research and education.

In addition to Nevo, the team at Rensselaer included Lydia Manikonda, an assistant professor at the Lally School of Management; Sibel Adali, a professor and associate dean in the School of Science; and Clare Arrington, a doctoral student in computer science.

The other lead author was Benjamin D. Horne, a Rensselaer alumnus and assistant professor at the University of Tennessee.
