Reviewed by Lexie Corner, Feb 18, 2025
A recent study from the University of South Australia, published in Frontiers in Artificial Intelligence, examined public trust in artificial intelligence (AI) across different decision-making contexts. The findings indicate that people are more likely to trust AI in low-stakes scenarios, such as music recommendations, but are less inclined to do so in high-stakes situations, such as medical decision-making.
AI systems are widely used to personalize content, including social media feeds and streaming service recommendations, by analyzing user behavior and preferences. While AI-generated recommendations in everyday contexts are generally accepted, there is uncertainty about AI's role in critical decision-making, such as hiring or medical diagnostics.
The study found that individuals with limited knowledge of AI or statistics showed similar levels of trust in AI for both simple and complex decisions. Researchers analyzed responses from approximately 2,000 participants across 20 countries and found that statistical literacy influences trust in AI. Individuals who understand that AI predictions are based on pattern recognition—while also recognizing the associated risks and biases—were less skeptical of AI in low-stakes applications but more cautious in high-stakes scenarios.
The study also identified demographic variations in AI skepticism. Participants from highly industrialized countries, including the US, UK, and Japan, as well as older individuals and men, exhibited greater skepticism toward AI-driven decision-making.
AI adoption is accelerating, with 72% of organizations now integrating AI into their operations.
Dr. Fernando Marmolejo-Ramos, lead author and expert in artificial and human cognition, says that smart technologies are being used to delegate decision-making faster than society can effectively incorporate them.
Algorithms are becoming increasingly influential in our lives, impacting everything from minor choices about music or food, to major decisions about finances, healthcare, and even justice. But the use of algorithms to help make decisions implies that there should be some confidence in their reliability. That’s why it’s so important to understand what influences people’s trust in algorithmic decision-making.
Dr. Fernando Marmolejo-Ramos, Adjunct Research Associate, University of South Australia
“Our research found that in low-stakes scenarios, such as restaurant recommendations or music selection, people with higher levels of statistical literacy were more likely to trust algorithms. Yet, when the stakes were high, for things like health or employment, the opposite was true; those with better statistical understanding were less likely to place their faith in algorithms,” Dr. Marmolejo-Ramos added.
According to Dr. Florence Gabriel of UniSA, a concerted effort should be made to increase general public statistical and AI literacy so that individuals can more accurately choose when to trust algorithmic choices.
An AI-generated algorithm is only as good as the data and coding that it's based on. We only need to look at the recent banning of DeepSeek to grasp how algorithms can produce biased or risky outputs depending on the content they were built upon. On the flip side, when an algorithm has been developed through a trusted and transparent source, such as the custom-built EdChat chatbot for South Australian schools, it's more easily trusted.
Dr. Florence Gabriel, Enterprise Fellow, University of South Australia
“Learning these distinctions is important. People need to know more about how algorithms work. People care about what the algorithm does and how it affects them, so we need clear, jargon-free explanations that align with the user’s concerns and context. That way, we can help people to responsibly engage with AI,” Dr. Gabriel added.
Journal Reference:
Marmolejo-Ramos, F., et al. (2025). Factors influencing trust in algorithmic decision-making: an indirect scenario-based experiment. Frontiers in Artificial Intelligence. doi.org/10.3389/frai.2024.1465605