
Study Emphasizes the Need for Public Education on the Effect of AI Algorithms

A new set of experiments shows that artificial intelligence (AI) algorithms can influence people's choices of potential romantic partners or fictitious political candidates, depending on whether the algorithm's suggestions were explicit or hidden.

Potential date? Image Credit: Karras, T., Laine, S., & Aila, T. (2018). A Style-Based Generator Architecture for Generative Adversarial Networks. Retrieved from http://arxiv.org/abs/1812.04948.

Ujué Agudo and Helena Matute from Universidad de Deusto in Bilbao, Spain, describe the results of their study in the open-access journal PLOS ONE on April 21st, 2021.

From Facebook feeds to Google search results, people encounter AI algorithms on a daily basis. Private companies conduct extensive research on their users' data, gaining insights into human behavior that are not publicly available. Academic social science research trails behind these private studies, and public awareness of how AI algorithms may shape people's choices is lacking.

To take a fresh look at the question, Agudo and Matute carried out a series of experiments investigating the effects of AI algorithms in different contexts. They recruited participants to interact with algorithms that presented photos of online dating candidates or fictitious political candidates, and asked the participants to indicate whom they would message or vote for.

The algorithms prioritized some candidates over others, either explicitly (for example, labeling them "90% compatibility") or covertly, for example, by displaying their photos more frequently than those of other candidates.
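The two manipulation styles described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the general idea, not the study's actual implementation: the candidate names, compatibility labels, and exposure weights below are invented for the example.

```python
import random

def explicit_suggestion(candidates, favored):
    """Explicit manipulation: attach a visible compatibility label,
    higher for the algorithm's favored candidate."""
    return {c: ("90% compatibility" if c == favored else "30% compatibility")
            for c in candidates}

def covert_exposure(candidates, favored, n_trials=100, seed=0):
    """Covert manipulation: show the favored candidate's photo more
    often than the others', without telling the participant why."""
    rng = random.Random(seed)
    weights = [3 if c == favored else 1 for c in candidates]
    shown = rng.choices(candidates, weights=weights, k=n_trials)
    return {c: shown.count(c) for c in candidates}

candidates = ["A", "B", "C"]
print(explicit_suggestion(candidates, favored="B"))
print(covert_exposure(candidates, favored="B"))
```

In the explicit condition the participant can see that the algorithm is recommending a candidate; in the covert condition the only cue is repeated exposure.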

On the whole, the experiments demonstrated that the algorithms had a considerable impact on participants' choices of whom to message or vote for. For political decisions, explicit manipulation significantly influenced choices, whereas covert manipulation was not effective. For dating decisions, the opposite pattern was observed.

The team speculates that these findings may reflect people's preference for explicit, human-style advice on subjective matters such as dating, whereas they may prefer algorithmic advice on rational political decisions.

In light of the study's results, the researchers lend their support to initiatives aimed at promoting the trustworthiness of AI, such as the European Commission's Ethics Guidelines for Trustworthy AI and DARPA's explainable AI (XAI) program. Yet, they warn that more open-source research is required to understand human vulnerability to algorithms.

At the same time, the team calls for measures to educate the public on the threats of blind trust in suggestions from algorithms. Moreover, they stress the need for discussions on ownership of the data that powers such algorithms.

The authors added that “If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing actually customized profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm such as those with which people interact in their daily lives should certainly be able to exert a much stronger influence.”

The study received financial support from the Grant PSI2016-78818-R from Agencia Estatal de Investigación of the Spanish Government, and Grant IT955-16 from the Basque Government, both awarded to Helena Matute.

Journal Reference:

Agudo, U., & Matute, H. (2021). The influence of algorithms on political and dating decisions. PLOS ONE. https://doi.org/10.1371/journal.pone.0249454
