Oct 1 2020
Smart algorithms are used to compose music, write poems, and create paintings.
A study by an international research team from the Massachusetts Institute of Technology (MIT) and the Center for Humans and Machines at the Max Planck Institute for Human Development shows that whether individuals perceive artificial intelligence (AI) as the ingenious creator of art or merely as another tool used by artists depends on how information about AI art is presented. The results were published in the journal iScience.
In October 2018, a work of art titled Edmond de Belamy, which was produced using a smart algorithm, was auctioned at Christie’s Auction House for 432,500 USD.
Christie’s auction advertisement claimed that the portrait was produced by AI, and the media frequently described it as the first work of art not produced by a human but created autonomously by a machine. The proceeds of the auction went to the French artists’ collective Obvious, not to the machine.
This collective had fed an algorithm images of real paintings created by human painters and trained it to produce images on its own.
They then chose one of the resulting images, printed it, gave it a title, and marketed it. The programmers who actually designed the underlying algorithms and artificial neural networks, however, were not mentioned, and they received none of the proceeds from the sale of the painting.
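The article does not go into the technical details, but the portrait was reportedly produced with a generative adversarial network (GAN): one network learns to generate images while a second learns to tell them apart from real paintings. What follows is a minimal, illustrative sketch of that general idea in PyTorch, not the collective’s actual code; every name in it (generator, discriminator, latent_dim, train_step) is hypothetical.

    # Minimal, illustrative GAN training step (hypothetical sketch, not Obvious's code).
    import torch
    import torch.nn as nn

    latent_dim = 100          # size of the random noise vector fed to the generator
    image_size = 64 * 64 * 3  # flattened 64x64 RGB painting

    # Generator: maps random noise to a (flattened) image.
    generator = nn.Sequential(
        nn.Linear(latent_dim, 512), nn.ReLU(),
        nn.Linear(512, image_size), nn.Tanh(),
    )

    # Discriminator: estimates whether an image is a real painting or a generated one.
    discriminator = nn.Sequential(
        nn.Linear(image_size, 512), nn.LeakyReLU(0.2),
        nn.Linear(512, 1), nn.Sigmoid(),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    def train_step(real_paintings):
        """One adversarial update on a batch of real paintings (flattened tensors)."""
        batch = real_paintings.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1) Train the discriminator to distinguish real paintings from generated ones.
        noise = torch.randn(batch, latent_dim)
        fake_paintings = generator(noise).detach()
        d_loss = (loss_fn(discriminator(real_paintings), real_labels)
                  + loss_fn(discriminator(fake_paintings), fake_labels))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # 2) Train the generator to fool the discriminator.
        noise = torch.randn(batch, latent_dim)
        g_loss = loss_fn(discriminator(generator(noise)), real_labels)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()

After enough passes over a collection of real paintings, the trained generator can be sampled with fresh random noise to produce new images, from which a person then selects, prints, and titles a piece, as described above.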
Many people are involved in AI art: artists, curators and programmers alike. At the same time, there is a tendency - especially in the media - to endow AI with humanlike characteristics. Depending on the reports you read, creative AI autonomously creates ingenious works of art. We wanted to know whether there is a connection between this humanization of AI and the question of who gets credit for AI art.
Ziv Epstein, Study First Author and PhD Student, MIT Media Lab
To find out, the scientists informed nearly 600 participants about how AI art is made and asked who should be recognized for a given work of AI art. They also measured the degree to which each participant humanized AI. The individual answers differed considerably.
On average, however, individuals who humanized the AI rather than seeing it simply as a tool also felt that the AI, and not the people involved in the creation process, should receive credit for the AI art.
When the researchers asked who deserves the most recognition in the production of AI art, participants gave the most credit to the artists who fed the learning algorithms with data and trained them, followed by the curators, and then the technicians who programmed the algorithms.
The least recognition went to the “crowd”, that is, the mass of Internet users who produce the material on which AIs are typically trained. However, respondents who humanized the AI gave more recognition to the crowd and the technicians, and proportionally less to the artists.
A similar picture emerged when respondents were asked who is responsible when, for example, an AI artwork violates copyright. Here too, those who humanized the AI placed more of the responsibility on the AI itself.
A major finding of the study is that the degree to which people humanize AI can be actively influenced by the language used to report on AI systems in art. The creative process can be described as though the AI, supported merely by the artist as a collaborator, conceives and produces new works of art on its own.
Alternatively, it can be described as the artist conceiving the artwork while the AI merely executes the basic commands the artist provides.
These different descriptions changed the degree of humanization and, with it, which of the human actors the participants credited and held responsible for AI art.
Because AI is increasingly penetrating our society, we will have to pay more attention to who is responsible for what is created with AI. In the end, there are humans behind every AI. This is particularly relevant when an AI malfunctions and causes damage, for example in an accident involving an autonomous vehicle. It is therefore important to understand that language influences our view of AI and that humanizing AI leads to problems in assigning responsibility.
Iyad Rahwan, Study Co-Author and Director, Center for Humans and Machines, Max Planck Institute for Human Development
Journal Reference:
Epstein, Z., et al. (2020) Who Gets Credit for AI-Generated Art? iScience. doi.org/10.1016/j.isci.2020.101515.