
Research Examines Human–Robot Interaction Using Non-Verbal Turing Test

People behave in ways that other humans readily recognize as human-like. Researchers at IIT-Istituto Italiano di Tecnologia (IIT) have recently attempted to recreate these human-like features by incorporating a nonverbal Turing test into a human–robot interaction task.

Agnieszka Wykowska, Coordinator of IIT’s “Social Cognition in Human-Robot Interaction” lab in Genova, interacting with the humanoid robot iCub. Image Credit: IIT-Istituto Italiano di Tecnologia.

The joint-action experiment involved human volunteers and the humanoid robot iCub. The researchers found that certain aspects of human behavior, such as response timing, can be transferred to the robot in such a way that people cannot tell whether they are interacting with a person or a machine.

The research was published in the journal Science Robotics and is a first step toward understanding what kind of behavior robots should display in future applications, from healthcare to industrial production lines.

The research group is coordinated by Agnieszka Wykowska, head of IIT’s “Social Cognition in Human-Robot Interaction” lab in Genova and European Research Council (ERC) grantee for the project “InStance,” which addresses the question of when and under what conditions people treat robots as intentional agents.

The most exciting result of our study is that the human brain has sensitivity to extremely subtle behavior which manifests humanness. In our non-verbal Turing test, human participants had to judge whether they were interacting with a machine or a person, by considering only the timing of button presses during a joint action task.

Agnieszka Wykowska, Head, Social Cognition in Human–Robot Interaction, Italian Institute of Technology

The study team concentrated on two fundamental aspects of human behavior: timing and accuracy in responding to external stimuli, which they had previously measured to establish an average human profile. The researchers built their experiment on this profile, instructing participants to react to visual stimuli on a screen.
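To make the idea of an "average human profile" concrete, the sketch below aggregates hypothetical reaction-time and accuracy data for a single participant. The trial values, variable names, and choice of summary statistics are illustrative assumptions, not the authors' actual data or analysis.

```python
import statistics

# Hypothetical trial records for one participant reacting to visual stimuli:
# (reaction_time_ms, responded_correctly). All values are illustrative.
trials = [(412, True), (388, True), (455, False), (401, True), (430, True)]

reaction_times = [rt for rt, _ in trials]
accuracy = sum(correct for _, correct in trials) / len(trials)

# The "average human profile" is summarized here as the central tendency
# and spread of reaction times plus overall accuracy.
profile = {
    "mean_rt_ms": statistics.mean(reaction_times),
    "rt_sd_ms": statistics.stdev(reaction_times),
    "accuracy": accuracy,
}
print(profile)
```

In the study itself, such a profile would be aggregated across many participants before being used to program the robot.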

Participants, divided into human–robot pairs, played the game: each person was partnered with a robot whose responses were either controlled remotely by the person from the other pair or pre-programmed.

In our experiment, we pre-programmed the robot by slightly varying the average human response profile. In this way, the possible robot responses were of two types: on one hand, it was fully human-like since it was controlled by a person remotely, on the other hand, it was lacking some human-like features because it was pre-programmed.

Francesca Ciardo, Study First Author, Social Cognition in Human–Robot Interaction, Italian Institute of Technology

Ciardo is a Marie Skłodowska-Curie fellow in Wykowska’s group in Genova.
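One way to read the design described in the quote above is that the robot's pre-programmed responses track the average human profile while its variability is slightly altered. The sketch below illustrates that idea by sampling button-press latencies from a Gaussian centered on an assumed human mean with a scaled-down spread; the parameter values, function name, and choice of distribution are assumptions for illustration, not the study's actual implementation.

```python
import random

# Assumed parameters of the average human profile (illustrative values).
MEAN_RT_MS = 420.0
HUMAN_RT_SD_MS = 60.0

def preprogrammed_latency(variability_scale: float = 0.5) -> float:
    """Simulated robot button-press latency in milliseconds.

    The robot responds around the human mean, but with its variability
    scaled down, so its timing slightly deviates from the full human profile.
    """
    return random.gauss(MEAN_RT_MS, HUMAN_RT_SD_MS * variability_scale)

# One block of simulated robot response latencies.
print([round(preprogrammed_latency()) for _ in range(10)])
```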

When the robot was pre-programmed, the people interacting with it could not tell whether it was being controlled by a human or running the pre-programmed behavior. This means that, in this particular task, the robot passed this variant of the nonverbal Turing test.
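"Passing" this variant of the test amounts to participants' human-versus-machine judgments sitting at chance level. A minimal way to check that, sketched below with made-up numbers, is a one-sided binomial test against a 50% guessing baseline; the counts and the specific test are assumptions for illustration, not the analysis reported in the paper.

```python
from math import comb

def one_sided_binomial_p(correct: int, total: int, chance: float = 0.5) -> float:
    """Probability of at least `correct` right human/machine judgments
    if participants were merely guessing at the chance level."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical outcome: 54 correct identifications out of 100 judgments of
# the pre-programmed robot. A large p-value means performance is at chance,
# i.e. the robot's timing is indistinguishable from a human's.
print(f"p = {one_sided_binomial_p(54, 100):.3f}")
```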

The next steps of scientific investigation would be to design a more complex behavior on the robot, to have a more elaborate interaction with the humans and see what specific parameters of that interaction are perceived as human-like or mechanical.

Agnieszka Wykowska, Head, Social Cognition in Human–Robot Interaction, Italian Institute of Technology

Journal Reference:

Ciardo, F., et al. (2022) Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal Turing test. Science Robotics. https://doi.org/10.1126/scirobotics.abo1241.
