Applying Psychological Methodology to Human-Robot Interaction

It is referred to as the uncanny valley. Those who are fans of the HBO series “Westworld” or who have watched the movie “Ex Machina” may already be acquainted with the phenomenon.

Assistant professor of psychology Dr. Nathan Tenhundfeld, left, recently established the Advanced Teaming, Technology, Automation, and Computing Lab to study human-machine teaming. (Image credit: Michael Mercier|UAH)

But for those who are unaware, it is essentially the idea that humans are at ease with robots possessing humanoid features, but become very uneasy when a robot looks almost, but not precisely, human.

For Dr. Nathan Tenhundfeld, however, the uncanny valley is just one of numerous factors he must take into consideration while studying human-automation interaction as an assistant professor in the Department of Psychology at The University of Alabama in Huntsville (UAH).

We’re at a unique point with the development of the technology where automated systems or platforms are no longer a tool but a teammate that is incorporated into our day-to-day experiences. So we’re looking at commercial platforms that offer the same systems but in different forms to see whether a certain appearance or characteristic affects the user and in what way.

Dr. Nathan Tenhundfeld, Department of Psychology, UAH

Take, for instance, the latest push by the U.S. Department of Defense to add automation to warfighting. As an idea, it makes sense: the more robots deployed to fight wars, the fewer human lives lost. But in practice, it is quite complex. What should a robot soldier look like? A person? A machine?

To answer these questions, Dr. Tenhundfeld has collaborated with a colleague at the U.S. Air Force Academy, where he carried out research as a postdoctoral fellow, to use "a massive database of robots" to establish how different components might affect the perception of a robot's capabilities.

We want to know things like, does a robot with wheels or a track fit better with our expectation of what we should be sending to war versus a humanoid robot? And, does having a face on the robot affect whether we want to put it in harm’s way?

Dr. Nathan Tenhundfeld, Department of Psychology, UAH

Even if there were simple answers—and there are not—there is another similarly crucial factor to consider beyond the robot's user interface: trust. For a robot to be effective, the user must trust the data it is supplying.

To illustrate, Dr. Tenhundfeld points to the research he did on the Tesla Model X while at the Academy. Studying the car's autopark feature closely, he and his team wanted to establish a user's willingness to let the car complete its task as a function of the user's risk-taking preferences and confidence in their own abilities.

“The data suggest automated vehicles tend to be safer than humans, but humans don’t like to relinquish control,” he stated with a laugh. “So we had this pattern where there were high intervention rates at first, but as they developed trust in the system—after it wasn’t so novel and it started to meet their expectations—they began to trust it more and the intervention rates went down.”

The other side of that coin, however, is the prospect of empathy for, or attachment to, a specific automated system in which users have developed trust. To demonstrate this concept, he recounts a case study of explosive-ordnance disposal teams who use robots to detonate bombs safely. “When they have to send the robots back to get repaired, they have an issue when they’re given a different robot,” he stated. “So they’ve placed this trust in a specific robot even though the intelligence/capability is the same across all of the robots.”

And in case it sounds as though Dr. Tenhundfeld already has plenty to factor in, there is also situational trust, which falls somewhere between trust and overtrust. Here, a user may develop a certain level of trust in a system overall, but then realize they do not trust certain features as much as others.

Say I have an automated system, or robot, providing intelligence in a mission-planning environment, and it screws that up. I might not trust it in a different environment, such as on the battlefield, even though it has a different physical embodiment for use in that environment, and may be distinctly capable on the battlefield.

Dr. Nathan Tenhundfeld, Department of Psychology, UAH

In brief, the increasingly digital nature of today's world introduces a seemingly endless list of considerations when it comes to ensuring that automated systems can successfully meet human needs—all of which Dr. Tenhundfeld must take into account in the research he is conducting in his Advanced Teaming, Technology, Automation, and Computing Lab, or ATTAC Lab.

However, given UAH’s role as an academic partner to this emerging industry, it is a challenge that he and his fellow scientists have accepted. “Businesses are focused on being first to market with a product,” he stated. “We help them improve the product so that it works well for the user.”
