
Multi-Modal Communication Could Enhance Robot Avatars

UK-based scientists have found that, for robots, communicating through actions as well as words is more effective than words alone. The team found that when robot avatars use their hands as they talk, they can communicate as effectively as human beings.

A man is getting the NAO robot to wave. Credit: Paul Bremner

Avatars came into existence in the 1980s, and today they are used by millions of people worldwide. They have also become commercially important, in fields ranging from artificial intelligence and psychotherapy to social media and high-end video games. Avatars are used to market products, resolve customer issues, and teach and entertain people.

Avatars have become highly sophisticated, and demand for them has grown rapidly. As they are increasingly relied upon to convey messages, finding a technique that effectively enhances communication between humans and avatars has become a major challenge.

Scientists Ute Leonards and Paul Bremner performed a study to investigate this. Their hypothesis was that if avatars use hand gestures and speech together, people will understand them better. Using gesture and speech together to communicate is called multi-modal communication. Because an iconic gesture has a single specific meaning, such as miming opening a book or a door, pairing iconic gestures with speech should make a message easier to understand.

The scientists wanted to test whether people could understand an avatar using multi-modal communication as well as they understand a human actor. They also wanted to investigate whether an avatar's multi-modal communication is more effective than speech alone.

To test this hypothesis, the researchers recorded a human actor performing certain iconic gestures while delivering verbal dialog. They then recorded an avatar using the same recorded dialog and imitating the iconic gestures. Participants watched the clips of both the avatar and the actor and were asked to interpret the message each conveyed. The participants understood the avatar as well as they understood the actor, and the study showed that multi-modal communication is more effective and easier to interpret than speech alone.

Making an avatar gesture in the same way as a human is a significant challenge. The actor's movements were tracked with a Microsoft Kinect sensor, so his hand gestures could be captured as data. This data was then used by the avatar to imitate the actor's gestures. However, the approach has limitations, because the avatar lacks a human's range of movement and hand shape. The scientists are now planning to address this limitation.
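The article does not describe how the captured motion was mapped onto the avatar, but the general idea behind such retargeting can be sketched: convert tracked 3-D joint positions into joint angles, then clamp them to whatever range the robot can actually reach. The Python snippet below is a minimal illustration only; the joint names, limits, and coordinate conventions are assumptions for the example and are not taken from the researchers' implementation or the robot's published specification.

```python
import math

# Illustrative joint limits (radians) for a small humanoid's left shoulder.
# These are stand-in values, not the robot's actual specification.
JOINT_LIMITS = {
    "LShoulderPitch": (-2.0, 2.0),
    "LShoulderRoll": (-0.3, 1.3),
}

def clamp(value, low, high):
    """Keep a target angle inside the robot's reachable range."""
    return max(low, min(high, value))

def shoulder_angles(shoulder, elbow):
    """Estimate shoulder pitch/roll from two tracked 3-D joint positions.

    `shoulder` and `elbow` are (x, y, z) points in a Kinect-style frame:
    x to the sensor's right, y up, z pointing away from the sensor.
    """
    dx = elbow[0] - shoulder[0]
    dy = elbow[1] - shoulder[1]
    dz = elbow[2] - shoulder[2]
    pitch = math.atan2(-dy, -dz)                # raise or lower the upper arm
    roll = math.atan2(dx, math.hypot(dy, dz))   # swing the arm out from the torso
    return {
        "LShoulderPitch": clamp(pitch, *JOINT_LIMITS["LShoulderPitch"]),
        "LShoulderRoll": clamp(roll, *JOINT_LIMITS["LShoulderRoll"]),
    }

# Example: upper arm pointing forward and slightly out from the body.
targets = shoulder_angles(shoulder=(0.0, 0.0, 2.0), elbow=(0.1, -0.05, 1.75))
print(targets)  # angles a motion API could then send to the avatar's joints
```

Clamping is where the limitation described above shows up: gestures that rely on joint ranges or hand shapes the robot does not have are inevitably approximated rather than reproduced exactly.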

Despite these limitations, the study indicated that the new technique for conveying human gestures to an avatar was successful. According to the team, the avatar's gestures and speech can be interpreted as easily as a human's. The scientists plan to take the study further and find a more efficient way to translate human gestures to avatars. This would involve studying different cultures, particularly Italy, whose citizens are known for their expressive hand gestures.
