
AI is Being Used to Boost Sign Language Recognition Accuracy

Sign languages are as diverse as the cultures they come from, with each one featuring thousands of unique signs. While this diversity is remarkable, it also makes sign languages difficult to learn and translate. That’s where artificial intelligence (AI) comes in.

Adding data such as hand and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer’s upper body improves word recognition. Image Credit: Osaka Metropolitan University

Researchers are now using AI to improve word-level sign language recognition—the process of turning signs into words—and a team from Osaka Metropolitan University has taken a big step forward in making it more accurate.

The challenge with older methods is that they’ve mainly focused on the signer’s general movements. That sounds fine in theory, but it often misses the small but important details, like subtle changes in hand shapes or how the hands move in relation to the body. These nuances can completely change a sign’s meaning.

To tackle this, Associate Professors Katsufumi Inoue and Masakazu Iwamura, along with their colleagues—including a team from the Indian Institute of Technology Roorkee—decided to go beyond just tracking movements. They added data about hand and facial expressions and mapped out the position of the hands in relation to the body. This gave the AI more context, helping it “read” signs with much greater accuracy.
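The article does not spell out the network design, but the underlying multi-stream idea can be illustrated with a short sketch. The PyTorch code below is a hypothetical illustration, not the Osaka Metropolitan University team’s actual model: it assumes three pre-extracted input streams (global upper-body features, local hand/face crop features, and body-relative skeletal keypoints), encodes each with its own temporal network, and fuses them for word-level classification. All names and dimensions are illustrative.

```python
# Hypothetical sketch of the multi-stream idea described above -- not the
# authors' actual architecture. Assumes per-frame features have already been
# extracted: a global upper-body stream, local hand/face crops, and
# skeletal keypoints giving hand positions relative to the body.
import torch
import torch.nn as nn

class MultiStreamSignClassifier(nn.Module):
    def __init__(self, num_words, feat_dim=512, kp_dim=2 * 27, hidden=256):
        super().__init__()
        # One temporal encoder per stream (GRUs keep the sketch small).
        self.global_rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.local_rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.skeleton_rnn = nn.GRU(kp_dim, hidden, batch_first=True)
        # Fuse the three streams and classify into word labels.
        self.classifier = nn.Linear(3 * hidden, num_words)

    def forward(self, global_feats, local_feats, keypoints):
        # Each input: (batch, time, dim). Keep the last hidden state per stream.
        _, hg = self.global_rnn(global_feats)
        _, hl = self.local_rnn(local_feats)
        _, hs = self.skeleton_rnn(keypoints)
        fused = torch.cat([hg[-1], hl[-1], hs[-1]], dim=-1)
        return self.classifier(fused)

# Toy usage: 4 clips, 30 frames each, 100-word vocabulary.
model = MultiStreamSignClassifier(num_words=100)
g = torch.randn(4, 30, 512)   # upper-body appearance features
l = torch.randn(4, 30, 512)   # hand/face crop features
k = torch.randn(4, 30, 54)    # 27 keypoints x (x, y), body-relative
logits = model(g, l, k)       # (4, 100)
```

The point of such a design is that the fusion step lets the classifier weigh subtle local cues (hand shape, facial expression, hand position relative to the body) against coarse upper-body motion, which is exactly the context the researchers found was missing from movement-only methods.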

We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods. In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries.

Katsufumi Inoue, Associate Professor, Graduate School of Informatics, Osaka Metropolitan University

Journal Reference

Maruyama, M. et al. (2025) Word-Level Sign Language Recognition with Multi-Stream Neural Networks Focusing on Local Regions and Skeletal Information. IEEE Access. https://doi.org/10.1109/ACCESS.2024.3494878
