Sign languages are as diverse as the cultures they come from, with each one featuring thousands of unique signs. While this richness is amazing, it also makes sign languages tricky to learn and translate. That’s where artificial intelligence (AI) comes in.
Researchers are now using AI to improve word-level sign language recognition—the process of turning signs into words—and a team from Osaka Metropolitan University has taken a big step forward in making it more accurate.
The challenge with older methods is that they’ve mainly focused on the signer’s general movements. That sounds fine in theory, but it often misses the small but important details, like subtle changes in hand shapes or how the hands move in relation to the body. These nuances can completely change a sign’s meaning.
To tackle this, Associate Professors Katsufumi Inoue and Masakazu Iwamura, along with their colleagues—including a team from the Indian Institute of Technology Roorkee—decided to go beyond just tracking movements. They added data about hand shapes and facial expressions and mapped out the positions of the hands relative to the body. This gave the AI more context, helping it “read” signs with much greater accuracy.
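For readers curious what a multi-stream setup like this can look like in code, here is a minimal sketch in PyTorch: one encoder per stream (hand regions, face region, skeletal keypoints), with the streams fused for word-level classification. The layer choices, feature dimensions, and GRU-based temporal pooling are illustrative assumptions for this sketch, not the authors' published architecture.

```python
# Minimal sketch of a multi-stream classifier for word-level sign recognition.
# All dimensions and layer choices below are illustrative assumptions.
import torch
import torch.nn as nn


class StreamEncoder(nn.Module):
    """Encodes one input stream (e.g. hand crops, face crops, or skeleton
    keypoints) into a fixed-size representation of the whole clip."""

    def __init__(self, in_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)
        self.temporal = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, in_dim) per-frame features for this stream
        h = torch.relu(self.proj(x))
        _, last = self.temporal(h)      # summarize the frame sequence
        return last.squeeze(0)          # (batch, hidden_dim)


class MultiStreamSignClassifier(nn.Module):
    """Fuses local-region streams (hands, face) with skeletal information
    before classifying the sign as one word from the vocabulary."""

    def __init__(self, num_words: int, hand_dim=512, face_dim=512, skel_dim=99):
        super().__init__()
        self.hand_stream = StreamEncoder(hand_dim)   # e.g. CNN features of hand crops
        self.face_stream = StreamEncoder(face_dim)   # e.g. CNN features of face crops
        self.skel_stream = StreamEncoder(skel_dim)   # e.g. 33 keypoints x (x, y, z)
        self.classifier = nn.Linear(256 * 3, num_words)

    def forward(self, hands, face, skeleton):
        fused = torch.cat(
            [self.hand_stream(hands),
             self.face_stream(face),
             self.skel_stream(skeleton)],
            dim=-1,
        )
        return self.classifier(fused)    # logits over the sign vocabulary


if __name__ == "__main__":
    model = MultiStreamSignClassifier(num_words=2000)
    # One 32-frame clip: pre-extracted hand/face features plus skeleton keypoints.
    hands = torch.randn(1, 32, 512)
    face = torch.randn(1, 32, 512)
    skeleton = torch.randn(1, 32, 99)
    print(model(hands, face, skeleton).shape)   # torch.Size([1, 2000])
```

The point of the separate streams is that hand and face crops preserve fine local detail that a whole-body view blurs out, while the skeleton stream supplies the hands' positions relative to the body; fusing them gives the classifier both kinds of context at once.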
“We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods. In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries.”
Katsufumi Inoue, Associate Professor, Graduate School of Informatics, Osaka Metropolitan University
Journal Reference
Maruyama, M., et al. (2025) Word-Level Sign Language Recognition with Multi-Stream Neural Networks Focusing on Local Regions and Skeletal Information. IEEE Access. https://doi.org/10.1109/ACCESS.2024.3494878