Neutrosophic Logic Revolutionizes Sign Language Recognition

In a recent paper published in the journal Computers, researchers proposed a neutrosophic hidden Markov model (NHMM) to address the challenges of signer-independent sign language recognition (SLR). This approach aimed to account for the inherent uncertainty and variability in sign gestures and their observations using neutrosophic logic.

Schematic diagram of a hand gesture recognition system. Image Credit: https://www.mdpi.com/2073-431X/13/4/106

The research also presented a feature extraction method based on singular value decomposition (SVD) to reduce the complexity of gesture images. Moreover, the authors evaluated the proposed NHMM on a large vocabulary of Arabic sign language gestures, comparing its performance with existing methods.

Background

SLR is the challenging task of translating hand gestures into written or spoken language. It bridges communication gaps between deaf and hearing individuals and enables gesture-based human-computer interaction. However, SLR faces hurdles such as the vast vocabulary size, limited training data, and variability in gesture execution.

Additionally, inherent uncertainty and ambiguity in sign recognition pose challenges. HMMs are powerful tools for sequential data such as gestures, but their limitations hinder SLR applications. For instance, standard HMMs assume a Gaussian distribution for observations, so they struggle with the natural variations in gestures. They also scale poorly to large vocabularies and cannot handle the uncertainty and ambiguity present in real-world sign recognition.

To address these limitations, researchers explored alternative approaches such as neutrosophic logic, a generalization of fuzzy logic that represents indeterminacy, uncertainty, and ambiguity explicitly. It uses neutrosophic sets with truth, indeterminacy, and falsity membership functions. These functions can capture the variations in gesture execution by assigning degrees of truth and falsity that account for natural imperfections.

Additionally, the indeterminacy function can handle the ambiguity present in some signs that might have slightly different hand shapes but convey the same meaning. By incorporating these aspects, NHMMs have the potential to improve the accuracy and robustness of SLR systems.
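To make the idea concrete, the triple of membership functions can be sketched as a small data structure. This is an illustrative example only, not code from the paper; the class name, field names, and the sample degree values are assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicValue:
    """A neutrosophic triple: degrees of truth, indeterminacy, and falsity.

    Unlike a fuzzy membership (a single number in [0, 1]), the three
    components are independent, so an ambiguous observation can carry
    high indeterminacy without forcing truth and falsity to sum to 1.
    """
    truth: float
    indeterminacy: float
    falsity: float

    def __post_init__(self):
        for v in (self.truth, self.indeterminacy, self.falsity):
            if not 0.0 <= v <= 1.0:
                raise ValueError("each component must lie in [0, 1]")

# A clearly executed gesture: high truth, little ambiguity (example values).
clear_sign = NeutrosophicValue(truth=0.9, indeterminacy=0.05, falsity=0.1)

# Two signs with similar hand shapes: the indeterminacy component records
# the ambiguity instead of splitting it between truth and falsity.
ambiguous_sign = NeutrosophicValue(truth=0.5, indeterminacy=0.6, falsity=0.4)
```

Note that the three components of `ambiguous_sign` sum to more than 1, which a single fuzzy membership degree cannot express.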

About the Research

To enhance the scalability and robustness of SLR systems, the authors proposed a novel approach based on NHMMs. Their technique involves several key steps:

  • Hand region detection: The system first isolated the hand region from the background using a color-based segmentation method. This method leveraged the YCbCr color space (one luma component, Y, and two chroma components, Cb and Cr) and histogram-based thresholding to identify skin pixels effectively.
  • Feature extraction: To reduce data complexity while preserving key information, the authors employed SVD on the segmented hand image. The resulting singular values served as the feature vector for each gesture.
  • Gesture recognition: Finally, the system utilized an NHMM to recognize gestures based on the extracted features. This NHMM incorporated the initial state distribution, the neutrosophic transition probability matrix, and the observation matrix. The NHMM parameters were estimated using a neutrosophic version of the Baum-Welch algorithm, an iterative method that maximizes the model's likelihood on the training data. Gesture recognition was then performed with a neutrosophic version of the Viterbi algorithm, which efficiently finds the most probable sequence of hidden states given the observed features.
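The first two steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the fixed Cb/Cr thresholds and the number of singular values retained are assumed values (the paper derives its thresholds from histograms of skin samples).

```python
import numpy as np

def ycbcr_skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Flag skin pixels via fixed Cb/Cr thresholds (illustrative values).

    Converts RGB to the chroma components of YCbCr (BT.601 coefficients)
    and keeps pixels whose Cb and Cr fall in a typical skin-tone range.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))

def svd_features(gray_hand, k=16):
    """Compress a segmented hand image to its k largest singular values.

    The singular values summarize the image's dominant structure and are
    stable under small perturbations, giving a compact feature vector.
    """
    s = np.linalg.svd(gray_hand.astype(np.float64), compute_uv=False)
    return s[:k]
```

A skin-toned pixel such as RGB (200, 120, 90) falls inside both chroma ranges, while a black background pixel does not, so the mask isolates the hand before SVD is applied.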

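For intuition about the decoding step, here is the classical Viterbi recursion that the paper's neutrosophic variant generalizes. In the neutrosophic version the scalar probabilities below become truth/indeterminacy/falsity triples, but the dynamic-programming shape is the same; this sketch is illustrative, not the authors' code.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most probable hidden-state sequence for a classical HMM.

    pi: initial state probabilities, shape (n,)
    A:  transition matrix, A[i, j] = P(state j | state i)
    B:  observation matrix, B[i, o] = P(obs o | state i)
    Log-space avoids underflow on long observation sequences.
    """
    T = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, len(pi)), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: from i to j
        back[t] = scores.argmax(axis=0)      # best predecessor of each j
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    # Backtrack from the best final state.
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With sticky transitions and reliable emissions, the decoded path simply tracks the observations, e.g. observations [0, 0, 1, 1] decode to states [0, 0, 1, 1].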
The study assessed the performance of the NHMM-based SLR system using a dataset comprising over 6000 isolated gesture signs obtained from various sign language dictionaries. This dataset encompassed a large vocabulary and incorporated variations in factors like brightness, scaling, distortion, rotation, and viewpoint.

The proposed system's performance was compared against existing methods like HMM, fuzzy HMM, and interval type-2 fuzzy HMM. The evaluation metric employed was the gesture recognition rate, calculated as the ratio of correctly recognized gestures to the total number of gestures.
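The metric is straightforward to compute; the gesture labels below are made-up examples.

```python
def recognition_rate(predicted, actual):
    """Fraction of gestures whose predicted label matches the ground truth."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Two of three hypothetical gestures recognized correctly -> 2/3.
rate = recognition_rate(["hello", "yes", "no"], ["hello", "yes", "yes"])
```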

Research Findings

The outcomes revealed that the novel technique achieved an impressive gesture recognition rate of 98.5%, surpassing other methods. Moreover, the results demonstrated the proposed system's robustness to gesture variations and its ability to handle data uncertainty and ambiguity. The authors attributed the system's superior performance to the utilization of neutrosophic logic, which offered enhanced flexibility and expressiveness in modeling gesture recognition challenges.

The proposed system holds promise for various applications across different domains:

  • Communication: It can facilitate communication between deaf and hearing individuals by accurately transcribing sign language into written or spoken form. Additionally, it enables gesture-based communication between humans and computers, allowing gestures to serve as input commands or queries.
  • Education: The system aids in educating deaf individuals by providing feedback and guidance on gesture execution and pronunciation. Moreover, it supports sign language learning through interactive and adaptive lessons and exercises.
  • Entertainment: It enhances the entertainment experience for deaf individuals by offering subtitles or voice-overs for movies, shows, games, or music that utilize sign language. Furthermore, the system enables the creation of innovative content based on sign language, such as poems, stories, songs, or animations.

Conclusion

In summary, the novel NHMM-based approach aimed to enhance the scalability and robustness of SLR systems by addressing the uncertainty and ambiguity in gesture recognition. It achieved a notable gesture recognition rate and surpassed alternative methods.

Moving forward, the researchers proposed directions for future research, including expanding the system to accommodate continuous and dynamic gestures, integrating facial expressions and body movements, and enhancing computational efficiency and memory usage.

Journal Reference

Al-Saidi, M.; Ballagi, Á.; Hassen, O.A.; Saad, S.M. Cognitive Classifier of Hand Gesture Images for Automated Sign Language Recognition: Soft Robot Assistance Based on Neutrosophic Markov Chain Paradigm. Computers 2024, 13, 106. https://doi.org/10.3390/computers13040106


Article Revisions

  • May 14 2024 - Title changed from "Neutrosophic Hidden Markov Model for Sign Language Recognition" to "Neutrosophic Logic Revolutionizes Sign Language Recognition"

Written by

Muhammad Osama

Muhammad Osama is a full-time data analytics consultant and freelance technical writer based in Delhi, India. He specializes in transforming complex technical concepts into accessible content. He has a Bachelor of Technology in Mechanical Engineering with specialization in AI & Robotics from Galgotias University, India, and he has extensive experience in technical content writing, data science and analytics, and artificial intelligence.
