Editorial Feature

Robotics that Improve Hand-Object Interaction


Intention is the first step of performing an action. When you think about drinking a glass of water, for example, you are forming the intention; moving your hand towards the glass and grasping it is the action itself. Just by looking at your behavior, others can infer your intentions and guess what action you are about to perform.

For people with quadriplegia or spinal cord injury, conditions that severely limit movement, intentions can no longer be translated into actions. In such cases, even simple actions such as holding a glass of water become impossible.

Wearable robots have been introduced to help people with physical impairments, but recognizing the user's intention remains a challenging task. Injury or loss of limbs during wars motivated early attempts to duplicate the hand's movements with rather crude prosthetics. The sophistication of the human arm and hand, which involve more than 30 muscles and numerous tactile and proprioceptive receptors, has until recently remained out of technology's reach. Advances in actuators, sensors and materials over the last few years, however, are beginning to allow the replication of the function of our most dexterous extremity: the hand.

Measuring Intention

One way to detect the user's intention is through bio-signal sensors such as electroencephalography (EEG) and electromyography (EMG). EEG records the electrical activity of the brain, while EMG picks up the electrical signals that the brain transmits to the muscles. Machine learning methods then determine the threshold of electrical activity that should be classified as an intention.
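
To make the idea concrete, here is a minimal sketch in Python (with synthetic signal values and an invented moving_rms/learn_threshold pipeline, not the processing chain of any particular device): the activation threshold is learned from a resting EMG recording, and an intention is flagged when the signal envelope exceeds it.

```python
import numpy as np

def moving_rms(emg: np.ndarray, window: int = 200) -> np.ndarray:
    """Smooth a raw EMG trace into an amplitude envelope (root mean square)."""
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(np.square(emg), kernel, mode="same"))

def learn_threshold(rest_emg: np.ndarray, k: float = 3.0) -> float:
    """Learn the activation threshold from a resting-state recording:
    mean plus k standard deviations of the resting envelope."""
    envelope = moving_rms(rest_emg)
    return envelope.mean() + k * envelope.std()

def detect_intention(emg: np.ndarray, threshold: float) -> bool:
    """Flag an intention when the EMG envelope exceeds the learned threshold."""
    return bool(moving_rms(emg).max() > threshold)

# Synthetic example (placeholder values, not real EMG data).
rng = np.random.default_rng(0)
rest = rng.normal(0.0, 0.05, 2000)               # quiet baseline recording
active = rest.copy()
active[800:1200] += rng.normal(0.0, 0.4, 400)    # burst of muscle activity
threshold = learn_threshold(rest)
print(detect_intention(active, threshold))        # True: intention detected
```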

Another means of detecting the user's intentions is through mechanical sensors, such as pressure and bending sensors. Pressure sensors provide feedback about contact between the wearable robotic hand and the object it is interacting with; this contact feedback signals a grasping intention. Bending sensors measure the joint angles of the wrist and fingers.
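
As a rough sketch of how such mechanical readings might be combined (the sensor values, thresholds and labels below are illustrative assumptions, not a real device interface):

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    fingertip_pressure: float   # normalised pressure reading, 0 (no contact) to 1
    finger_flexion_deg: float   # joint angle estimated by the bending sensor

def infer_grasp_intention(frame: SensorFrame,
                          pressure_on: float = 0.2,
                          flexion_on: float = 30.0) -> str:
    """Coarse rule: contact pressure signals a grasping intention, while
    finger flexion without contact suggests the hand is closing on an object."""
    if frame.fingertip_pressure > pressure_on:
        return "grasp"      # contact detected -> assist the grip
    if frame.finger_flexion_deg > flexion_on:
        return "closing"    # fingers bending -> prepare to assist
    return "idle"

print(infer_grasp_intention(SensorFrame(0.35, 10.0)))   # grasp
print(infer_grasp_intention(SensorFrame(0.05, 45.0)))   # closing
```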

Nevertheless, both bio-signal and mechanical sensors still have limitations. An important issue with bio-signal sensors is their dependency on the user: they need to be individually calibrated for each patient. Pressure sensors, in turn, are not suitable for people with fully impaired limbs.

A recent study introduced a new paradigm for detecting the patient's intentions for wearable hand robots. Spatial and temporal information captured by a first-person camera was fed into a deep learning model that predicted the user's intention. When these predictions were compared with the electrical signals recorded by EMG, the intentions detected by the paradigm preceded those signaled by the EMG, showing that the deep learning model correctly and successfully detected a patient's intention to execute a given movement.
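
The sketch below is a simplified stand-in for this kind of vision-based intention detector, assuming PyTorch and an invented EgocentricIntentionNet architecture. It only illustrates the general shape of a spatio-temporal classifier over first-person frames; it is not the model used in the study.

```python
import torch
import torch.nn as nn

class EgocentricIntentionNet(nn.Module):
    """A small CNN encodes each first-person frame; the features are pooled
    over time and classified into intention labels (e.g. rest / grasp / release)."""

    def __init__(self, num_intentions: int = 3):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> 32 features per frame
        )
        self.classifier = nn.Linear(32, num_intentions)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.frame_encoder(clip.reshape(b * t, c, h, w))
        feats = feats.reshape(b, t, -1).mean(dim=1)     # temporal average pooling
        return self.classifier(feats)                   # intention logits

# A dummy 8-frame egocentric clip at 64x64 resolution.
logits = EgocentricIntentionNet()(torch.randn(1, 8, 3, 64, 64))
print(logits.shape)   # torch.Size([1, 3])
```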

The participants in the experiment included people with no physical disability and patients with spinal cord injury. Subjects had to perform a pick-and-place motor task. While using the model, the performance of the healthy controls was analyzed in terms of the average grasping, lifting and releasing times for different objects, whereas patients were instructed only to reach for the objects without executing any other actions.

One advantage of this model is that it does not require individual calibration, because the user's intentions are predicted from the patient's arm behavior and interaction with the object. This allows a seamless interaction in which the robot can work with a person and augment their abilities.

Working Together

Fine, controlled hand movements are achieved by multiple elements working together, a coordination known as synergy. Robotics has successfully adopted this synergy framework: The Hand Embodied (THE) is a project that integrates robotics and neuroscience to replicate this cooperative model of movement execution.
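
The synergy idea is often made concrete by describing hand postures as combinations of a few coordinated patterns. The sketch below, which uses synthetic joint-angle data rather than any real recordings, shows how principal component analysis recovers such low-dimensional "synergies"; it illustrates the concept only and is not the specific method of the THE project.

```python
import numpy as np

# Synthetic stand-in for recorded hand postures: 500 grasps x 20 joint angles.
# Real synergy studies use data-glove or motion-capture recordings instead.
rng = np.random.default_rng(1)
patterns = rng.normal(size=(3, 20))                # three hidden coordination patterns
weights = rng.normal(size=(500, 3))
joint_angles = weights @ patterns + 0.05 * rng.normal(size=(500, 20))

# Principal component analysis: a few components ("synergies") capture most
# of the variance, so a handful of control signals can drive many joints.
mean_posture = joint_angles.mean(axis=0)
_, singular_values, components = np.linalg.svd(joint_angles - mean_posture,
                                               full_matrices=False)
variance = singular_values ** 2
print(round(variance[:3].sum() / variance.sum(), 3))   # close to 1.0

# A new posture is commanded by mixing the first synergies with 3 coefficients.
posture = mean_posture + np.array([1.0, -0.5, 0.2]) @ components[:3]
print(posture.shape)                                   # (20,) joint angles
```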

Control of the hands is accomplished through spatial and temporal coordination. Researchers from Yale University's GRAB Lab have used a variable-friction system to develop a two-finger design for in-hand manipulation. This simple yet unique set-up is inspired by the biomechanical properties of human fingers: as the force needed to perform an action increases, the friction at the fingers changes to accommodate the workload, letting us choose whether to grip an object or slide over it. The bone structure of our hands allows us to apply gripping force when we want to, whereas the skin helps us maintain softer contact with objects; by combining high-friction and soft contact we can perform all kinds of movements. The GRAB Lab's variable-friction fingers replicate this functionality by switching friction on and off to suit the desired action, gripping or sliding. This simple yet clever design can serve as a foundation for even more complex robots that improve hand-object interaction.
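
As a toy illustration of the control idea (the action names and the mapping below are invented for this example and are not the GRAB Lab's actual control interface), a simple controller could select a friction state per requested manipulation:

```python
from enum import Enum

class Friction(Enum):
    HIGH = "high"   # high-friction surface engaged -> firm grip on the object
    LOW = "low"     # low-friction surface engaged -> the object can slide

def select_friction(action: str) -> Friction:
    """Pick the friction state that suits the requested manipulation."""
    if action in ("grip", "lift", "hold"):
        return Friction.HIGH
    if action in ("slide", "reorient"):
        return Friction.LOW
    raise ValueError(f"unknown action: {action}")

print(select_friction("grip"))    # Friction.HIGH
print(select_friction("slide"))   # Friction.LOW
```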

Conclusion

The interplay between the sensory and motor elements of an action is what makes human movements so versatile and complex. Using this interplay as a building block for improving the hand-object interaction of people with physical disabilities drives the ongoing research and development of rehabilitation robots. Even though we are still far from completely replicating human movement, scientists from fields such as neuroscience, robotics, engineering and rehabilitation have joined efforts not only to design better motor rehabilitation techniques but also to find ways to fully restore motor function after injury.



Written by

Mihaela Dimitrova

Mihaela's curiosity has pushed her to explore the human mind and the intricate inner workings in the brain. She has a B.Sc. in Psychology from the University of Birmingham and an M.Sc. in Human-Computer Interaction from University College London.
