Oct 25, 2016
A squeeze of the arm, a pat on the shoulder, or a slap in the face – touch is an important part of social interaction between people. Social touch, however, is relatively unexplored territory when it comes to robots, even though robots operate with increasing frequency in society at large rather than just in the controlled environment of a factory. Merel Jung is conducting research into social touch interaction with robots at the University of Twente CTIT research institute. Using a relatively simple system – a mannequin’s arm fitted with pressure sensors and connected to a computer – she has succeeded in getting it to correctly recognize sixty percent of all touches. The research is published today in the scientific Journal on Multimodal User Interfaces.
Robots are becoming more and more social. A well-known example of a social robot is Paro, a robot seal that is used in care homes, where it has a calming effect on elderly residents and stimulates their senses. Positive results have been achieved with the robot for this target group, but we still have a long way to go before robots can correctly recognize, interpret, and respond to different types of social touch the way people can. It is a relatively little-explored area of science, but one in which much could be achieved in the long term. Examples that come to mind are robots that help children with autism improve their social interactions, or robots that train medical students for real-life situations.
Sixty percent
Merel Jung is therefore carrying out research at the University of Twente into social touch interaction between humans and robots. For a robot to respond correctly to being touched, she has identified four stages: the robot must perceive the touch, recognize it, interpret it, and then respond in the appropriate way. In this phase of her research, Jung focused on the first two stages – perceiving and recognizing.

With a relatively simple experiment, involving a mannequin’s arm fitted with 64 pressure sensors, she succeeded in correctly distinguishing sixty percent of almost 8,000 touches (distributed over fourteen different types of touch at three levels of intensity). Sixty percent may not seem very high at first glance, but it is a good figure considering that there was no social context at all and that various touches are very similar to each other – the difference between grabbing and squeezing, for example, or between stroking roughly and rubbing gently. In addition, the people touching the mannequin’s arm had been given no instructions on how to ‘perform’ their touches, and the computer system was not able to ‘learn’ how individual ‘touchers’ operated. Under similar circumstances, people would not be able to correctly recognize every single touch either.

In her follow-up research, which is currently under way, Jung is concentrating on how robots can interpret touch in a social context. The expectation is that, by taking context into account, robots will be better able to respond to touch correctly, bringing the touch-sensitive robot one step closer to reality.
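What might the ‘recognize’ stage look like in practice? The press release does not describe the actual classification method used in the study, but as a minimal sketch: each touch arrives as a sequence of pressure frames from the 64-sensor grid (assumed here to be laid out 8×8), is summarized into a fixed-length feature vector, and is then passed to a standard classifier. Everything below – the feature set, the choice of classifier, and the synthetic placeholder data – is an assumption for illustration only, written in Python with scikit-learn.

# Illustrative sketch only: not the method from the study.
# A touch is a (T, 8, 8) sequence of pressure frames from an
# assumed 8x8 grid of 64 sensors; we reduce it to simple
# hand-crafted features and train a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(frames: np.ndarray) -> np.ndarray:
    """Summarize one touch (T, 8, 8) as a fixed-length vector."""
    per_frame_total = frames.reshape(len(frames), -1).sum(axis=1)
    return np.array([
        frames.mean(),                    # overall mean pressure
        frames.max(),                     # peak pressure (intensity cue)
        per_frame_total.std(),            # temporal variation (pat vs. press)
        (frames > frames.mean()).mean(),  # rough contact area
        len(frames),                      # duration in frames
    ])

# Synthetic placeholder data standing in for recorded touches;
# labels 0..13 mimic the fourteen gesture types in the study.
rng = np.random.default_rng(0)
touches = [rng.random((rng.integers(20, 60), 8, 8)) for _ in range(500)]
labels = rng.integers(0, 14, size=500)

X = np.stack([extract_features(t) for t in touches])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

On real recordings such as the roughly 8,000 touches collected for the study, such a classifier would be trained on labelled examples of the fourteen gesture types; with the random placeholder data above, accuracy stays near chance (about one in fourteen), as expected.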
Research
The research published in the Journal on Multimodal User Interfaces was performed by Merel Jung, Mannes Poel, and Dirk Heylen of the Human Media Interaction research group of the University of Twente CTIT research institute, and by Ronald Poppe of Utrecht University. The research is co-funded by the Dutch national ICT programme COMMIT.