A face can unlock a smartphone, grant entry to a secure building and speed up passport control at airports, validating an individual’s identity for a range of purposes.
A global research team from Australia, New Zealand and India has taken facial recognition technology a step further, using a person’s facial expressions to control objects in a virtual reality setting without a touchpad or handheld controller.
In a study headed by Dr. Arindam Dey, a researcher at the University of Queensland, human-computer interaction experts used neural processing methods to capture a person’s frown, smile and clenched jaw. Each expression was then mapped to a specific action in virtual reality environments.
Professor Mark Billinghurst of the University of South Australia, one of the scientists involved in the experiment, says the system was developed to recognize various facial expressions through an EEG headset.
A smile was used to trigger the ‘move’ command; a frown for the ‘stop’ command and a clench for the ‘action’ command, in place of a handheld controller performing these actions. Essentially we are capturing common facial expressions such as anger, happiness, and surprise and implementing them in a virtual reality environment.
Mark Billinghurst, Professor, University of South Australia
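The expression-to-command mapping Billinghurst describes can be sketched as a simple lookup. This is an illustrative assumption, not the study’s actual code: the expression labels are taken from the article, while the classifier that would produce them from EEG signals, the `dispatch` function and the `"idle"` fallback are hypothetical.

```python
# Hypothetical sketch of the mapping described in the article:
# smile -> "move", frown -> "stop", clenched jaw -> "action".
# In the study, expressions were detected from EEG signals; here we
# assume an upstream classifier has already produced a text label.

EXPRESSION_COMMANDS = {
    "smile": "move",
    "frown": "stop",
    "clench": "action",
}

def dispatch(expression: str) -> str:
    """Map a classified facial expression to a VR command.

    Unrecognized labels fall back to "idle" (an assumption; the
    article does not say how unclassified input is handled).
    """
    return EXPRESSION_COMMANDS.get(expression, "idle")

if __name__ == "__main__":
    for label in ["smile", "clench", "frown", "neutral"]:
        print(label, "->", dispatch(label))
```

A real system would replace the string labels with the output of an EEG classification pipeline and route the commands into the VR engine’s input layer.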
The researchers designed three virtual environments: happy, neutral and scary. They measured each person’s cognitive and physiological state while they were immersed in each scenario.
By reproducing a smile, a frown and a jaw clench, the researchers explored whether changes in the environment triggered one of the three expressions, depending on the user’s physiological and emotional responses.
For instance, in the happy environment, users were tasked with strolling through a park while catching butterflies with a net. The user moved when they smiled and stopped when they frowned.
In the neutral environment, users were tasked with navigating a workshop to pick up items scattered across it. A clenched jaw triggered an action, in this case picking up each object, while the start and stop movement commands were triggered by a smile and a frown.
Similar facial expressions were used in the scary setting, where participants navigated through an underground base to shoot zombies.
Overall, we expected the handheld controllers to perform better as they are a more intuitive method than facial expressions, however people reported feeling more immersed in the VR experiences controlled by facial expressions.
Mark Billinghurst, Professor, University of South Australia
He adds that relying on facial expressions in a virtual reality setting is a demanding task for the brain, but it gives users a more lifelike experience.
“Hopefully with some more research we can make it more user friendly,” he states.
Alongside providing a new way to use virtual reality, the method will also enable people with disabilities, such as amputees or those with motor neuron disease, to interact hands-free in a virtual reality setting, without needing controllers designed for able-bodied people.
Scientists say the technology could also be used to complement handheld controllers, as facial expressions are a more natural form of interaction.
Journal Reference:
Dey, A., et al. (2021) Effects of Interacting with Facial Expressions and Controllers in Different Virtual Environments on Presence, Usability, Affect, and Neurophysiological Signals. International Journal of Human-Computer Studies. https://doi.org/10.1016/j.ijhcs.2021.102762.