
New Model May Make VR/AR Systems More Realistic and Sensitive to User Actions

Eye-movement tracking is one of the key elements of virtual and augmented reality (VR/AR) technologies. A team from MSU, together with a professor from RUDN University, has developed a mathematical model that helps accurately predict the next gaze fixation point and reduces the inaccuracy caused by blinking.

The model would make VR/AR systems more realistic and sensitive to user actions. The results of the study were published in the SID Symposium Digest of Technical Papers.

Foveated rendering is a core technology of VR systems. When a person looks at something, their gaze is focused on the so-called foveated region, while everything else is covered by peripheral vision. A computer therefore has to render the image in the foveated region at the highest level of detail, while other parts of the scene require less computational power.
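To illustrate the idea, here is a minimal sketch (not code from the study) of how a renderer might choose a level of detail from the angular distance between a pixel and the current gaze point. The region radii and the pixels-per-degree factor are placeholder values, not figures from the paper.

```python
import math

def angular_distance_deg(px, py, gaze_x, gaze_y, pixels_per_degree=40.0):
    """Rough conversion from on-screen pixel distance to visual angle,
    assuming a fixed pixels-per-degree factor (display dependent)."""
    dist_px = math.hypot(px - gaze_x, py - gaze_y)
    return dist_px / pixels_per_degree


def shading_level(pixel_angle_deg, foveal_radius_deg=5.0, mid_radius_deg=15.0):
    """Illustrative level-of-detail rule: full resolution near the gaze
    point, progressively coarser shading in the periphery."""
    if pixel_angle_deg <= foveal_radius_deg:
        return 0   # full resolution in the foveated region
    if pixel_angle_deg <= mid_radius_deg:
        return 1   # half resolution in the near periphery
    return 2       # quarter resolution in the far periphery


# A pixel 600 px from the gaze point on a 40 px/deg display lies 15 degrees
# into the periphery and is shaded at the intermediate level.
print(shading_level(angular_distance_deg(1600, 900, 1000, 900)))
```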

This approach improves computational performance and eases the gap between the limited capabilities of graphics processors and ever-increasing display resolutions. However, foveated rendering is limited by the speed and accuracy with which the next gaze fixation point can be predicted, because the movement of a human eye is a complex and largely random process.

To solve this issue, the team of researchers from MSU, together with a professor from RUDN University, developed a mathematical modeling method that calculates the next gaze fixation point in advance.

"One of the issues with foveated rendering is timely prediction of the next gaze fixation point because vision is a complex stochastic process. We suggested a mathematical model that predicts gaze fixation point changes," said Prof. Viktor Belyaev, a Ph.D. in Technical Sciences from the Department of Mechanics and Mechatronics of RUDN University.

The predictions of the model are based on the study of so-called saccadic movements (rapid, jump-like movements of the eye). They accompany the shifts of our gaze from one object to another and can suggest the next fixation point.

The relationship between the duration, amplitude, and peak velocity of saccadic eye movements follows certain empirical regularities. However, these empirical relationships alone are not accurate enough for an eye tracker to predict eye movements.
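These regularities are often summarized as the saccadic "main sequence." The sketch below uses typical textbook-style coefficients (not the parameters of the new model) to show how a saccade's duration and peak velocity scale with its amplitude.

```python
import math

def saccade_duration_ms(amplitude_deg):
    """Main-sequence approximation: duration grows roughly linearly with
    amplitude. Coefficients are typical literature values, not parameters
    from this study."""
    return 2.2 * amplitude_deg + 21.0

def saccade_peak_velocity_dps(amplitude_deg, v_max=500.0, c=14.0):
    """Saturating relation between amplitude and peak velocity, again with
    illustrative constants."""
    return v_max * (1.0 - math.exp(-amplitude_deg / c))

for amp in (2, 5, 10, 20):
    print(f"{amp:2d} deg saccade: ~{saccade_duration_ms(amp):.0f} ms, "
          f"peak ~{saccade_peak_velocity_dps(amp):.0f} deg/s")
```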

Therefore, the researchers developed a mathematical model that estimates the parameters of saccadic movements; these parameters are then used to calculate the foveated region of the image.
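The article does not disclose the model itself, but as a rough illustration of the idea, a predictor might extrapolate the gaze position over the remaining duration of a detected saccade and hand the estimated landing point to the renderer. All function names and numbers below are hypothetical.

```python
def predict_landing_point(gaze_deg, velocity_dps, elapsed_ms, total_ms):
    """Hypothetical predictor: once a saccade is detected, extrapolate the
    current gaze angle and velocity over the remaining saccade duration
    (e.g. from a main-sequence estimate) to guess the landing point."""
    remaining_s = max(total_ms - elapsed_ms, 0.0) / 1000.0
    # Velocity falls toward zero at saccade end, so use half of the current
    # velocity as a crude average over the remaining interval.
    return gaze_deg + 0.5 * velocity_dps * remaining_s


def foveated_center_px(landing_deg, pixels_per_degree=40.0, screen_center_px=1000):
    """Map the predicted landing angle to a pixel coordinate so the renderer
    can start shading the new foveated region before the eye arrives."""
    return screen_center_px + landing_deg * pixels_per_degree


predicted = predict_landing_point(gaze_deg=4.0, velocity_dps=300.0,
                                  elapsed_ms=20.0, total_ms=45.0)
print(f"Predicted landing: {predicted:.1f} deg -> pixel x = "
      f"{foveated_center_px(predicted):.0f}")
```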

The new method was tested experimentally using a VR helmet and AR glasses. The eye tracker based on the mathematical model was able to detect minor eye movements of 3.4 arc minutes (0.05 degrees), with an error of 6.7 arc minutes (0.11 degrees).

Moreover, the team managed to eliminate the calculation error caused by blinking: a filter included in the model reduced this error tenfold.
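The article does not describe the filter itself; a minimal placeholder along these lines would simply discard tracker samples flagged as invalid during a blink and hold the last reliable gaze position, so the foveated region does not jump while the eyelid occludes the pupil.

```python
def filter_blinks(samples, hold_last=True):
    """Simple blink-rejection sketch (not the filter from the paper):
    samples are (x, y, valid) tuples from an eye tracker, where valid is
    False while the pupil is occluded. Invalid samples are replaced by the
    last valid gaze position."""
    filtered = []
    last_valid = None
    for x, y, valid in samples:
        if valid:
            last_valid = (x, y)
            filtered.append((x, y))
        elif hold_last and last_valid is not None:
            filtered.append(last_valid)
        # Samples before the first valid reading are dropped.
    return filtered


raw = [(10.0, 5.0, True), (10.2, 5.1, True),
       (0.0, 0.0, False), (0.0, 0.0, False),  # blink: tracker loses the pupil
       (10.3, 5.0, True)]
print(filter_blinks(raw))
```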

The results of the work could be used in VR modeling, video games, and in medicine for surgery and the diagnosis of vision disorders.

"We have effectively solved the issue with the foveated rendering technology that existed in the mass production of VR systems. In the future, we plan to calibrate our eye tracker to reduce the impact of display or helmet movements against a user's head," added Prof. Viktor Belyaev from RUDN University.
