New Method Can Help AI Find Safer Options More Quickly

According to a new study, a novel approach to reasoning about uncertainty may help artificial intelligence (AI) identify safer options more quickly, for instance in autonomous cars.

The study, which will soon be published in the AAAI journal, involved scientists from Radboud University, the University of Texas at Austin, the University of California, Berkeley, and the Eindhoven University of Technology.

The scientists have developed a new approach to so-called “uncertain partially observable Markov decision processes,” or uPOMDPs for short. Put simply, these are models of real-world situations that capture how likely different events are to occur.
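To make the idea concrete, the toy sketch below (in Python, with invented states, actions, and probability intervals; not the authors’ actual models or tooling) illustrates how an uncertain POMDP differs from an ordinary one: instead of a single fixed transition probability, each transition only comes with a range of plausible values, and a cautious system can plan against the worst case in that range.

```python
# Toy illustration of an uncertain POMDP (uPOMDP) transition model.
# All states, actions, and numbers here are invented for illustration only.

# Ordinary POMDP: each (state, action) pair has one fixed probability
# distribution over next states.
pomdp_transitions = {
    ("approaching_crossing", "brake"): {"stopped_safely": 0.95, "in_crossing": 0.05},
}

# Uncertain POMDP: the probabilities themselves are only known to lie
# within intervals, reflecting ambiguity about the real world
# (sensor noise, unmodelled weather, unfamiliar situations, ...).
upomdp_transitions = {
    ("approaching_crossing", "brake"): {
        "stopped_safely": (0.85, 0.98),  # (lower bound, upper bound)
        "in_crossing": (0.02, 0.15),
    },
}

def worst_case_success(intervals, good_state):
    """Probability of reaching the good state under the least
    favourable resolution of the uncertainty."""
    return intervals[good_state][0]  # pessimistic lower bound

p = worst_case_success(
    upomdp_transitions[("approaching_crossing", "brake")], "stopped_safely"
)
print(f"Guaranteed (worst-case) probability of stopping safely: {p:.2f}")
```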

For instance, an autonomous car will encounter several unfamiliar scenarios when it begins to drive. To verify the artificial intelligence of autonomous cars, elaborate calculations are run to find out how the AI would handle different situations.

According to the investigators, their new method can make these modeling exercises more realistic, and thus enable AI to make safer and better decisions more quickly.

Making the Theoretical Real

Scientists already use POMDPs to model a wide range of situations. POMDPs allow them to calculate how aircraft and spacecraft can avoid collisions and to estimate the spread of an epidemic. They can even be used to monitor and protect endangered species.

We know that these models are very good at providing a realistic capture of the real world. However, the high amount of processing power needed to use them means their use in practical applications is often still limited. This new approach allows us to take all our calculations and theoretical information and use it in the real world on a more consistent, regular basis.

Nils Jansen, Study Main Author and Assistant Professor, Radboud University

Self-Driving Cars

The researchers’ breakthrough lies in incorporating the ambiguity of the real world into the models.

For example, current models might just tell you that there is an 80 percent chance that a drive in a self-driving car will be fully safe. It’s unclear what might happen in the other 20 percent, and what type of risk can be expected.

Nils Jansen, Study Main Author and Assistant Professor, Radboud University

Jansen added, “That is an unclear and vague approximation of risk. With this new approach, a system could give far more detailed explanations of what could go wrong and take those into account when making calculations. For users, this means they have more specific examples of what could go wrong, and make better and more adequate adjustments to avoid those specific risks.”

Other scientists had previously considered the approach the researchers take to these uPOMDPs, but only in specific thought experiments and limited situations.

However, for the first time we have been able to take these previous theoretical thought experiments into a practical and realistic approach. It was considered a unique, difficult problem, but thanks to an interdisciplinary approach we were able to make real breakthroughs.

Nils Jansen, Study Main Author and Assistant Professor, Radboud University
