Researchers Receive Grant to Make Artificial-Intelligence-Based Systems More Trustworthy

Eight computer science professors in Oregon State University’s College of Engineering have received a $6.5 million grant from the Defense Advanced Research Projects Agency to make artificial-intelligence-based systems like autonomous vehicles and robots more trustworthy.

The success of the deep neural networks branch of artificial intelligence has enabled significant advances in autonomous systems that can perceive, learn, decide and act on their own.

The problem is that the neural networks function as a black box. Instead of humans explicitly coding system behavior using traditional programming, in deep learning the computer program learns on its own from many examples. Potential dangers arise from depending on a system that not even the system developers fully understand.
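The contrast described above can be made concrete with a toy sketch (not from the article): a hand-coded rule is directly inspectable, while even the simplest learned model encodes its behavior in numeric weights whose meaning is not obvious to a human reader.

```python
# Minimal sketch contrasting explicit programming with learning from
# examples. The rule-based function can be read and audited line by
# line; the perceptron's behavior lives in opaque learned weights.

def rule_based(x):
    # Explicitly programmed: a human can see exactly why it decides.
    return 1 if x[0] + x[1] > 1.0 else 0

def train_perceptron(examples, epochs=20, lr=0.1):
    # Learned from labeled examples via the classic perceptron update.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Tiny, linearly separable training set: label 1 when inputs are "high".
examples = [([0.2, 0.1], 0), ([0.9, 0.8], 1),
            ([0.4, 0.3], 0), ([0.7, 0.9], 1)]
w, b = train_perceptron(examples)

def learned(x):
    # The trained classifier: correct on the data, but its "reasoning"
    # is just the numbers in w and b.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

A real deep network has millions of such weights instead of three, which is what makes its decisions so hard to audit.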

The four-year grant from DARPA will support the development of a paradigm to look inside that black box, by getting the program to explain to humans how decisions were reached.

“Ultimately, we want these explanations to be very natural – translating these deep network decisions into sentences and visualizations,” said Alan Fern, principal investigator for the grant and associate director of the College of Engineering’s recently established Collaborative Robotics and Intelligent Systems Institute.
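One simple way to turn a model's decision into a sentence, shown purely as an illustration and not as the project's actual method, is perturbation-based attribution: suppress each input feature in turn, measure how much the model's score moves, and report the most influential feature in plain language. The feature names and scoring function below are hypothetical stand-ins.

```python
# Illustrative sketch of explanation-by-perturbation: zero out each
# input feature, find the one whose removal shifts the model's score
# the most, and phrase the result as a natural-language sentence.

def explain(score_fn, x, feature_names):
    base = score_fn(x)
    impacts = []
    for i, name in enumerate(feature_names):
        perturbed = list(x)
        perturbed[i] = 0.0  # suppress this feature
        impacts.append((abs(base - score_fn(perturbed)), name))
    impact, name = max(impacts)
    return f"The decision relied most on '{name}' (score shift {impact:.2f})."

# Toy linear scorer standing in for a trained network; the feature
# names are invented for the sake of the example.
def score(x):
    return 0.8 * x[0] + 0.2 * x[1]

sentence = explain(score, [1.0, 1.0], ["enemy distance", "unit count"])
```

Real explainable-AI systems use far richer attribution and language-generation techniques, but the shape is the same: probe the model, then translate the result for a human.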

Developing such a system that communicates well with humans requires expertise in a number of research fields. In addition to having researchers in artificial intelligence and machine learning, the team includes experts in computer vision, human-computer interaction, natural language processing, and programming languages.

To begin developing the system, the researchers will use real-time strategy games, like StarCraft, to train artificial-intelligence “players” that will explain their decisions to humans. StarCraft is a staple of competitive electronic gaming.

Later stages of the project will move on to applications provided by DARPA that may include robotics and unmanned aerial vehicles.

Fern said the research is crucial to the advancement of autonomous and semi-autonomous intelligent systems.

“Nobody is going to use these emerging technologies for critical applications until we are able to build some level of trust, and having an explanation capability is one important way of building trust,” he said.

The researchers from Oregon State were selected by DARPA for funding under the highly competitive Explainable Artificial Intelligence program. Other major universities chosen include Carnegie Mellon, Georgia Tech, the Massachusetts Institute of Technology, Stanford, the University of Texas, and the University of California, Berkeley.
