Jun 15 2016
The latest advances in developing artificial systems capable of outplaying humans in a wide range of challenging games have their origins in neural networks, which were inspired by information processing in the brain.
On June 14, 2016, Trends in Cognitive Sciences published a review in which researchers from Stanford University and Google DeepMind provided an update on a theory initially developed to describe learning in humans and animals. The researchers highlighted the theory's significance as a framework for developing artificially intelligent agents.
The theory, initially published in 1995 (Psychol Rev., 102(3):419-57), holds that learning is the product of two complementary learning systems. The first system gradually acquires knowledge and skills through exposure to many experiences; the second stores specific individual experiences that can be replayed, allowing them to be efficiently integrated into the first system. The review builds on recent advances in neural network learning methods as well as on an earlier theory developed by David Marr, a British computational neuroscientist.
The evidence seems compelling that the brain has these two kinds of learning systems, and the complementary learning systems theory explains how they complement each other to provide a powerful solution to a key learning problem that faces the brain.
James McClelland, Professor of Psychology, Stanford University
The theory's first system, located in the brain's neocortex, was inspired by predecessors of today's deep neural networks. These networks contain many layers of neurons between input and output, and the knowledge they hold resides in their connections.
These connections are gradually adjusted through experience, improving the networks' ability to recognize objects, perceive speech, understand and produce language, and select adaptive actions in game-playing and other settings where intelligent behavior depends on acquired knowledge.
Such systems face a dilemma when new information must be learned: if the connections are changed rapidly to force in the new knowledge, the knowledge already stored in those connections can be radically altered or overwritten.
"That's where the complementary learning system comes in," McClelland says. This second system is placed in a structure known as the hippocampus in animals and various other mammals.
By initially storing information about the new experience in the hippocampus, we make it available for immediate use and we also keep it around so that it can be replayed back to the cortex, interleaving it with ongoing experience and stored information from other relevant experiences.
James McClelland, Professor of Psychology, Stanford University
This two-system setup thus permits both rapid learning and gradual integration into the structured knowledge representation in the neocortex.
"Components of the neural network architecture that succeeded in achieving human-level performance in a variety of computer games like Space Invaders and Breakout were inspired by complementary learning systems theory" says DeepMind cognitive neuroscientist Dharshan Kumaran, the first author of the Review. "As in the theory, these neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of game play and replays them in interleaved fashion. This greatly amplifies the use of actual game play experience and avoids the tendency for a particular local run of experience to dominate learning in the system."
Kumaran has worked closely with both McClelland and Demis Hassabis, a co-founder of DeepMind and a co-author on the review, on work that extended the role of the hippocampus as it was envisioned in the 1995 version of the complementary learning systems theory.
"In my view," says Hassabis, "the extended version of the complementary learning systems theory is likely to continue to provide a framework for future research, not only in neuroscience but also in the quest to develop Artificial General Intelligence, our goal at Google DeepMind."