
Machine Learning Algorithm Helps Apply AI in Mobile Applications

One of the biggest obstacles for the wide use of artificial intelligence (AI)—particularly in mobile applications—is the high level of energy consumed by the learning activities of artificial neural networks.

Machine Learning Algorithm Helps Apply AI in Mobile Applications" />
TU Graz computer scientists Robert Legenstein and Wolfgang Maass (from left) are working on energy-efficient AI systems and are inspired by the functioning of the human brain. Image Credit: © Lunghammer—Graz University of Technology.

One way to resolve this issue can be drawn from the human brain. Although the brain has the computing power of a supercomputer, it requires only about 20 W, roughly a millionth of the energy a supercomputer consumes.

One reason for this is the efficient transfer of information between the neurons in the brain. Neurons send short electrical impulses, or spikes, to other neurons, but to save energy they do so only as often as absolutely necessary.
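
To illustrate the principle, the sketch below shows a generic leaky integrate-and-fire neuron (not the model from the TU Graz study) that stays silent most of the time and emits a spike only when its accumulated input crosses a threshold; the parameter values are arbitrary.

```python
import numpy as np

def simulate_lif(input_current, tau=20.0, threshold=1.0, dt=1.0):
    """Return a binary spike train for one leaky integrate-and-fire neuron."""
    decay = np.exp(-dt / tau)            # membrane leak per time step
    v = 0.0                              # membrane potential
    spikes = np.zeros(len(input_current))
    for t, i_in in enumerate(input_current):
        v = decay * v + i_in             # leaky integration of the input
        if v >= threshold:               # fire only when the threshold is crossed
            spikes[t] = 1.0
            v = 0.0                      # reset after the spike
    return spikes

rng = np.random.default_rng(0)
out = simulate_lif(rng.uniform(0.0, 0.2, size=200))
print(f"{int(out.sum())} spikes in 200 time steps")   # sparse, event-based output
```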

Event-Based Information Processing

Under the guidance of the two computer scientists Wolfgang Maass and Robert Legenstein of the Graz University of Technology, a working group has applied this principle to develop a new machine-learning algorithm called e-propagation, or e-prop for short.

Scientists at the Institute of Theoretical Computer Science, which is also part of the European lighthouse project Human Brain Project, use spikes in their model for communication between the neurons of an artificial neural network. The spikes become active only when they are needed for information processing in the network.

Learning is a particular challenge for such less-active networks, because longer observations are needed to determine which connections between neurons improve the network's performance.

Previous methods either required too much memory or achieved only minimal learning success. e-prop solves this problem with a decentralized method copied from the brain, in which each neuron documents when its connections were used in a so-called eligibility trace, or e-trace.
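
As a rough sketch of this idea, the following code keeps a fading eligibility trace for a single synapse and combines it, step by step, with a learning signal to accumulate a weight update. The variable names and the simple trace rule are illustrative assumptions, not the published e-prop equations.

```python
import numpy as np

def local_update(pre_spikes, post_activity, learning_signals,
                 lr=1e-3, trace_decay=0.9):
    """Accumulate a weight change from purely local traces and a learning signal.

    All three arguments are 1-D arrays of length T (one value per time step).
    """
    e_trace = 0.0   # the synapse's own record of when it was used
    dw = 0.0        # accumulated weight change
    for t in range(len(pre_spikes)):
        # the trace fades over time and grows whenever the connection is active
        e_trace = trace_decay * e_trace + pre_spikes[t] * post_activity[t]
        # no global history is stored: the update at step t needs only the
        # current trace and the current learning signal
        dw += lr * learning_signals[t] * e_trace
    return dw

rng = np.random.default_rng(1)
T = 100
dw = local_update(rng.integers(0, 2, size=T).astype(float),
                  rng.random(T),
                  rng.standard_normal(T))
print(f"accumulated weight change: {dw:+.4f}")
```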

The method is roughly as powerful as the best and most elaborate known learning methods. The results were recently published in the scientific journal Nature Communications.

Online Instead of Offline

With many of the machine-learning methods in use today, all network activities are stored centrally and offline in order to trace, every few steps, how the connections were used during the calculations.

However, this requires a constant transfer of data between memory and processors, one of the main reasons for the excessive energy consumption of current AI implementations. e-prop, by contrast, works completely online and does not require separate memory even in real operation, which makes learning considerably more energy efficient.
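
A back-of-the-envelope comparison (with made-up numbers, not figures from the study) shows why this matters: storing the full activity history for offline learning grows with the length of the observed sequence, while an online rule keeps only a fixed set of running traces.

```python
def offline_memory_bytes(n_neurons, n_steps, bytes_per_value=4):
    # offline training replays the stored activity of every neuron at every step
    return n_neurons * n_steps * bytes_per_value

def online_memory_bytes(n_neurons, bytes_per_value=4):
    # an online rule keeps only the current trace per unit, independent of n_steps
    return n_neurons * bytes_per_value

print(offline_memory_bytes(1_000, 10_000))  # 40,000,000 bytes of stored history
print(online_memory_bytes(1_000))           # 4,000 bytes of running traces
```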

Driving Force for Neuromorphic Hardware

Maass and Legenstein expect that e-prop will drive the development of a new generation of mobile learning computing systems that no longer need to be programmed but learn according to the model of the human brain and thus adapt to constantly changing requirements.

The aim is to ensure that such computing systems no longer perform their learning energy-intensively and exclusively in a cloud, but instead integrate the greater part of their learning capability into energy-efficient mobile hardware components.

The first steps towards applying e-prop have already been taken. For example, the research group at the Graz University of Technology is working with the Advanced Processor Technologies Research Group (APT) of the University of Manchester within the Human Brain Project to integrate e-prop into the neuromorphic SpiNNaker system developed there.

The Graz University of Technology is also working with researchers at the semiconductor manufacturer Intel to integrate the algorithm into the next version of Intel's neuromorphic chip Loihi.

The study is anchored in the Fields of Expertise “Human and biotechnology” and “Information, Communication & Computing”, two of the five Fields of Expertise of the Graz University of Technology.

Journal Reference:

Bellec, G., et al. (2020) A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications. doi.org/10.1038/s41467-020-17236-y.
