Researchers at the University of Essex have developed a radical new framework for improving artificial intelligence (AI) in the coming years.
The Essex team hopes the research will serve as a foundation for the next generation of AI and machine learning advances. The study was published in the leading machine learning journal, the Journal of Machine Learning Research.
This could lead to advances in everything from driverless cars and smartphones that understand voice commands to more robust automated medical diagnosis and drug development.
The ultimate goal of artificial intelligence research is to produce completely autonomous, intelligent machines that we can converse with and that will do tasks for us. This newly published work accelerates our progress towards that goal.
Dr. Michael Fairbank, Study Co-Lead Author, School of Computer Science and Electronic Engineering, University of Essex
The latest remarkable advances in AI, in areas such as vision, voice recognition and translation software, have come from “deep learning”, which involves training multi-layered artificial neural networks to solve a task. However, training these deep neural networks is a computationally expensive challenge that requires a large number of training examples as well as considerable computation time.
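For a concrete picture of what that training involves, the sketch below shows the conventional “weight-space” approach in miniature: backpropagation computes a gradient for every connection strength, and every weight is nudged slightly on each of many passes over the training examples. The tiny two-layer tanh network, the toy regression data and the hyperparameters are illustrative assumptions for this article, not details taken from the Essex study.

```python
import numpy as np

# Conventional "weight-space" training in miniature: backpropagation computes
# a gradient for every connection strength, and every weight is nudged
# slightly on each of many passes over the training examples.
# The tiny two-layer tanh network, toy regression data and hyperparameters
# are illustrative assumptions, not details taken from the Essex study.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # 200 training examples, 4 inputs each
y = np.sin(X.sum(axis=1, keepdims=True))      # toy regression target

W1 = rng.normal(scale=0.5, size=(4, 8))       # connection strengths, input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 1))       # connection strengths, hidden -> output
lr = 0.05                                     # size of each small adjustment

for step in range(2000):
    h = np.tanh(X @ W1)                       # hidden-layer firing strengths
    pred = h @ W2                             # network output
    err = pred - y                            # gradient of squared error w.r.t. the output
    grad_W2 = h.T @ err / len(X)              # backpropagate to the output weights
    grad_h = (err @ W2.T) * (1 - h ** 2)      # ... and through the tanh nonlinearity
    grad_W1 = X.T @ grad_h / len(X)           # ... to the input weights
    W1 -= lr * grad_W1                        # small adjustment to every connection strength
    W2 -= lr * grad_W2

print("mean squared error after training:",
      float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2)))
```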
The Essex team, which includes Professor Luca Citi and Dr. Spyros Samothrakis, has devised a completely different approach to training deep neural networks.
Our new method, which we call Target Space, provides researchers with a step change in the way they can improve and build their AI creations. Target Space is a paradigm-changing view, which turns the training process of these deep neural networks on its head, ultimately enabling the current progress in AI developments to happen faster.
Dr. Michael Fairbank, Study Co-Lead Author, School of Computer Science and Electronic Engineering, University of Essex
The standard method for training neural networks is to repeatedly make small adjustments to the connection strengths between the neurons in the network. The Essex team has taken a different approach: instead of adjusting the strength of the connections between neurons, the new “target-space” technique adjusts the firing strengths of the neurons themselves.
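As a rough illustration of that contrast, the sketch below treats a table of target firing strengths for the hidden neurons as the learnable quantities: the connection weights are derived from those targets by a least-squares fit, and the gradient steps are taken on the targets rather than on the weights. This is a deliberately simplified reading of the idea described above, not the authors' published Target Space algorithm; the network size, toy data and update rule are assumptions made for the example.

```python
import numpy as np

# A deliberately simplified illustration of the "target space" idea, not the
# authors' published algorithm: the learnable quantities are target firing
# strengths for the hidden neurons (one row per training example), the
# connection weights are derived from those targets by a least-squares fit,
# and gradient steps are taken on the targets rather than on the weights.
# Network size, toy data and update rule are assumptions for this example.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # training inputs
y = np.sin(X.sum(axis=1, keepdims=True))      # toy regression target

T1 = rng.normal(scale=0.5, size=(200, 8))     # target pre-activations for the hidden layer
lr = 0.1

for step in range(500):
    W1, *_ = np.linalg.lstsq(X, T1, rcond=None)   # weights that best realize the targets
    h = np.tanh(X @ W1)                           # realized hidden firing strengths
    W2, *_ = np.linalg.lstsq(h, y, rcond=None)    # output weights fitted to the data
    err = h @ W2 - y
    # Approximate gradient of the loss w.r.t. the targets (treating the
    # realized pre-activations as equal to the targets), used to nudge the
    # firing-strength targets instead of the connection strengths.
    grad_T1 = (err @ W2.T) * (1 - h ** 2)
    T1 -= lr * grad_T1

print("mean squared error after training:", float(np.mean(err ** 2)))
```

In this reparameterization, pinning each layer's firing strengths to explicit targets limits how much a change in one part of the network can disturb what the later layers see, which connects to the “cascade untangling” the researchers describe below.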
This new method stabilizes the learning process considerably, through a process we call cascade untangling. This allows the neural networks being trained to be deeper, and therefore more capable, while potentially requiring fewer training examples and less computing resources. We hope this work will provide a backbone for the next generation of artificial intelligence and machine-learning breakthroughs.
Luca Citi, Professor, University of Essex
The method will be applied to a variety of new academic and industrial applications in the coming months.
Journal Reference:
Fairbank, M., et al. (2022) Deep Learning in Target Space. Journal of Machine Learning Research. Available at: https://jmlr.org/papers/v23/20-040.html.