The research offers a radical rethink of how to improve artificial intelligence.


Computer scientists at the University of Essex have developed a radically different approach to improving artificial intelligence (AI) in the future.

The work is published in a leading machine learning journal, the Journal of Machine Learning Research, and the Essex team hopes it will provide a backbone for the next generation of AI and machine learning breakthroughs.

This could lead to improvements in everything from driverless cars and smartphones with a better understanding of voice commands to improved automated medical diagnostics and drug discovery.

“Artificial intelligence research ultimately aims to produce fully autonomous and intelligent machines that we can talk to and that can do tasks for us, and this newly published work accelerates our progress in that direction,” said co-lead author Dr. Michael Fairbank from the Essex Faculty of Computer Science and Electrical Engineering.

The recent impressive breakthroughs in AI around visual tasks, speech recognition and language translation involve “deep learning”, which means that multi-layered artificial neural networks are trained to solve a task. However, training these deep neural networks is computationally intensive, requiring enormous numbers of training examples and large amounts of computation time.

The Essex team, which includes Professor Luca Citi and Dr. Spyros Samothrakis, has developed a radically different approach to training deep learning neural networks.

“Our new method, which we call Target Space, offers researchers a crucial step in the way they can improve and build their AI creations,” Dr. Fairbank added. “Target Space is a paradigm-shifting view that will turn the training process of these deep neural networks on its head, ultimately enabling current advances in AI development to happen faster.”

Normally, people train neural networks to improve performance by repeatedly making tiny adjustments to the connection strengths between the neurons in the network. The Essex team have taken a new approach. Instead of optimizing the connection strengths between neurons, the new “target-space” method proposes to optimize the firing strengths of the neurons themselves.
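The contrast between the two views can be sketched in a few lines of code. The snippet below is an illustrative toy, not the authors' published algorithm: it treats per-layer target firing values (here `T1`, `T2`, invented for the example) as the trainable parameters, and derives each layer's weights from those targets by least squares, rather than treating the weights themselves as the parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of inputs: 8 examples with 4 features each.
X = rng.normal(size=(8, 4))

# In ordinary "weight space", the trainable parameters are the weight
# matrices. In this target-space sketch, the trainable parameters are
# instead per-layer *target* firing strengths for the same batch.
T1 = rng.normal(size=(8, 5))   # desired pre-activations, hidden layer
T2 = rng.normal(size=(8, 3))   # desired pre-activations, output layer

def weights_from_targets(A_prev, T):
    """Solve (least squares) for the weight matrix that maps the
    previous layer's activations as close as possible to the targets."""
    W, *_ = np.linalg.lstsq(A_prev, T, rcond=None)
    return W

# Forward pass: the weights are *derived* from the targets at each layer.
W1 = weights_from_targets(X, T1)
A1 = np.tanh(X @ W1)
W2 = weights_from_targets(A1, T2)
out = A1 @ W2
print(out.shape)  # (8, 3)
```

In a full training loop the targets `T1` and `T2` would be the quantities adjusted to reduce the network's loss, with the weights re-solved from them, which is the sense in which optimization moves from weight space into target space.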

Professor Citi added: “This new method significantly stabilizes the learning process through a process we call cascade untangling. This allows the neural networks to be trained to be deeper and therefore more powerful, while potentially requiring fewer training examples and fewer computational resources. We hope this work will provide a backbone for the next generation of breakthroughs in artificial intelligence and machine learning.”

The next steps for the research are to apply the method to new academic and industrial applications.


Note to the editor

For further information please contact the University of Essex Communications Office by email at [email protected]


