New Approach Found for Energy-Efficient AI Applications

Researchers at TU Graz demonstrate a new design method for particularly energy-saving artificial neural networks that get by with extremely few signals and – similar to Morse code – also assign meaning to the pauses between the signals.


The algorithm will be implemented on brain-inspired computing systems, like the spike-based SpiNNaker (pictured here). SpiNNaker is part of the Human Brain Project's EBRAINS research infrastructure. © Forschungszentrum Jülich

Most recent achievements in artificial intelligence (AI) rely on very large neural networks. These consist of hundreds of millions of neurons arranged in several hundred layers, i.e. they have very "deep" network structures. Such large, deep neural networks consume a great deal of energy in the computer. Neural networks used in image classification (e.g. face and object recognition) are particularly energy-intensive, since in every computing cycle they have to transmit a great many numerical values with high precision from one layer of neurons to the next.
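As a rough back-of-the-envelope illustration of this communication cost, the following Python sketch counts the activation values a small feed-forward network passes between layers in a single forward pass. The layer sizes are made up for illustration and do not come from the study:

```python
# Back-of-the-envelope sketch: every forward pass, each layer pushes its
# full activation vector to the next layer at high numerical precision.
# The layer sizes below are hypothetical, chosen only for illustration.

layer_sizes = [1024, 512, 512, 256, 10]   # hypothetical small network
bits_per_value = 32                       # typical float32 activations

values_sent = sum(layer_sizes[1:])        # activations passed downstream
print(values_sent, "values,", values_sent * bits_per_value, "bits per pass")
# 1290 values, 41280 bits per pass; real networks have hundreds of layers
# with millions of units, and this repeats for every input image.
```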

Computer scientist Wolfgang Maass, together with his PhD student Christoph Stöckl, has now found a design method for artificial neural networks that paves the way for energy-efficient high-performance AI hardware (e.g. chips for driver assistance systems, smartphones and other mobile devices). The two researchers from the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) have optimized artificial neural networks in computer simulations for image classification in such a way that the neurons – similar to neurons in the brain – only need to send signals relatively rarely, and the signals they do send are very simple. The classification accuracy achieved with this design is nevertheless very close to the current state of the art in image classification.

Information processing in the human brain as a paradigm

Maass and Stöckl were inspired by the way the human brain works. It performs several trillion computing operations per second, yet requires only about 20 watts. This low energy consumption is made possible by inter-neuronal communication via very simple electrical impulses, so-called spikes. The information is encoded not only by the number of spikes, but also by their time-varying patterns. "You can think of it like Morse code. The pauses between the signals also transmit information," Maass explains.
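The Morse-code analogy can be made concrete with a minimal Python sketch in which the pauses between spikes carry the message directly. The interval scheme below is purely illustrative; it is not the coding used in the brain or in the paper:

```python
# Minimal sketch of interval coding: the gaps between spikes, not just
# their count, carry the information (the Morse-code analogy).

def encode_intervals(values, t0=0.0):
    """Emit spike times whose consecutive gaps equal the given values."""
    times, t = [], t0
    for v in values:
        t += v                 # the pause before the next spike is the value
        times.append(t)
    return times

def decode_intervals(times, t0=0.0):
    """Recover the values from the pauses between consecutive spikes."""
    values, prev = [], t0
    for t in times:
        values.append(t - prev)
        prev = t
    return values

message = [3.0, 1.0, 4.0, 1.0, 5.0]
spike_times = encode_intervals(message)
assert decode_intervals(spike_times) == message
print(spike_times)             # [3.0, 4.0, 8.0, 9.0, 14.0]
```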

Conversion method for trained artificial neural networks

That spike-based hardware can reduce the energy consumption of neural network applications is not new. However, until now this could not be realized for the very deep and large neural networks that are needed for really good image classification.

In the design method developed by Maass and Stöckl, the transmission of information depends not only on how many spikes a neuron sends out, but also on when it sends them. The timing of the spikes, i.e. the intervals between them, in effect encodes information itself and can therefore convey a great deal of additional information. "We show that with just a few spikes – an average of two in our simulations – as much information can be conveyed between processors as in more energy-intensive hardware," Maass said.
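A hedged sketch of this idea: below, a high-precision activation in [0, 1) is approximated by a handful of spikes whose time slots carry exponentially decreasing weights, so two well-placed spikes already capture most of the value. The slot weighting and greedy selection are illustrative assumptions, not the exact conversion method of Stöckl and Maass:

```python
# Illustrative sketch: approximate a high-precision activation with a
# few spikes, where the *time slot* of each spike determines its weight
# (slot k contributes 2**-(k+1)). The scheme is an assumption made for
# illustration, not the paper's exact conversion.

def to_spikes(activation, max_spikes=2, num_slots=8):
    """Greedily choose spike time slots whose weights approximate the value."""
    slots, remainder = [], activation
    for k in range(num_slots):
        weight = 2.0 ** -(k + 1)
        if weight <= remainder and len(slots) < max_spikes:
            slots.append(k)    # emit a spike in time slot k
            remainder -= weight
    return slots

def from_spikes(slots):
    """Decode by summing the weights of the occupied time slots."""
    return sum(2.0 ** -(k + 1) for k in slots)

x = 0.8125                     # a "high-precision" activation value
spikes = to_spikes(x)          # spikes land in slots 0 and 1
print(spikes, from_spikes(spikes))   # [0, 1] 0.75, close with just 2 spikes
```

With more allowed spikes or time slots the reconstruction error shrinks, which mirrors the reported result that an average of two well-timed spikes sufficed in the simulations.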

With their results, the two computer scientists from TU Graz provide a new approach for hardware that combines few spikes, and thus low energy consumption, with state-of-the-art performance in AI applications. The findings could dramatically accelerate the development of energy-efficient AI applications and are described in the journal Nature Machine Intelligence.

This research work is anchored in the Fields of Expertise "Human and Biotechnology" and "Information, Communication & Computing", two of the five Fields of Expertise of TU Graz. It was funded by the European Human Brain Project, which combines neuroscience, medicine and the development of brain-inspired technologies.

The researchers at the Institute of Theoretical Computer Science have also recently attracted attention with other research successes on a new learning algorithm and a biological programming language.

Publication in Nature Machine Intelligence

"Optimized spiking neurons can classify images with high accuracy through temporal coding with few spikes."
C. Stöckl and W. Maass. Nature Machine Intelligence (2021).
DOI: 10.1038/s42256-021-00311-4

Contacts

TU Graz | Institute of Theoretical Computer Science
Wolfgang MAASS
Em.Univ.-Prof. Dipl.-Ing. Dr.rer.nat.
Phone +43 316 873 5822
maass@igi.tugraz.at

Christoph STÖCKL
Dipl.-Ing. BSc
Phone +43 316 873 5847
stoeckl@tugraz.at