Efficient Deep Learning for Spiking Neural Networks

December 14, 2022

Bojian Yin defended his PhD thesis at the Department of Electrical Engineering on December 14th.


Running AI models on battery-powered mobile devices consumes a lot of energy, so more energy-efficient AI systems are needed. One approach is to build neural networks inspired by the biological neural networks of the human brain, which are known to be energy efficient. Doing so, however, requires spiking neural networks, which are difficult to evaluate mathematically. For his PhD research, Bojian Yin developed methods to overcome these mathematical difficulties while achieving efficient, scalable and high-performance spiking neural networks.

Over the past decade, AI has become a cornerstone of many daily applications such as image recognition and language processing. The main functionality of these applications is achieved through so-called deep artificial neural networks (ANNs).

Traditional ANNs use signals that are continuous and mathematically easy to process. In contrast, neurons in the brain communicate with each other only sparsely, using binary electrical pulses known as spikes.

A spike solution

To bring the energy efficiency of the brain to AI systems, it has been proposed to use spiking neural networks (SNNs), which communicate through sparse binary pulses. The main drawback of spike-based communication, however, is that the signals are discontinuous and therefore more difficult to handle mathematically.

For his PhD research, Bojian Yin worked on new ways to deal with this mathematical difficulty, paving the way towards efficient, scalable and high-performance spiking neural networks.
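
The press release does not spell out these methods, but a widely used remedy in this field is the surrogate gradient: keep the non-differentiable spike in the forward pass, but substitute a smooth approximation for its derivative during backpropagation. The sketch below illustrates the idea in PyTorch; the names SpikeFn and lif_step and the fast-sigmoid surrogate are illustrative choices for this example, not quotations from the thesis.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a surrogate gradient.

    The forward pass is the non-differentiable step function that
    emits a spike when the membrane potential crosses threshold.
    The backward pass replaces its zero/undefined derivative with
    a smooth surrogate so standard backpropagation still works.
    """

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative (one common choice).
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

def lif_step(x, v, decay=0.9, thresh=1.0):
    """One Euler step of a leaky integrate-and-fire (LIF) neuron."""
    v = decay * v + x                  # leaky integration of input
    spike = SpikeFn.apply(v - thresh)  # binary spike output
    v = v - spike * thresh             # soft reset after a spike
    return spike, v
```

With this trick, a network of spiking neurons can be trained end to end with the same optimizers used for conventional ANNs.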

Tunable spikes

In one study, Yin and his colleagues used traditional training methods to train shallow spiking neural networks with tunable spiking neuron parameters to identify heart defects, speech, and gestures in signals. The resulting networks were twenty to one thousand times more energy efficient on hardware implementations, such as neuromorphic chips, than traditional ANNs. This shows that SNNs, realized as neuromorphic computing, are a compelling proposition for edge AI, for example in wearable and mobile devices.
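
The release does not specify which neuron parameters were made tunable; a typical choice in this line of work is the membrane decay constant, learned per neuron alongside the synaptic weights. The sketch below, which reuses SpikeFn from the earlier example, is a hypothetical illustration of that idea, not code from the study.

```python
import torch
import torch.nn as nn

class TunableLIFLayer(nn.Module):
    """LIF layer whose per-neuron decay constants are trainable.

    Rather than fixing the membrane timescale by hand, each neuron
    learns its own decay alongside the weights, letting the network
    adapt to the timescales present in the input signals.
    """

    def __init__(self, n_in, n_out):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        # One learnable decay per neuron, squashed to (0, 1).
        self.decay_raw = nn.Parameter(torch.zeros(n_out))

    def forward(self, x_seq):
        # x_seq has shape (time, batch, n_in).
        decay = torch.sigmoid(self.decay_raw)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features)
        spikes = []
        for x_t in x_seq:                 # iterate over time
            v = decay * v + self.fc(x_t)  # leaky integration
            s = SpikeFn.apply(v - 1.0)    # spike at threshold 1.0
            v = v - s                     # soft reset
            spikes.append(s)
        return torch.stack(spikes)
```

The intuition is that learned timescales let the same architecture handle signals as different as heartbeats, speech, and gestures.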

Reducing training memory consumption

Yin and his colleagues also developed novel spiking neuron models and applied the latest research in efficient online learning to reduce the training memory consumption of SNNs, thereby allowing the precise training of deeper and more complex networks over longer sequences.
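
To see where the memory saving comes from: standard backpropagation through time must store the network state of every timestep before computing gradients, so memory grows with sequence length, whereas online learning carries a running sensitivity (an eligibility trace) forward and updates weights on the fly, keeping memory constant in sequence length. The toy single-neuron sketch below illustrates this general principle in the spirit of eligibility-trace methods such as e-prop; it is not the algorithm from the thesis, and it ignores the reset's effect on the trace, a common simplification.

```python
import torch

def online_lif_train_step(seq, w, decay=0.9, thresh=1.0, lr=1e-2):
    """Toy online weight update for a single LIF neuron.

    Only the current membrane potential v and the eligibility
    trace e (the running derivative of v w.r.t. the weights) are
    stored, so memory does not grow with sequence length.
    """
    v = torch.zeros(())        # membrane potential
    e = torch.zeros_like(w)    # eligibility trace, dv/dw
    for x_t, target_t in seq:  # seq yields (input, target) pairs
        v = decay * v + w @ x_t
        e = decay * e + x_t    # sensitivity propagated forward
        spike = (v > thresh).float()
        # Surrogate derivative of the spike w.r.t. v.
        surr = 1.0 / (1.0 + 10.0 * (v - thresh).abs()) ** 2
        # Toy local error times eligibility approximates the gradient.
        w = w - lr * (spike - target_t) * surr * e
        v = v - spike * thresh  # soft reset
    return w
```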

Moreover, Yin showed that this learning algorithm permits the optimization of networks composed of detailed, biologically plausible neuron models. Based on these advantages, Yin and the team trained the first large SNN based on the YOLO (You Only Look Once) architecture and applied it to a task more complex than simple classification: object detection.

Over the coming years, efficient SNNs will be deployed more widely on neuromorphic chips, greatly expanding the availability of artificial intelligence on wearable devices.

Title of PhD thesis: Efficient and accurate spiking neural networks. Supervisors: Sander Bohté (External), Henk Corporaal, and Federico Corradi. Other parties: CWI (Centrum Wiskunde & Informatica), Imec.

Media contact

Barry Fitzgerald
(Science Information Officer)
