New neural network training approach cuts energy use, time

Researchers at the University of California San Diego have developed a hardware/software co-design approach that could make neural network training more energy efficient and faster.
By Rich Pell

Their neuroinspired approach, say the researchers, could one day make it possible to train neural networks on low-power devices such as smartphones, laptops, and embedded devices. Currently, training neural networks to perform tasks like object recognition, autonomous navigation, or game playing requires large computers with hundreds to thousands of processors and weeks to months of training time.

The reason, say the researchers, is that such computations involve transferring data back and forth between two separate units, the memory and the processor, and this data movement consumes most of the energy and time during neural network training. To address the problem, the researchers teamed up with Adesto Technologies, a maker of ultralow-power embedded non-volatile memory, to develop hardware and algorithms that allow these computations to be performed directly in the memory unit, eliminating the need to repeatedly shuffle data.
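To picture what computing directly in the memory unit means, consider how a resistive memory array can carry out a vector-matrix multiply where the weights are stored: weights are encoded as conductances, inputs as voltages, and the summed column currents give the result. The NumPy sketch below illustrates only that general idea; the array size, value ranges, and names are assumptions, not details of the UC San Diego/Adesto design.

```python
import numpy as np

# Illustrative only: a tiny model of a resistive memory crossbar that
# performs a vector-matrix product "in memory"; the weights never leave
# the array (sizes and ranges below are assumptions, not device specs).
rng = np.random.default_rng(0)

n_inputs, n_outputs = 784, 10                      # MNIST-sized layer (assumed)
G = rng.uniform(0.0, 1.0, (n_inputs, n_outputs))   # conductances stand in for weights
v = rng.uniform(0.0, 1.0, n_inputs)                # input voltages stand in for activations

# Each output column current is the sum of voltage * conductance down that
# column (Kirchhoff's current law), i.e. a dot product computed where the
# data lives, with no shuttling of weights to a separate processor.
I = v @ G
print(I.shape, I[:3].round(2))
```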

“We are tackling this problem from two ends — the device and the algorithms — to maximize energy efficiency during neural network training,” says Yuhan Shi, an electrical engineering Ph.D. student at UC San Diego and first author of a paper describing the research.

The hardware component of the approach consists of a super energy-efficient type of non-volatile memory technology — a 512-kilobit subquantum Conductive Bridging RAM (CBRAM) array. Based on Adesto’s CBRAM memory technology, it consumes 10 to 100 times less energy than today’s leading memory technologies.

However, instead of using it as a digital storage device that only has ‘0’ and ‘1’ states, the researchers demonstrated that it can be programmed to have multiple analog states to emulate biological synapses in the human brain. As a result, say the researchers, the so-called “synaptic device” can be used to do in-memory computing for neural network training.
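As a loose illustration of what multiple analog states make possible, the sketch below quantizes continuous synaptic weights onto a handful of conductance levels, the way a multi-level cell can hold more than a plain '0' or '1'. The number of levels and the weight range are assumptions for the example, not the actual specification of the subquantum CBRAM device.

```python
import numpy as np

# Illustrative only: snap continuous synaptic weights onto a small set of
# analog conductance levels, emulating a multi-level memory cell. The
# level count and range are assumptions, not device parameters.
def to_analog_levels(weights, n_levels=8, w_min=0.0, w_max=1.0):
    levels = np.linspace(w_min, w_max, n_levels)               # assumed device levels
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)  # nearest level per weight
    return levels[idx]

w = np.random.default_rng(1).uniform(0.0, 1.0, 5)
print("continuous weights:", w.round(3))
print("on-device levels  :", to_analog_levels(w).round(3))
```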

“On-chip memory in conventional processors is very limited, so they don’t have enough capacity to perform both computing and storage on the same chip,” says Duygu Kuzum, a professor of electrical and computer engineering at the Jacobs School of Engineering at UC San Diego and senior author of the paper. “But in this approach, we have a high-capacity memory array that can do computation related to neural network training in the memory without data transfer to an external processor. This will enable a lot of performance gains and reduce energy consumption during training.”

Kuzum, who is also affiliated with the Center for Machine-Integrated Computing and Security at UC San Diego, led efforts to develop algorithms that could be easily mapped onto the synaptic device array. The algorithms, says Kuzum, provided even more energy and time savings during neural network training.

The researchers’ approach uses an energy-efficient type of neural network, called a spiking neural network, to implement unsupervised learning in the hardware. In addition, say the researchers, they applied another energy-saving algorithm they developed, called “soft-pruning,” which makes neural network training much more energy efficient without sacrificing much accuracy.
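The paper defines its own unsupervised learning rule and soft-pruning procedure; purely as an illustration of the two ideas, the sketch below applies a simplified STDP-style update (strengthen synapses whose input and output neurons fire together) and then softly attenuates, rather than deletes, the weakest synapses. All thresholds, learning rates, and the pruning rule itself are assumptions for the sketch, not the authors' algorithm.

```python
import numpy as np

# Illustrative only: a toy unsupervised, STDP-flavored weight update plus a
# "soft" pruning step that scales down weak synapses instead of removing
# them. Every constant here is an assumption, not a value from the paper.
rng = np.random.default_rng(2)
n_in, n_out = 100, 10
W = rng.uniform(0.0, 1.0, (n_in, n_out))            # synaptic weights (assumed range)

def stdp_like_update(W, pre, post, lr=0.01):
    # Strengthen synapses where input and output neurons both fired;
    # mildly weaken synapses to firing outputs whose inputs stayed silent.
    potentiate = np.outer(pre, post)
    depress = np.outer(1.0 - pre, post)
    return np.clip(W + lr * potentiate - 0.5 * lr * depress, 0.0, 1.0)

def soft_prune(W, fraction=0.2, scale=0.1):
    # "Soft" pruning: attenuate the weakest synapses rather than zeroing them.
    cutoff = np.quantile(W, fraction)
    return np.where(W < cutoff, W * scale, W)

pre = (rng.random(n_in) < 0.3).astype(float)        # toy input spike pattern
post = (pre @ W > 0.5 * pre.sum()).astype(float)    # toy output firing decision
W = soft_prune(stdp_like_update(W, pre, post))
print("share of synapses left near full strength:", round(float((W > 0.1).mean()), 2))
```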

The researchers implemented the neuroinspired unsupervised spiking neural network and the soft-pruning algorithm on the subquantum CBRAM synaptic device array, and then trained the network to classify handwritten digits from the MNIST database. In tests, say the researchers, the network classified digits with 93% accuracy.

In terms of energy savings, the researchers estimate that their neuroinspired hardware/software co-design approach can eventually cut energy use during neural network training by two to three orders of magnitude compared to the state of the art.

“If we benchmark the new hardware to other similar memory technologies,” says Kuzum, “we estimate our device can cut energy consumption 10 to 100 times, then our algorithm co-design cuts that by another 10. Overall, we can expect a gain of a hundred to a thousand fold in terms of energy consumption following our approach.”

Looking ahead, the researchers plan to work with memory technology companies to advance their work to the next stages. Their ultimate goal, they say, is to develop a complete system in which neural networks can be trained in memory to do more complex tasks with very low power and time budgets.

For more, see “Neuroinspired unsupervised learning and pruning with subquantum CBRAM arrays.”

Related articles:
Resistive RAM EEPROM improves IoT security
Spiking neural network SoC launched
Accelerating development of 3rd-gen neural networks
Convolutional neural networks: What’s next?
