AI edge processor for low-power deep learning

March 17, 2020 // By Julien Happich
Socionext announced it has developed a prototype chip that incorporates newly-developed quantized Deep Neural Network (DNN) technology, enabling highly-advanced AI processing for small and low-power edge computing devices.

The prototype is part of a research project on “Updatable and Low Power AI-Edge LSI Technology Development” commissioned by the New Energy and Industrial Technology Development Organization (NEDO) of Japan.

Today’s edge computing devices are based on conventional, general-purpose GPUs. These processors struggle to keep up with the growing demand for AI workloads such as image recognition and analysis: meeting that demand requires larger, more expensive devices because of increased power consumption and heat generation, which makes them poorly suited to state-of-the-art AI processing at the edge.

In their place, Socionext has developed a proprietary architecture based on "quantized DNN technology", which reduces the number of bits used for parameters and activations in deep learning. The result is improved AI processing performance at lower power consumption. The architecture supports 1-bit (binary) and 2-bit (ternary) precision in addition to the conventional 8-bit, together with the company’s own parameter compression technology, enabling a large amount of computation with fewer resources and significantly smaller amounts of data.
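Socionext has not published the details of its quantization scheme, so the snippet below is only a generic NumPy illustration of what 1-bit (binary) and 2-bit (ternary) weight quantization look like in practice; the `binarize` and `ternarize` helpers and their thresholds are assumptions for illustration, not the company's method.

```python
# Generic sketch of binary (1-bit) and ternary (2-bit) weight quantization,
# as commonly used in quantized DNNs -- not Socionext's proprietary algorithm.
import numpy as np

def binarize(w):
    """1-bit quantization: keep only the sign, scaled by the mean magnitude."""
    alpha = np.abs(w).mean()            # per-tensor scaling factor
    return alpha * np.sign(w)

def ternarize(w, t=0.7):
    """2-bit (ternary) quantization: each weight becomes -alpha, 0, or +alpha."""
    delta = t * np.abs(w).mean()        # threshold below which weights drop to 0
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

w = np.random.randn(4, 4).astype(np.float32)   # example full-precision weights
print(binarize(w))    # entries are +/- alpha  -> 1 bit of information each
print(ternarize(w))   # entries are -alpha, 0, or +alpha -> 2 bits each
```

Reducing weights and activations to one or two bits in this way is what allows multiply-accumulate operations to be replaced by much cheaper logic, which is the source of the power and area savings the company describes.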

In addition, Socionext says it has developed a novel on-chip memory technology that delivers data highly efficiently, reducing the need for the large-capacity on-chip or external memory typically required for deep learning.

These new technologies were integrated in the prototype AI chip, which is reported to perform object detection with “YOLO v3” at 30fps while consuming less than 5W of power. The company claims this is 10 times more efficient than conventional, general-purpose GPUs. The chip is also equipped with a high-performance, low-power Arm Cortex-A53 quad-core CPU, and unlike typical “accelerator” chips it can run the entire AI processing workload without external processors.
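Taking the quoted figures at face value, the headline claim can be restated as a rough inferences-per-joule number; the short Python sketch below only reproduces that arithmetic, with the 10x GPU comparison taken directly from the company's claim rather than from any measured baseline.

```python
# Back-of-the-envelope efficiency from the quoted figures:
# YOLO v3 detection at 30 fps while drawing under 5 W.
fps = 30          # frames processed per second
power_w = 5.0     # upper bound on power draw, in watts

frames_per_joule = fps / power_w                # 6 inferences per joule on the prototype
gpu_frames_per_joule = frames_per_joule / 10    # implied GPU baseline, per the 10x claim

print(frames_per_joule)      # 6.0
print(gpu_frames_per_joule)  # 0.6
```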
