“Neural network and embedded software designers are seeking practical ways to make developing machine learning for edge applications less frustrating and time-consuming,” said Ted Tewksbury, CEO of Eta Compute. “With TENSAI Flow, Eta Compute addresses every aspect of designing and building a machine learning application for IoT and low power edge devices. Now, designers can optimize neural networks by reducing memory size, the number of operations, and power consumption, and embedded software designers can reduce the complexities of adding AI to embedded edge devices, saving months of development time.”
The TENSAI Flow software quickly confirms a project’s feasibility and provides proof of concept. It features a neural network compiler, a neural network zoo, and middleware that includes FreeRTOS, a hardware abstraction layer (HAL), sensor frameworks, and IoT/cloud enablement.
“In order to best unlock the benefits of TinyML, we need highly optimized hardware and algorithms. Eta Compute’s TENSAI provides an ideal combination of highly efficient ML hardware coupled with an optimized neural network compiler,” said Zach Shelby, CEO of Edge Impulse. “Together with Edge Impulse and the TENSAI Sensor Board, this is the best possible solution to achieve extremely low-power ML applications.”
TENSAI Flow’s exclusive neural network compiler optimizes neural networks for Eta Compute’s device while minimizing power consumption. The middleware simplifies dual-core programming by eliminating the need to write custom code to take full advantage of the DSP. The Neural Network Zoo speeds development with ready-to-use networks for the most common applications, including motion, image, and sound classification, so developers need only train the networks with their own data. The insight gained from TENSAI Flow’s real-world applications lets developers see the potential of neural sensor processors in terms of energy efficiency and performance across a variety of field-tested examples.
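To make the described workflow concrete, the sketch below mimics the "take a ready-made network, train it on your own data" step in miniature. TENSAI Flow's actual APIs are not shown in this release, so everything here is illustrative: a single-layer logistic classifier in pure Python stands in for a small zoo network, and the feature vectors stand in for motion-classification data.

```python
# Illustrative sketch only -- not the TENSAI Flow API. A toy single-layer
# logistic classifier stands in for a small, ready-made zoo network that a
# developer retrains on their own labeled sensor data.
import math

def train(samples, labels, epochs=200, lr=0.5):
    """Fit weights w and bias b with plain stochastic gradient descent."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))    # sigmoid activation
            err = p - y                       # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a feature vector with the trained parameters."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# "Your own data": hypothetical motion features, e.g. (mean accel, variance),
# labeled 0 = stationary, 1 = moving.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
w, b = train(X, y)
preds = [predict(w, b, x) for x in X]
```

In a real deployment the trained parameters would then be passed through the neural network compiler, which handles the device-specific optimization described above.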
In a comparison with direct implementation on a competitive device of the same CIFAR10 neural