Part of Microchip's Smart Embedded Vision initiative, the VectorBlox Accelerator SDK enables power-efficient inferencing in edge applications by giving software developers an easier route to implementing algorithms on the company's PolarFire FPGAs. The kit lets developers create low-power, flexible neural network applications on these devices without learning the FPGA tool flow; instead, they can implement power-efficient neural networks in C/C++.
The new toolkit can execute models in TensorFlow and in the Open Neural Network Exchange (ONNX) format; ONNX supports frameworks including Caffe2, MXNet, PyTorch, and MATLAB. The VectorBlox Accelerator SDK runs on Linux and Windows, and it includes a bit-accurate simulator so developers can validate hardware accuracy from within the software environment. The neural network IP included with the kit allows different network models to be loaded at run time.
PolarFire FPGAs can deliver up to 50 percent lower total power than competing devices while offering 25 percent higher-capacity math blocks capable of up to 1.5 TOPS. FPGAs also offer customization and differentiation through their inherent upgradability, and they can host many functions on a single chip.