Microsoft previews real-time AI platform for Azure

May 16, 2018 // By Rich Pell
Microsoft (Redmond, WA) has announced the preview launch of its hardware-based Project Brainwave platform for running deep learning models in the Azure cloud and on the edge in real time.

First unveiled last year, Project Brainwave is a hardware architecture designed to accelerate real-time AI calculations. It comprises a high-performance distributed system architecture, a hardware deep neural network (DNN) engine synthesized onto Intel field-programmable gate arrays (FPGAs), and a software stack for low-friction deployment of trained models.

The just-announced preview of Project Brainwave is integrated with Azure Machine Learning, which, the company says, will make Azure the most efficient cloud computing platform for AI. It also marks the start of Microsoft's efforts to bring the power of FPGAs to customers for a variety of purposes.

"I think this is a first step in making the FPGAs more of a general-purpose platform for customers," says Mark Russinovich, chief technical officer for Microsoft's Azure cloud computing platform.

The Project Brainwave preview gives customers the ability to do ultra-fast image recognition, and lets users perform AI computations in real time, serving each request as it arrives instead of grouping requests into batches and processing them together. It currently works with TensorFlow, a popular open-source machine learning framework, and the company is working on support for the Microsoft Cognitive Toolkit.
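The difference between real-time and batched inference can be sketched in a few lines. The snippet below is purely illustrative: the tiny matrix "model", the `infer` function, and the request handling are invented for this example and have nothing to do with the actual Brainwave hardware or the Azure Machine Learning API.

```python
import numpy as np

# Illustrative only: a tiny dense layer standing in for a trained DNN.
# This does not reflect any Brainwave or Azure ML interface.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))

def infer(batch):
    """Run the toy model on a batch of inputs with shape (batch_size, 4)."""
    return batch @ W

requests = [rng.standard_normal(4) for _ in range(8)]

# Batched inference: requests are queued until a batch fills up,
# so the first request's latency includes waiting for the last.
batched_out = infer(np.stack(requests))

# Real-time inference: each request is served immediately at batch
# size 1, minimizing per-request latency at some cost in throughput.
realtime_out = np.stack([infer(r[None, :])[0] for r in requests])

# Both paths produce the same numbers; only the latency profile differs.
assert np.allclose(batched_out, realtime_out)
```

The trade-off this sketch highlights is the one Brainwave targets: hardware that is fast enough at batch size 1 avoids the batching delay without sacrificing utilization.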

Microsoft also announced a limited preview that brings Project Brainwave to the edge, enabling customers to take advantage of its computing speed in their own businesses and facilities, even if their systems aren't connected to a network or the Internet.

"We're making real-time AI available to customers both on the cloud and on the edge," says Doug Burger, a distinguished engineer at Microsoft who leads the group that has pioneered the idea of using FPGAs for AI work.

Project Brainwave is well suited to the demands of AI computing, says Burger. Its hardware design can evolve rapidly and be remapped onto the FPGA after each improvement, keeping pace with new discoveries and with the requirements of rapidly changing AI algorithms. FPGAs can also be quickly reprogrammed to respond to new advances.