

Introduced late last year, SageMaker Neo lets developers train machine learning models once and run them anywhere, in the cloud and at the edge. Now Amazon is releasing the code for SageMaker Neo as the open source Neo-AI project under the Apache Software License.

At its core, says the company, Neo-AI is a machine learning compiler and a runtime built on decades of research on traditional compiler technologies. Its release, says the company, enables processor vendors, device makers, and deep learning developers to rapidly bring new and independent innovations in machine learning to a wide variety of hardware platforms.

“Ordinarily, optimizing a machine learning model for multiple hardware platforms is difficult because developers need to tune models manually for each platform’s hardware and software configuration,” says the company in a post announcing the Neo-AI project. “This is especially challenging for edge devices, which tend to be constrained in compute power and storage. These constraints limit the size and complexity of the models that they can run.”

Differences in software add further complication. If the software on a device isn't the same version as the one the model was built for, the model will be incompatible with the device, adding to the difficulty of quickly building, scaling, and maintaining machine learning applications.

“Neo-AI eliminates the time and effort needed to tune machine learning models for deployment on multiple platforms by automatically optimizing TensorFlow, MXNet, PyTorch, ONNX, and XGBoost models to perform at up to twice the speed of the original model with no loss in accuracy,” says the company. “Additionally, it converts models into an efficient common format to eliminate software compatibility problems. On the target platform, a compact runtime uses a small fraction of the resources that a framework would typically consume.”

Neo-AI allows sophisticated models to run on resource-constrained devices, enabling applications in areas such as autonomous vehicles, home security, and anomaly detection. Neo-AI currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm to follow.

By working with the Neo-AI project, says the company, processor vendors can quickly integrate their custom code into the compiler at the point at which it has the greatest effect on improving model performance. The project also enables device makers to customize the Neo-AI runtime for the particular software and hardware configuration of their devices.

The Neo-AI runtime is currently deployed on devices from ADLINK, Lenovo, Leopard Imaging, Panasonic, and others. The Neo-AI project, says the company, will absorb innovations from diverse sources into a common compiler and runtime for machine learning to deliver the best available performance for models.

