AI-on-modules speed creation of edge systems

February 07, 2020 //By Ally Winning
ADLINK Technology will launch a new range of modules for artificial intelligence at the edge, allowing the quick creation of Edge AI systems.

The AI-on-Module (AIoM) products were created in collaboration with NVIDIA and Intel. The series offers a hardware optimization strategy to help developers address performance and SWaP (size, weight, and power) requirements.

The modules support heterogeneous computing through the integration of one or more types of processing cores.

ADLINK’s AIoM offering includes:

  • Mobile PCI Express Module (MXM) GPU modules: The MXM GPU modules feature NVIDIA Quadro Embedded GPUs based on the Turing and Pascal architectures.

  • VPU-accelerated SMARC modules: Vizi-AI and Neuron Pi, equipped with the Intel Movidius Myriad X VPU, enable developers to accelerate prototyping. Commercially available options include tight version control and longevity support.

  • VPU-accelerated COM Express modules: High-performance modules that allow AI capabilities to be integrated quickly.

Additional form factors include PC/104, VPX, CompactPCI and XMC. Standards like USB3 Vision and GigE Vision are also supported.

The company will show the range at embedded world in a series of demonstrations, including an access control system powered by ADLINK’s MXM GPU module and DLAP-3000-CFL platform, an inspection robot based on the NVIDIA Jetson TX2, and a Vizi-AI development kit that simplifies scaling to other ADLINK AI products.

The integrated hardware/software approach provides flexibility: developers can start on the low-cost Vizi-AI and choose a target processor (e.g. CPU, GPU, VPU, TPU, or NPU) at deployment time.
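The deploy-time processor choice described above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern (application code stays fixed while the target device is selected via configuration); the function and backend names are assumptions for illustration, not ADLINK or Intel APIs.

```python
# Hypothetical sketch: the same application code runs unchanged, and the
# target processor ("CPU", "GPU", "VPU", ...) is picked by a config string
# at deployment time. Backend names here are placeholders, not real APIs.

def load_inference_backend(device: str):
    """Return an inference callable for the requested device (illustrative)."""
    backends = {
        "CPU": lambda data: f"CPU inference on {len(data)} samples",
        "GPU": lambda data: f"GPU inference on {len(data)} samples",
        "VPU": lambda data: f"VPU inference on {len(data)} samples",
    }
    try:
        return backends[device]
    except KeyError:
        raise ValueError(f"Unsupported device: {device}") from None

# Only the device string changes between prototype and deployment.
infer = load_inference_backend("VPU")
print(infer([1, 2, 3]))  # → VPU inference on 3 samples
```

In practice, toolchains such as Intel's OpenVINO expose a similar device-name parameter when compiling a model, which is what makes this prototype-then-retarget workflow possible.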

www.adlink.com

