Ayar Labs, Nvidia team on next-gen AI compute architectures

Business news
Chip-to-chip optical connectivity specialist Ayar Labs has announced a collaboration with AI hardware and software developer Nvidia to develop a "groundbreaking" AI infrastructure based on optical I/O technology, aimed at meeting the future demands of AI and high performance computing (HPC) workloads.
By Rich Pell

The collaboration will focus on integrating Ayar Labs’ technology to develop scale-out architectures enabled by high-bandwidth, low-latency and ultra-low-power optical-based interconnects for future Nvidia products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions, say the companies. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

“Today’s state-of-the-art AI/ML training architectures are limited by current copper-based compute-to-compute interconnects to build scale-out systems for tomorrow’s requirements,” says Charles Wuischpard, CEO of Ayar Labs. “Our work with Nvidia to develop next-generation solutions based on optical I/O provides the foundation for the next leap in AI capabilities to address the world’s most sophisticated problems.”

Rob Ober, Chief Platform Architect for Data Center Products at Nvidia, adds, “Over the past decade, Nvidia-accelerated computing has delivered a million-X speedup in AI. The next million-X will require new, advanced technologies like optical I/O to support the bandwidth, power and scale requirements of future AI and ML workloads and system architectures.”

As AI model sizes continue to grow, Nvidia believes that by 2023 models will have 100 trillion or more connections – a 600X increase from 2021 – exceeding the technical capabilities of existing platforms. Traditional electrical interconnects will reach their bandwidth limits, driving lower application performance, higher latency and increased power consumption.

New interconnect solutions and system architectures are needed to address the scale, performance and power demands of the next generation of AI. Ayar Labs’ collaboration with Nvidia is focused on addressing these future challenges by developing next-generation architectures with optical I/O.
