The new CrossLink-NX-17 FPGA, with 17K logic cells, is the second device in the company's CrossLink-NX family of embedded vision and processing FPGAs. The company says the family is aimed at helping developers meet the growing demand for embedded and smart vision applications such as video signal bridging, aggregation and splitting, image processing, and the AI/ML inferencing that smart vision systems run at the edge.
"Lattice is a leading provider of innovative, low power solutions for smart and embedded vision applications," says Peiju Chiang, Product Marketing Manager at Lattice. "With the CrossLink-NX-17, Lattice gives developers one more hardware power and performance option to choose from as they design their vision systems. Our award-winning mVision solutions stack can further accelerate and simplify vision system development by providing modular hardware development boards featuring Lattice FPGAs like the CrossLink-NX, our Radiant 2.1 design software, embedded vision IP, and reference designs needed to implement popular embedded vision applications."
Key features of the CrossLink-NX-17 include:
- Low power - built on the Lattice Nexus FPGA platform, CrossLink-NX provides up to a 75 percent reduction in power consumption compared to competing FPGAs of a similar class.
- High reliability - CrossLink-NX has a Soft Error Rate (SER) up to 100 times lower than similar FPGAs in its class, making it a compelling solution for mission-critical applications that must operate safely and reliably. The initial CrossLink-NX device is designed to support ruggedized environments found in outdoor, industrial, and automotive applications.
- Performance - CrossLink-NX-17 delivers enhanced performance enabled by three key elements:
  - Fast I/O support - CrossLink-NX-17 FPGAs are well suited for embedded vision applications thanks to support for multiple fast I/Os, including MIPI.
  - Instant-on performance - to better support applications where a long system boot time is unacceptable, such as industrial motor control, CrossLink-NX-17 enables ultra-fast I/O configuration in 3 ms and total device configuration in less than 10 ms.
  - High memory-to-logic ratio - to efficiently power AI inferencing in edge devices, CrossLink-NX-17 pairs its logic cells with a high ratio of embedded memory.