As an engineer, you’re familiar with the promise of artificial intelligence. You use it on your phone for voice commands and help from Siri or Google Assistant. You use it on your computer for predictive typing. You read about its promise every day. In fact, you see the potential to leverage AI, and one of its key applications, machine learning, to tap vast amounts of Internet of Things (IoT) endpoint data, fundamentally transform businesses, and deliver Industry 4.0. But you’ve been stymied trying to port those kinds of capabilities into your IoT and embedded systems.
It can be mystifying to figure out which processors are most efficient for your design; the toolchain is fragmented, and you wonder whether your team has the time, frankly, to learn the new programming skills required when time-to-market pressures are greater than ever.
But you’ve also seen this movie before. Any sufficiently promising technology trend eventually converges on standard approaches and an ecosystem that paves the way for new innovation. The same is true of AI and IoT, or, as the combination is increasingly known, AIoT. For AIoT to scale to the trillions of predicted endpoint devices, and for companies to take advantage of the enhanced insights and experiences offered by advanced AI, small IoT devices must become more capable of on-device processing. In other words, we need to shift compute closer to the source of data, along the compute continuum from cloud to endpoint, to what we call endpoint AI.