The free whitepaper, "The Opportunity for AI at the Edge and Beyond," explains the trade-offs between centralized cloud AI and extreme-edge IoT endpoint processing in an easy-to-understand format. It looks at how two significant trends, and a host of related issues, are beginning to drive AI and ML towards the edge of the network and, in many cases, onto the endpoint devices themselves.
The first trend is that the number of IoT nodes is increasing dramatically each year; the second trend is that the amount of data being generated by each device is also increasing significantly. Moving AI to the edge of the network, says the company, is highly desirable in many cases as it often improves the following application characteristics:
- Autonomy: Independent insight and control are much easier if decisions are made locally
- Reliability: Dependency on cloud connections can be reduced or eliminated
- Security/Privacy: Risk of raw data interception is significantly lowered
- Efficiency: Amount of data to be transmitted across the network is vastly reduced
- Responsiveness/Latency: No waiting while data is transmitted to cloud and back
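The "Efficiency" point above can be made concrete with some back-of-the-envelope arithmetic. The sketch below compares streaming raw sensor data to the cloud against transmitting only locally computed inference results; all sample rates, message sizes, and intervals are illustrative assumptions, not figures from the whitepaper.

```python
# Illustrative comparison: raw data streaming vs. edge inference.
# Every figure here is an assumption chosen for the example.

SAMPLE_RATE_HZ = 1000        # assumed vibration-sensor sample rate
BYTES_PER_SAMPLE = 2         # assumed 16-bit samples
SECONDS_PER_DAY = 86_400

# Raw streaming: every sample crosses the network.
raw_bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY

# Edge inference: the device sends one 16-byte status message per minute.
RESULT_BYTES = 16
RESULTS_PER_DAY = 24 * 60
edge_bytes_per_day = RESULT_BYTES * RESULTS_PER_DAY

reduction = raw_bytes_per_day / edge_bytes_per_day
print(f"Raw streaming:    {raw_bytes_per_day / 1e6:.1f} MB/day")
print(f"Edge inference:   {edge_bytes_per_day / 1e3:.1f} kB/day")
print(f"Reduction factor: ~{reduction:,.0f}x")
```

Under these assumptions, moving inference onto the device cuts daily network traffic from roughly 173 MB to about 23 kB, a reduction of several thousandfold, which also underpins the autonomy, reliability, and latency benefits listed above.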
Implementing intelligence at the extreme edge or on the IoT device itself brings its own set of challenges, says the company. With processing and memory resources many orders of magnitude smaller than those available in the data center, computational approaches popular in cloud AI are not well suited to most edge applications. Adapting cloud AI to run efficiently and fit on edge devices often involves hand-coding, which is time- and resource-intensive and often impractical.
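The scale of that resource gap can be sketched with simple arithmetic. The parameter counts, quantization choices, and memory size below are illustrative assumptions (not figures from the whitepaper), but they show why a cloud-scale model cannot simply be copied onto a microcontroller-class endpoint.

```python
# Back-of-the-envelope sketch of the cloud-to-endpoint resource gap.
# Parameter counts and memory sizes are illustrative assumptions.

def model_size_bytes(params: int, bytes_per_weight: int) -> int:
    """Storage needed just for a model's weights."""
    return params * bytes_per_weight

cloud_model_params = 25_000_000   # assumed ResNet-50-class vision model
tiny_model_params = 50_000        # assumed keyword-spotting-class edge model

# float32 weights (cloud) vs. int8-quantized weights (edge)
cloud_fp32 = model_size_bytes(cloud_model_params, 4)
tiny_int8 = model_size_bytes(tiny_model_params, 1)

MCU_FLASH_BYTES = 1_000_000       # assumed 1 MB of on-chip flash

print(f"Cloud model (fp32): {cloud_fp32 / 1e6:.0f} MB, "
      f"fits MCU? {cloud_fp32 <= MCU_FLASH_BYTES}")
print(f"Tiny model (int8):  {tiny_int8 / 1e3:.0f} kB, "
      f"fits MCU? {tiny_int8 <= MCU_FLASH_BYTES}")
```

Weights alone for the assumed cloud model need around 100 MB, a hundred times the assumed 1 MB of flash, while a purpose-built, quantized edge model fits with room to spare. Closing that gap by hand, through pruning, quantization, and architecture redesign, is exactly the labor-intensive work the company argues is impractical without tooling.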
The whitepaper goes on to discuss applications that can benefit from moving AI to the edge of the network, the challenges associated with practical implementations, and possible solutions in the form of new AutoML tools available from various vendors, including the company's own Analytics Toolkit.