Raspberry Pi dev kit enables voice integration for new IoT apps

September 23, 2021 // By Rich Pell
Micro-acoustic, audio processing component provider Knowles Corporation has announced a Raspberry Pi-based development kit designed to bring voice, audio edge processing, and machine learning (ML) listening capabilities to devices and systems in a range of new industries.

The AISonic IA8201 Raspberry Pi Development Kit bundles all of the hardware, add-on open software, and algorithms required to test, prototype, and debug voice and audio functionality in new applications for smart home, consumer technology, industrial markets, and beyond. By leveraging the kit, says the company, product designers and engineers at OEM/ODM companies have a single tool to streamline the design, development, and testing of technology that pushes the boundaries of voice and audio integration in their respective industries.

“Knowles designed this new kit to be the simplest and fastest way for product designers to prototype new innovations to address emerging use cases including contextually aware voice, ML listening, and real-time audio processing, that require flexible development tools to accelerate the design process, minimize development costs, and leverage new technological advances,” says Vikram Shrivastava, senior director, IoT Marketing at Knowles. “By selecting Raspberry Pi as the system host, we are opening up the ability to add voice and ML to the largest community of system developers that prefer a Linux or Android environment.”

The kit is built around the company's AISonic IA8201 Audio Edge Processor OpenDSP, which delivers ultra-low-power, high-performance processing for a wide range of audio workloads. The audio edge processor combines two Tensilica-based, audio-centric DSP cores: one for high-power compute and AI/ML applications, and the other for very low-power, always-on processing of sensor inputs. The IA8201 includes 1 MB of on-chip RAM, enabling high-bandwidth processing for advanced, always-on, contextually aware ML use cases, with memory to spare for running multiple algorithms.
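The low-power, always-on core described above typically runs a cheap first-stage detector that gates the high-power core. As a rough illustration of that idea (this is a generic energy-based voice activity sketch in Python/NumPy, not Knowles' algorithm or the kit's actual API; the function name and threshold are invented for this example):

```python
import numpy as np

def frame_energy_vad(samples, frame_len=160, threshold=0.01):
    """Flag each 10 ms frame (at 16 kHz) as speech-like (True) or
    silence (False) by comparing its mean energy to a fixed threshold.
    A detector like this can gate a more expensive recognizer."""
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energies = np.mean(frames ** 2, axis=1)
    return energies > threshold

# Synthetic signal: 0.5 s of near-silence, then 0.5 s of a 440 Hz tone,
# at a 16 kHz sample rate (a common rate for voice capture).
sr = 16000
rng = np.random.default_rng(0)
silence = 0.001 * rng.standard_normal(sr // 2)
t = np.arange(sr // 2) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
flags = frame_energy_vad(np.concatenate([silence, tone]))
```

In a real always-on design the second core would only wake when a run of frames comes back True, which is what keeps average power low.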

Using the company's open DSP platform, the kit includes a library of on-board audio algorithms and AI/ML libraries. Far-field audio applications can be built using the available ultra-low-power voice wake, beamforming, custom keyword, and background noise elimination algorithms from Knowles algorithm partners such as Amazon Alexa, Sensory, Retune, and Alango, opening up the design possibilities and ensuring the freedom needed to support a wide range of
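Beamforming, mentioned among the far-field algorithms above, combines multiple microphone channels so that sound from a chosen direction is reinforced while uncorrelated noise averages out. A minimal delay-and-sum sketch in Python/NumPy (a textbook technique for illustration only; the partner algorithms shipped with the kit are far more sophisticated, and the delays here are assumed integer-sample values):

```python
import numpy as np

def delay_and_sum(mic_signals, delays):
    """Align each microphone channel by its integer-sample delay and
    average the channels, reinforcing sound from the steered direction."""
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays)]
    return np.mean(aligned, axis=0)

# Two-mic demo: the same 300 Hz tone arrives at mic 2 three samples late,
# and each mic adds its own independent noise.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 300 * t)
rng = np.random.default_rng(0)
mic1 = tone + 0.3 * rng.standard_normal(sr)
mic2 = np.roll(tone, 3) + 0.3 * rng.standard_normal(sr)
out = delay_and_sum([mic1, mic2], [0, 3])
```

Averaging the two aligned channels roughly halves the noise power, so the residual error of `out` against the clean tone is smaller than that of either microphone alone.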

