The natural language processing (NLP) and ambient sound solutions market appears to be growing steadily as companies announce new product launches and investments. Specifically, Knowles announced a new Raspberry Pi Development Kit with voice, audio edge processing, and machine learning (ML) listening capabilities, and Deep Vision announced that it had raised $35 million in a Series B funding round to further develop its edge biometrics-supporting processor. In addition, a new report from ABI Research highlighted the benefits of deep learning-based ambient sound and NLP in cloud and edge applications.
Knowles releases new Raspberry Pi Development Kit
The kit is designed to provide voice biometrics, audio edge processing, and ML listening capabilities to devices and systems in a variety of emerging industries.
The solution enables companies to streamline the design, development, and testing of voice and audio integration technologies.
The new development kit builds on Knowles’ AISonic IA8201 Audio Edge Processor OpenDSP, designed for extremely low-power, high-performance audio processing needs.
The processor has two Tensilica-based, audio-centric DSP cores: one for high-performance computing and AI/ML applications, and an always-active core for processing sensor input at very low power consumption.
Thanks to Knowles’ open DSP platform, the new kit provides access to a wide range of onboard audio algorithms and AI/ML libraries.
It also includes two microphone array boards to aid engineers in selecting the appropriate algorithm configurations for the end application.
Deep Vision raises $35 million for biometric applications
The AI processor chipmaker recently announced that it has raised $35 million in a Series B financing round led by Tiger Global, with participation from Exfinity Venture Partners, Silicon Motion and Western Digital.
The fresh funds will reportedly support Deep Vision’s renewed efforts to improve its patented ARA-1 AI processor.
The hardware can be used as a facial biometric tool to provide real-time video analysis. However, ARA-1 also supports NLP functions for several voice-activated applications.
“To improve the latency and reliability of voice and other cloud services, edge products such as drones, security cameras, robots and smart retail applications implement complex and robust neural networks,” said Linley Gwennap, principal analyst for the Linley Group.
“With these edge AI applications, we see increasing demand for more performance, greater accuracy, and higher resolution,” added Gwennap. “This fast-growing market presents a great opportunity for Deep Vision’s AI accelerator, which offers impressive performance and low power consumption.”
Ambient sound and NLP-dedicated chipset on the rise
According to new data from ABI Research, more than two billion devices will be shipped with a dedicated chipset for ambient sound or NLP by 2026.
The numbers come from the “Deep Learning-Based Ambient Sound and Language Processing: Cloud to Edge” report, highlighting the state of deep learning-based ambient sound and NLP technologies in various industries.
According to the report, ambient sound and NLP will follow the same evolutionary path from cloud to edge as machine vision.
“Thanks to efficient hardware and model compression technologies, this technology now requires fewer resources and can be completely embedded in end devices,” explains Lian Jye Su, Principal Analyst for Artificial Intelligence and Machine Learning at ABI Research.
“Right now, most implementations focus on simple tasks like wake-up word recognition, scene recognition, and voice biometrics. In the future, however, AI-enabled devices will offer more complex audio and voice processing applications,” added Su.
According to the technology expert, many chipset manufacturers – including Qualcomm – are aware of this development and are now actively entering into partnerships to expand their capabilities.
“With multimodal learning, edge AI systems can become smarter and more secure when they combine insights from multiple data sources,” said Su.
“With federated learning, end users can personalize the voice AI in devices because the edge AI can improve based on learning from their unique local environment,” he concluded.
AI | Biometrics on the edge | Deep Vision | Edge AI | Funding | Knowles | Machine learning | Market report | Voice biometrics