Machine learning on microcontrollers enables embedded AI

One exciting direction in AI research and development is finding ways to downsize AI algorithms so that they run on smaller devices, closer to sensors, motors, and people. Developing embedded AI applications that execute machine learning on microcontrollers, however, brings various limitations in terms of performance, power, connectivity, and tooling.

Embedded AI already has various applications: detecting types of movement with smartphone sensors, responding to wake words in consumer electronics, monitoring industrial facilities, and distinguishing family members from strangers in home surveillance cameras.

A number of new tools from the TinyML ecosystem, such as TensorFlow Lite for Microcontrollers, can make it easier to develop smaller, more energy-efficient AI algorithms.
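As a rough illustration of that workflow, here is a minimal sketch, not from the article, of converting a small Keras model into a fully int8-quantized TensorFlow Lite model; the toy architecture and the representative_data generator are placeholder assumptions standing in for a real trained network and real sensor samples.

```python
# Minimal sketch: shrinking a trained Keras model into an int8
# TensorFlow Lite model suitable for a microcontroller.
import numpy as np
import tensorflow as tf

# Toy stand-in for a trained network (e.g. activity recognition
# over 128-sample, 3-axis accelerometer windows).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 3)),
    tf.keras.layers.Conv1D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # A few typical inputs let the converter calibrate int8 ranges;
    # random data here, real sensor windows in practice.
    for _ in range(100):
        yield [np.random.rand(1, 128, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Restrict to integer-only ops so the model runs on int8 MCU kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```

The resulting .tflite file is typically embedded in firmware as a C array and executed with the TensorFlow Lite for Microcontrollers interpreter.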

“The rise of TinyML deployed on microcontrollers enables intelligence to be distributed across more connected products in the physical world, be it smart home gadgets, toys, industrial sensors or others,” said Jason Shepherd, vice president of ecosystem at edge computing platform Zededa.

Arm’s release of the AI/ML-optimized Cortex-M55 core last year has already catalyzed increasingly sophisticated, and even lighter, microcontrollers with embedded coprocessors that optimize both overall processing capacity and power consumption. New AI tools make it easier for developers without extensive embedded software experience to train, optimize, and deploy AI models on microcontroller-based hardware.

Making machine learning small

The biggest difference between CPUs and microcontrollers is that microcontrollers are often directly connected to sensors and actuators. This reduces latency, which is essential in safety-critical applications such as controlling brakes or industrial systems, or responding to people.

“The big trend in the AI industry is to move machine learning inference to the edge, where the sensor data is generated,” said Sang Won Lee, CEO of Qeexo, an AI platform for embedded systems.

Moving inference to the edge immediately offers valuable benefits such as reduced latency, bandwidth, and power consumption. Availability is also higher because the device does not depend on the cloud or a central server. Lee observed that running inference on microcontrollers typically consumes less than 5 milliwatts, compared to roughly 800 milliwatts to send data to the cloud over a cellular network.

However, the constraints of microcontrollers also pose new challenges for traditional AI workflows. Key constraints include limited energy, limited memory, and restricted hardware and software environments.

David Kanter can attest to this in his role as executive director at MLCommons, an industry consortium that develops benchmarks, datasets, and best practices for machine learning. Industry groups are starting to develop benchmarks such as MLPerf Tiny to help developers shortlist appropriate combinations of microcontrollers, development tools, and algorithms for different tasks.

What is a microcontroller?

Microcontrollers preceded the development of CPUs and GPUs and are embedded in virtually every type of modern device with sensors and actuators. They are an important consideration for companies interested in incorporating AI into physical devices, whether it be to enhance the user experience or to enable autonomous capabilities.

For example, AI Clearing has developed a drone platform that automatically tracks progress on construction sites. Jakub Lukaszewicz, head of AI at AI Clearing, said microcontrollers are especially important to his team because they are often the main computers of drones, responsible for flight and for communicating with the operator.

He sees microcontrollers as low-end CPUs with limited computing power. There are many microcontroller variants on the market with different architectures and functionalities, but all share two decisive advantages over high-end CPUs: low cost and low power consumption.

The low cost makes them ideal for adding interactive functions to traditional devices such as toys or household appliances. In recent years, microcontrollers have made color displays and multimedia capabilities possible in these devices. Low power consumption enables microcontrollers to be used in wearables, cameras, and other devices that must run on a small battery for a long time.

AI on a low-power microcontroller

Lukaszewicz is following a new trend in microcontroller development: integrated neural processing units (NPUs), specialized units designed to run machine learning models efficiently on microcontrollers.

“Every major microcontroller manufacturer is preparing a product that is equipped with such a device,” he said.

These usually come with specialized SDKs that convert neural networks trained on a desktop computer into a form that fits on the NPU. Thanks to the ONNX format, these tools generally support models built with frameworks such as PyTorch, TensorFlow, and others. In addition, various third-party tools such as Latent AI and Edge Impulse are emerging to simplify AI development across different microcontrollers.
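As a small illustration of that handoff, here is a hedged sketch of exporting a PyTorch model to ONNX, the interchange format such SDKs commonly ingest; TinyNet, its tensor shapes, and the file name are hypothetical.

```python
# Minimal sketch: exporting a small PyTorch model to ONNX so a
# vendor NPU toolchain can convert it further. The network is a
# hypothetical stand-in, not a model from the article.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 4),
        )

    def forward(self, x):
        return self.net(x)

model = TinyNet().eval()
dummy_input = torch.randn(1, 64)  # example input fixes the graph shape

torch.onnx.export(
    model,
    dummy_input,
    "tinynet.onnx",
    input_names=["sensor_features"],
    output_names=["class_scores"],
    opset_version=13,  # older opsets tend to be safer for embedded toolchains
)
```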

But these toolkits don’t support all of the operations available on larger CPUs with more RAM, Lukaszewicz noted. Some models are too big, while others use unsupported operations. Often, engineers need to trim a model or adapt its architecture for the NPU, which requires considerable expertise and increases development time.
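One common way to trim a model is magnitude pruning. Below is a minimal sketch, assuming the TensorFlow Model Optimization toolkit (tfmot) and toy stand-in data; actual NPU SDK workflows vary by vendor and may require different or additional steps.

```python
# Minimal sketch: magnitude pruning zeroes out the smallest weights
# so the model compresses better for a constrained target.
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy stand-in model and data; a real project would use its own.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
x_train = np.random.rand(256, 64).astype(np.float32)
y_train = np.random.randint(0, 4, size=(256,))

# Wrap the model so 75% of the lowest-magnitude weights are pruned.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(
        target_sparsity=0.75, begin_step=0),
)
pruned.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
pruned.fit(x_train, y_train, epochs=2,
           callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before converting for the device.
final = tfmot.sparsity.keras.strip_pruning(pruned)
```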

Donncha Carroll, a partner in Axiom Consulting Partners’ revenue growth practice who leads its data engineering and science team, said developers must also weigh the lower cost of microcontrollers against the flexibility of CPUs or GPUs: embedded systems are harder to reconfigure or retrain quickly.

“A centralized solution with microprocessors sometimes makes more sense,” he said.

Planning for a small future

The limits of machine learning on microcontrollers are also inspiring new AI system designs.

“Microcontrollers are so computationally constrained that they power some of the most exciting work in model compression,” said Waleed Kadous, director of engineering for the distributed computing platform Anyscale.

Kadous, who previously worked at Google, helped build the sensor hub in Android phones, which uses an ML model to determine whether someone is standing still, walking, running, driving, or riding a bike. He sees this as a typical use case for thinking about how low-power embedded sensors can be distributed throughout an environment.

One line of research explores ways to scale down large models to run on much smaller devices without losing too much accuracy. Another examines cascades of models of varying complexity, which combine fast models that decide whether something is of interest with more complex models for deeper analysis. This could allow an application to detect anomalies and then ask another processor to take some action, such as uploading data to a cloud server.
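A minimal sketch of the cascade idea, with hypothetical fast_gate and heavy_analysis stages standing in for real models: a cheap always-on check filters out uninteresting sensor windows so the expensive stage, or a cloud upload, runs only rarely.

```python
# Minimal sketch of a two-stage cascade over sensor windows.
from typing import Optional
import numpy as np

GATE_THRESHOLD = 3.5  # illustrative; tuned per application

def fast_gate(window: np.ndarray) -> float:
    # Cheap "is anything happening?" score; stands in for a tiny
    # always-on model running on the microcontroller.
    return float(np.abs(window - window.mean()).max())

def heavy_analysis(window: np.ndarray) -> str:
    # Placeholder for a larger model (or an upload to a cloud server)
    # that runs only on the rare windows the gate flags.
    return "anomaly" if window.std() > 1.5 else "normal"

def process(window: np.ndarray) -> Optional[str]:
    if fast_gate(window) < GATE_THRESHOLD:
        return None                    # common case: stay cheap, stay local
    return heavy_analysis(window)      # rare case: spend power and bandwidth

# Most random windows are filtered out by the cheap gate.
for _ in range(5):
    print(process(np.random.randn(128)))
```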

In the future, Kadous expects more general-purpose hardware for running ML models to find its way into microcontrollers. He also hopes for better model compression tools that mirror the improvements seen in compilers for microcontrollers.

This ultimately points to tools that improve the performance of the microcontroller itself, not just what is in its environment. “I think ML will get into the execution of the microcontroller itself to do things like power management and squeeze out the last few milliwatts of power. ML will also slightly improve the operational efficiency of the microcontroller,” Kadous said.

