A Guide to Running AI Models on Microcontrollers
Yes, it is possible for certain microcontrollers and embedded platforms to run AI models.
For example, the following can be used:
· STM32 microcontrollers:
Using the X-Cube-AI expansion package in STM32CubeMX, trained models from popular AI frameworks such as Keras, TensorFlow Lite (TFLite), and ONNX can be converted into C code for deployment on these embedded devices.
Figure: Neural Networks on STM32 MCUs
(Source: https://www.edge-ai-vision.com)
· NVIDIA Jetson series platforms:
Using the TensorRT inference engine, ONNX-format neural network models can be optimized and deployed to run on the board's CUDA-capable GPU. Strictly speaking, Jetson modules are embedded single-board computers rather than microcontrollers, but they are a common choice for edge AI.
Figure: NVIDIA Jetson AI Kits
(Source: https://viso.ai/edge-ai/nvidia-jetson/)
· Arduino and Raspberry Pi:
Using lightweight runtimes such as TensorFlow Lite (TensorFlow Lite for Microcontrollers on Arduino) or PyTorch Mobile, trained neural network models can be converted into compact formats suitable for embedded devices and invoked through the corresponding libraries or interfaces. Note that the Raspberry Pi is a single-board computer rather than a microcontroller; the microcontroller in that family is the Raspberry Pi Pico.
Figure: AI Camera for Arduino and Raspberry Pi
(Source: https://www.dfrobot.com/huskylens.html)
· ARM microcontrollers:
Simple AI models, such as small feed-forward neural networks, convolutional neural networks (CNNs), and recurrent neural networks (RNNs), can run on ARM Cortex-M microcontrollers.
To better support AI applications on ARM microcontrollers, ARM and its ecosystem partners provide technologies and tools such as:
- Cortex-M55: This is a processor core optimized for ML and DSP workloads. It supports Arm Helium technology (the M-Profile Vector Extension, MVE), which adds vector and DSP instructions that substantially improve the efficiency of AI computation.
- X-Cube-AI: This is a software expansion package from STMicroelectronics that converts trained models from popular AI frameworks such as Keras, TFLite, and ONNX into C code; it is configured and generated in STM32CubeMX for Cortex-M-based STM32 devices.
- CMSIS-NN: This is a software library of neural network kernels (convolution, fully connected, pooling, activation, and more) that provides optimized functions and APIs for Cortex-M processors, supporting quantized data types such as int8 and int16.
Figure: AI applications on ARM microcontrollers
(Source: https://armkeil.blob.core.windows.net/)