Getting Started with Raspberry Pi AI HAT+ (26 TOPS) and Raspberry Pi AI camera
Raspberry Pi recently launched several AI products, including the Raspberry Pi AI HAT+ for the Pi 5 with 13 TOPS or 26 TOPS of performance, and the less powerful Raspberry Pi AI camera suitable for all Raspberry Pi SBCs with a MIPI CSI connector. The company sent me samples of the AI HAT+ (26 TOPS) and the AI camera for review, as well as other accessories such as the Raspberry Pi Touch Display 2 and Raspberry Pi Bumper, so I’ll report my experience getting started, mostly following the documentation for the AI HAT+ and AI camera.
Hardware used for testing
In this tutorial/review, I’ll use a Raspberry Pi 5 with the AI HAT+ and a Raspberry Pi Camera Module 3, while I’ll connect the AI camera to a Raspberry Pi 4. I also plan to use one of the boards with the new Touch Display 2.
Let’s go through a quick unboxing of the new Pi AI hardware starting with the 26 TOPS AI HAT+.
The package includes the AI HAT+ itself with a Hailo-8 26 TOPS AI accelerator soldered on the board as opposed to an M.2 module like in the Raspberry Pi AI Kit which was the first such hardware launched by the company, as well as a 40-pin stacking header and some plastic standoffs and screws.
A ribbon cable is also connected to the HAT+, and the bottom side has no major components, only a few passive components, plus a good number of test points.
The Raspberry Pi AI Camera package features the camera module with a Sony IMX500 Intelligent Vision Sensor, 22-pin and 15-pin cables that fit the MIPI CSI connector on various Raspberry Pi boards, and a white ring used to adjust the focus manually.
For example, the 22-pin cable would be suitable for the Raspberry Pi 5, and the 15-pin cable for the Raspberry Pi 4, so we’ll use the latter in this review.
Here’s a close-up of the Raspberry Pi AI Camera module itself.
Raspberry Pi AI HAT+ installation on the Raspberry Pi 5
I would typically use my Raspberry Pi 5 with an NVMe SSD, but it’s not possible with the AI HAT+ when using the standard accessories Raspberry Pi provides. So I removed the SSD and HAT, and will boot Raspberry Pi OS from an official Raspberry Pi microSD card instead.
I could still keep the active cooler. The first part of the installation is to insert the GPIO stacking header on the Pi 5’s 40-pin GPIO header, install standoffs, and connect the PCIe ribbon cable as shown in the photo below.
Once done, we can insert the HAT+ in the male header and secure it with four screws.
I also connected the Raspberry Pi Camera Module 3 to the Pi 5, but I had to give up installing the SBC on the Touch Display 2 because the board can’t be mounted when a HAT is fitted, and the GPIO stacking header used here would have prevented me from connecting the display’s power cable.
So I completed the build by removing the four screws holding the standoffs, placing the Bumper on the bottom side, and securing it by tightening the screws back in place.
Raspberry Pi AI camera and Touch Display 2 installation with Raspberry Pi 4
Let’s now install the AI camera to our Raspberry Pi 4 by first connecting the 15-pin cable as shown below with the golden contacts facing towards the micro HDMI connectors.
That’s all there is to it if you’re going to use the hardware with an HDMI monitor. But we want to use the Raspberry Pi Touch Display 2, so we’ll need to connect the MIPI DSI flat cable and power cable as shown below.
The power cable can be inserted either way, and the first time I made a mistake with the red wire on the left, so it didn’t work… It must be connected with the black wire on the left as shown in the photo above.
After that, we can insert the MIPI DSI cable into the Raspberry Pi 4 making the blue part of the cable face the black part of the connector, before securing the SBC with four screws. I attach the AI camera to the back of the display with some sticky tape. I wish Raspberry Pi had thought of some mount mechanism for their cameras…
But it does the job with the display placed on a smartphone holder.
Getting started with the Raspberry Pi AI camera with rpicam-apps and Picamera2 demos
The first step was to configure the display in landscape mode because Raspberry Pi OS will start in portrait mode by default. All I had to do was go to the Screen Layout Editor and select Layout->Screens->XWAYLAND0->Orientation->Right.
We can now install the firmware, software, and assets for the Raspberry Pi AI camera with a single command, plus a reboot.
sudo apt install imx500-all
sudo reboot
We can now try a few rpicam-apps demos, starting with object detection:
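Going by the official Raspberry Pi AI camera documentation, the object detection demo is launched with rpicam-hello and one of the IMX500 post-processing files installed by the imx500-all package (the exact path may differ on your system):

```shell
# Object detection with the MobileNet SSD network running on the IMX500
# (post-processing file installed by the imx500-all package)
rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_mobilenet_ssd.json
```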
The network firmware upload took a few seconds the first time, but then it’s fast during subsequent tries. Since the image was not clear, I had to adjust the focus manually with the white focus ring provided with the camera.
The demo could easily detect a person and a teddy bear, but not the bottle, whatever angle I tried. The video is quite smooth and inference is fast.
That’s all good, except the documentation and the actual command to run are not in sync at this time, but as noted above, I found a solution in the forums. I also realized that Scrot can’t do screenshots in Wayland (the resulting images are black), so I had to switch to the Grim utility instead…
The first time we ran the model it needed to be transferred to the camera, and it probably took about 2 minutes, but subsequent runs are fast. Again tracking is real-time, and there’s no lag that I could notice. I’ll show a video demo of body segmentation comparing the AI camera to the AI HAT+ demos later in this review.
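For reference, the pose estimation demo on the AI camera should look similar, assuming the PoseNet post-processing file shipped with imx500-all:

```shell
# Pose estimation with the PoseNet network running on the IMX500
rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/imx500_posenet.json
```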
You’ll find more models to play with in the imx500-models directory.
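Assuming the default install location used by the imx500-all package, the available pre-compiled networks can be listed with:

```shell
# List the pre-compiled IMX500 models installed by the imx500-all package
ls /usr/share/imx500-models/
```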
raise ValueError(f'tensor info length {len(tensor_info)} does not match expected size {size}')
ValueError: tensor info length 260 does not match expected size 708
Again, another person had the size mismatch issue, but this time around, there does not seem to be an obvious solution. So as of November 24, 2024, the Picamera2 framework is not compatible with the Raspberry Pi AI camera. Hopefully, it will be fixed in the next few weeks. The documentation has more details about the architecture and instructions showing how to deploy your own TensorFlow or PyTorch models, but that’s out of the scope of this getting started guide.
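Moving on to the AI HAT+, the Raspberry Pi documentation installs the whole Hailo software stack with a single meta-package, followed by a reboot:

```shell
# Install the Hailo kernel driver and firmware, HailoRT middleware,
# Tappas core post-processing libraries, and rpicam-apps demo stages
sudo apt install hailo-all
sudo reboot
```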
Note that this is a close to 900MB installation that includes the Hailo kernel device driver and firmware, the HailoRT middleware, the Hailo Tappas core post-processing libraries, and the rpicam-apps Hailo post-processing demo stages.
We can check whether the Hailo-8 AI accelerator is detected with the following command:
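The documented way to query the accelerator is the hailortcli utility installed as part of the Hailo stack:

```shell
# Query the Hailo device over PCIe; reports details such as the
# device architecture, firmware version, and serial number
hailortcli fw-control identify
```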
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
It’s finally working and the result is quite similar to the Raspberry Pi AI camera’s MobileNet demo, but YOLOv8 on the AI HAT+ also picks up my bottle… You’ll also note I added an orientation parameter because my camera mount/box requires the software to rotate the image by 180 degrees.
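Assuming the Hailo post-processing files installed by the hailo-all package, the YOLOv8 object detection demo with the 180-degree rotation I mentioned would look something like:

```shell
# YOLOv8 object detection on the AI HAT+, with the image rotated 180 degrees
# to match my camera mount
rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_inference.json --rotation 180
```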
[HailoRT] [warning] HEF was compiled for Hailo8L device, while the device itself is Hailo8. This will result in lower performance.
It’s very similar to the AI camera demo but adds nose and eye tracking, with virtually no delay: it starts immediately, even the first time.
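The corresponding pose estimation demo, assuming the YOLOv8 pose post-processing file from the same package, would be:

```shell
# YOLOv8 pose estimation on the AI HAT+
rpicam-hello -t 0s --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_pose.json
```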
Note that all samples are compiled for the 13 TOPS Hailo-8L accelerator instead of the 26 TOPS Hailo-8 chip, but it does not seem to impact the performance from a user perspective. I’ve asked Raspberry Pi whether they have a demo specific to the 26 TOPS Hailo-8, and will update this post after I give it a try.
The Raspberry Pi AI HAT+ documentation is not quite as detailed as the one for the Raspberry Pi AI camera, and we’re redirected to the hailo-rpi5-examples GitHub repo for more details and the Hailo community for support.
Raspberry Pi AI Camera vs AI HAT+ pose estimation demo
My first impression is that object detection and pose estimation work similarly on the Raspberry Pi AI camera and the AI HAT+ once they are up and running.
The main difference for pose estimation is that the very first run is slow to start on the AI camera because it takes almost two minutes to transfer the model. Subsequent runs are fast once the model is in the camera’s storage. As we’ve seen, the YOLOv8 pose estimation running on the AI HAT+ also adds eye and nose tracking, but that’s probably just due to the different model used rather than a limitation of the hardware.
While the expansion board is more powerful on paper, both the Raspberry Pi AI camera and AI HAT+ can run similar demos. The AI HAT+ relies on PCIe and will only work with the Raspberry Pi 5 and upcoming CM5 module, while the AI camera will work with any Raspberry Pi with a MIPI CSI connector. The demos load slower the first time on the AI camera, but apart from that there aren’t many differences. Documentation is not always in sync with the actual commands for either and I had to browse the forums to successfully run the demos. Both sell for the same price ($70) if we are talking about the 13 TOPS AI HAT+ kit, but note that most applications will require a Raspberry Pi camera module with the AI HAT+. The 26 TOPS AI HAT+ kit reviewed here sells for $110.
Jean-Luc started CNX Software in 2010 as a part-time endeavor, before quitting his job as a software engineering manager, and starting to write daily news, and reviews full time later in 2011.