CDVA (Compact Descriptors for Video Analysis) Enable “Video Understanding”

SuperCDVA CDVA Video Understanding

One of the most popular applications of artificial intelligence is object detection, where models detect objects or subjects such as cats, dogs, cars, or laptops. As I discovered in a press release by Gyrfalcon, there’s something similar for videos called CDVA (Compact Descriptors for Video Analysis) that’s capable of analyzing the scene taking place, and describing it in a precise manner. The CDVA standard, aka MPEG ISO/IEC 15938-15, describes how video features can be extracted and stored as compact metadata for efficient matching and scalable search. Gyrfalcon announced in its press release that its Lightspeeur line of AI chips will adopt CDVA. You can get the technical details in the paper entitled “Compact Descriptors for Video Analysis: the Emerging MPEG Standard”. CDVA still relies on convolutional neural networks (CNN), but does so by extracting frames first, appending a timestamp and the encoded CDVA descriptor to the video, which […]
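The pipeline the standard describes — sample keyframes, compute a compact descriptor per frame, and store it with a timestamp for later matching — can be sketched in a simplified form. The real CDVA descriptors combine handcrafted and CNN features; this toy version uses coarse intensity histograms and invented frame data, purely to illustrate the annotate-then-match idea:

```python
import math

def frame_descriptor(frame, bins=8):
    """Toy stand-in for a CDVA descriptor: a normalized intensity histogram.
    `frame` is a 2D list of 0-255 pixel values."""
    hist = [0] * bins
    n = 0
    for row in frame:
        for px in row:
            hist[px * bins // 256] += 1
            n += 1
    return [h / n for h in hist]

def annotate_video(frames, fps=25, keyframe_interval=5):
    """Keep one compact descriptor per keyframe, tagged with its timestamp (s)."""
    return [
        {"t": i / fps, "desc": frame_descriptor(f)}
        for i, f in enumerate(frames)
        if i % keyframe_interval == 0
    ]

def similarity(d1, d2):
    """Cosine similarity between two descriptors (1.0 = identical)."""
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    return dot / (n1 * n2)

# Two 4x4 "frames": a dark one and a bright one
dark = [[10] * 4 for _ in range(4)]
bright = [[240] * 4 for _ in range(4)]
meta = annotate_video([dark, dark, bright], fps=25, keyframe_interval=1)
print(similarity(meta[0]["desc"], meta[1]["desc"]))  # identical frames -> 1.0
```

Matching a query descriptor against the stored per-keyframe metadata is what makes the search scalable: only the compact descriptors are compared, never the raw video.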

Getting Started with Sipeed M1 based Maixduino Board & Grove AI HAT for Raspberry Pi

Grove AI HAT Face Detection

Last year we discovered the Kendryte K210 processor with a RISC-V core and AI accelerators for machine vision and machine hearing. Soon after, the Sipeed M1 module was launched with the processor for around $10. Then this year we started to get more convenient development boards featuring the Sipeed M1 module, such as Maixduino or the Grove AI HAT. Seeed Studio sent me the last two boards for review, so I’ll start by showing the items I received, before showing how to get started with MicroPython and Arduino code. Note that I’ll be using Ubuntu 18.04, but development in Windows is also possible. Unboxing I received two packages, one with a Maixduino kit, and the other with the “Grove AI HAT for Edge Computing”. Grove AI HAT for Edge Computing Let’s start with the latter. The board is a Raspberry Pi HAT with a Sipeed M1 module, a 40-pin Raspberry Pi header, 6 Grove connectors, as well […]
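To give a flavor of the MicroPython side, the usual MaixPy "hello world" streams the camera to the LCD. This is firmware code for the K210, not desktop Python — the `sensor` and `lcd` modules below come from the MaixPy firmware, and it assumes a board with an attached camera and display (the Maixduino kit includes both):

```python
# MaixPy (MicroPython for Kendryte K210) camera-to-LCD loop
import sensor
import lcd

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)   # 320x240
sensor.run(1)

while True:
    img = sensor.snapshot()         # grab one frame from the camera
    lcd.display(img)                # push it to the LCD
```

From there, KPU-accelerated demos (face detection and the like) follow the same structure, with an inference call inserted between snapshot and display.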

HuskyLens AI Camera & Display Board is Powered by Kendryte RISC-V Processor (Crowdfunding)

HuskyLens AI Camera

A couple of years ago, I reviewed the JeVois-A33 computer vision camera powered by an Allwinner A33 quad-core Cortex-A7 processor running Linux. The tiny camera implements easy-to-use software for machine vision with features such as object detection, eye tracking, QR code and ArUco marker detection, and so on. The camera could handle the tasks at hand, but since it relied on purely software-based computer vision, there was lag in some of the demo applications, including 500ms for single object detection, and up to 3 seconds for the YOLO test with multiple object types using deep learning algorithms. That’s a bit slow for robotics projects, and software solutions usually consume more power than hardware-accelerated ones. Since then, we’ve started to see low-cost SoCs and hardware with dedicated AI accelerators, and one of those is the Kendryte K210 dual-core RISC-V processor with a built-in KPU Convolutional Neural Network (CNN) hardware accelerator and APU audio […]

ZED Depth and Motion Tracking Camera Supports NVIDIA Jetson Nano Board

ZED depth camera Jetson Nano

When NVIDIA launched their low-cost Jetson Nano development board earlier this week, one reader asked whether it would support binocular depth mapping. It turns out Stereo Labs has updated the SDK (Software Development Kit) for the ZED depth and motion tracking camera in order to support the latest NVIDIA developer kit. Jetson Nano can manage depth and positional tracking at 30 fps in PERFORMANCE mode with 720p resolution, and while the more powerful Jetson TX2 doubles the performance at 60 fps, it does so at a much higher cost. ZED depth and motion tracking camera specifications: Video 2.2K @ 15 fps (4416×1242 resolution) 1080p @ 30 fps (3840×1080 resolution) 720p @ 60 fps (2560×720 resolution) WVGA @ 100 fps (1344×376 resolution) Depth Resolution – Same as selected video resolution Range – 0.5 to 20 m Format – 32-bit Stereo Baseline – 120 mm Motion 6-axis Pose […]
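The 120 mm stereo baseline in the specs is what sets the camera's depth range: depth follows the classic triangulation formula depth = focal_length × baseline / disparity. A quick sketch of that relationship — the baseline is the ZED's, but the focal length (in pixels) is an assumed value for illustration, not a published spec:

```python
BASELINE_M = 0.120   # ZED stereo baseline from the specs (120 mm)
FOCAL_PX = 700.0     # assumed focal length in pixels, for illustration only

def depth_from_disparity(disparity_px):
    """Return depth in meters for a given pixel disparity between the
    left and right views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

# A distant object shifts very little between the two views...
print(round(depth_from_disparity(4.2), 2))    # ~20 m (far end of the range)
# ...while a close object shifts a lot.
print(round(depth_from_disparity(168.0), 2))  # 0.5 m (near end of the range)
```

This also shows why a longer baseline extends maximum range: for a fixed minimum measurable disparity, doubling the baseline doubles the reachable depth.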

Inforce 6560 Snapdragon 660 Pico-ITX SBC Comes with 3 MIPI Camera Connectors

Inforce 6560

Inforce Computing has launched yet another Snapdragon-based single board computer with the Inforce 6560 SBC powered by a Qualcomm Snapdragon 660 processor, with stereoscopic depth sensing and deep learning capabilities made possible thanks to three MIPI camera connectors. The board also comes with 3GB LPDDR4 RAM, 32GB flash, HDMI and MIPI DSI video outputs, Gigabit Ethernet, a wireless module, USB ports, sensors, and more. Inforce 6560 specifications: SoC – Qualcomm Snapdragon 660 (SDA660) with 8x Kryo ARMv8-compliant 64-bit CPUs arranged in two clusters, running at 2.2GHz (Gold) and 1.8GHz (Silver), Adreno 512 GPU, Hexagon 680 DSP with dual Hexagon vector processors (HVX-512) @ 787MHz for low-power audio and computer vision processing, Spectra 160 camera (dual) Image Signal Processors (ISPs) System Memory – 3GB onboard LPDDR4 RAM Storage – 32GB eMMC flash, 1x µSD card v3.0 socket Video Output / Display Interface HDMI V1.3a FullHD @ 60fps port 4-lane MIPI-DSI with […]

Adding Machine Learning based Image Processing to your Embedded Product

Convert model tensorflow runtime to NNEF

CNXSoft: This is a guest post by Greg Lytle, V.P. Engineering, Au-Zone Technologies. Au-Zone Technologies is part of the Toradex Partner Network. Object detection and classification on a low-power Arm SoC Machine learning techniques have proven to be very effective for a wide range of image processing and classification tasks. While many embedded IoT systems deployed to date have leveraged connected cloud-based resources for machine learning, there is a growing trend to implement this processing at the edge. Selecting the appropriate system components and tools to implement this image processing at the edge lowers the effort, time, and risk of these designs. This is illustrated with an example implementation that detects and classifies different pasta types on a moving conveyor belt. Example Use Case For this example, we will consider the problem of detecting and classifying different objects on a conveyor belt. We have selected commercial pasta as an example […]
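The classification step at the edge boils down to running the image through the trained network and picking the most probable class from its output logits, usually via a softmax and a confidence threshold. A minimal stand-in for that last stage — class names and logit values here are invented for illustration, while the actual pipeline in the post runs a full CNN on the Arm SoC:

```python
import math

CLASSES = ["penne", "rigatoni", "farfalle", "fusilli"]  # hypothetical labels

def softmax(logits):
    """Convert raw network outputs into probabilities that sum to 1."""
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, threshold=0.5):
    """Return (label, confidence); label is None below the threshold,
    so uncertain detections can be rejected on the conveyor."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    label = CLASSES[best] if probs[best] >= threshold else None
    return label, probs[best]

# Example: the network is fairly confident it saw farfalle
print(classify([0.2, 1.1, 4.0, 0.5]))
```

A rejection threshold like this matters in a production setting: it is usually better to flag an unknown object for re-inspection than to misclassify it silently.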

Intel RealSense Tracking Camera T265 is Designed for Autonomous Devices

Intel RealSense Tracking Camera T265

Intel has just launched another smart camera, the RealSense Tracking Camera T265, powered by the company’s Myriad 2 VPU (Vision Processing Unit) also found in the first Neural Compute Stick, and designed for autonomous robots, drones, and augmented/virtual reality applications. The T265 camera is said to use proprietary visual-inertial odometry simultaneous localization and mapping (V-SLAM) technology “delivering high-performance guidance and navigation”. Intel RealSense tracking camera T265 hardware specifications: VPU – Intel Movidius Myriad 2 vision processing unit with 12 VLIW 128-bit vector SHAVE processors optimized to run V‑SLAM at low power Cameras – 2x Omnivision OV9282 high-speed image sensors with fisheye lenses for a combined 163±5° FOV; infrared cut filter Sensor – Bosch BMI055 6-axis IMU (Inertial Measurement Unit) to measure rotation and acceleration of the device USB – 1x USB 3.1 Gen 1 Micro B port to transfer pose data, or pose + image data. Dimensions – 108 x 24.5 x 12.5 mm; 2x […]
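Visual-inertial odometry fuses camera tracking with the IMU to maintain a pose estimate over time. The core bookkeeping — integrating incremental motion into a running pose — can be sketched in 2D (3 degrees of freedom here rather than the T265's 6, with made-up odometry increments, purely to illustrate the idea):

```python
import math

class Pose2D:
    """Planar pose: position (x, y) in meters and heading theta in radians."""
    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def step(self, forward, turn):
        """Apply one odometry increment: move `forward` meters along the
        current heading, then rotate in place by `turn` radians."""
        self.x += forward * math.cos(self.theta)
        self.y += forward * math.sin(self.theta)
        self.theta += turn

pose = Pose2D()
# Drive a 1 m square: four (move 1 m, turn 90 degrees) increments
for _ in range(4):
    pose.step(1.0, math.pi / 2)
print(round(pose.x, 6), round(pose.y, 6))  # back near the origin
```

Pure integration like this drifts as errors accumulate; the "SLAM" part of V-SLAM corrects the drift by recognizing previously seen places and snapping the trajectory back into a consistent map.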

Intel RealSense D435i Stereo Depth Camera Supports 6 Degrees of Freedom Tracking

RealSense D435i Camera

First unveiled at CES 2014, Intel RealSense Technology was introduced for perceptual computing applications with hardware such as 3D sensing cameras, as well as Nuance Dragon Assistant voice technology. Since then the company has released various 3D sensing camera models and kits, such as the RealSense R200 Depth Camera robotics development kit, and just announced the new RealSense D435i stereo depth camera, which adds 6 DoF (Degrees of Freedom) tracking over the D435 model thanks to an inertial measurement unit (IMU). Intel RealSense Depth Camera D435i key features and specifications: Intel RealSense Vision Processor D4 – Purpose-built ASIC designed to deliver stereo depth data at up to 90 fps at VGA resolutions, or up to 1280×720 resolution at 30 fps Intel RealSense module D430 – Depth camera imaging sub-system featuring a wide field of view (91.2 degrees horizontal x 65.5 degrees vertical), global shutter stereo image sensors, and an IR projector Depth Technology – Active IR stereo […]
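What the IMU buys the D435i is orientation: fusing the gyroscope (fast but drifting) with the accelerometer (noisy, but anchored to gravity) yields stable rotation estimates. A classic single-axis complementary filter illustrates the principle — the sensor samples below are synthetic, and real IMU fusion, as done in the RealSense SDK, is considerably more involved:

```python
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """Fuse gyro rate (rad/s) and accelerometer readings (ax, az in g)
    into a pitch angle. alpha weights the gyro integration; (1 - alpha)
    pulls the estimate toward the accelerometer's gravity-based angle,
    canceling gyro drift over time."""
    pitch = 0.0
    for gyro_rate, ax, az in samples:
        accel_pitch = math.atan2(ax, az)  # angle implied by gravity alone
        pitch = alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch
    return pitch

# Device held still, tilted 10 degrees: gyro reads ~0, accel sees gravity
tilt = math.radians(10)
samples = [(0.0, math.sin(tilt), math.cos(tilt))] * 500
print(round(math.degrees(complementary_filter(samples)), 2))  # converges to ~10
```

The same drift-cancelling idea, extended to all three axes and fused with the depth stream, is what lets a depth camera report a gravity-aligned 6 DoF pose.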
