Waveshare Jetson Nano-powered mini-computer features a sturdy metal case

Waveshare Launches Jetson Nano Mini Computer

Waveshare has launched the Jetson Nano Mini Kit A, a mini-computer kit powered by the Jetson Nano. The kit combines the Jetson Nano module, a cooling fan, and a WiFi module inside a sturdy metal case. Built around NVIDIA’s Jetson platform, the mini-computer offers multiple interfaces, including USB connectors, an Ethernet port, an HDMI port, and CSI, GPIO, I2C, and RS485 interfaces. It also has an onboard M.2 B-Key slot for installing either a WiFi or a 4G module, and it is compatible with TensorFlow and PyTorch, which makes it well-suited for various AI applications.

Waveshare mini-computer specifications:
GPU – NVIDIA Maxwell architecture with 128 NVIDIA CUDA cores
CPU – Quad-core Arm Cortex-A57 processor @ 1.43 GHz
Memory – 4 GB 64-bit LPDDR4 @ 1600 MHz; 25.6 GB/s bandwidth
Storage – 16 GB eMMC 5.1 flash storage, microSD card slot
Display Output – HDMI interface with […]
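
Assuming the kit runs NVIDIA’s standard JetPack/L4T image with a Jetson build of PyTorch installed (an assumption, not a Waveshare-specific detail), a short sanity check like the sketch below confirms the 128-core Maxwell GPU is visible for inference:

```python
# Minimal sketch: verify the Jetson Nano's Maxwell GPU is usable from PyTorch.
# Assumes a Jetson-compatible PyTorch build (e.g. installed from NVIDIA's JetPack wheels).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU:", torch.cuda.get_device_name(0))        # reports the Tegra/Maxwell GPU
    x = torch.randn(1, 3, 224, 224, device=device)      # dummy camera-sized input tensor
    w = torch.randn(16, 3, 3, 3, device=device)         # dummy convolution weights
    y = torch.nn.functional.conv2d(x, w, padding=1)     # run one convolution on the GPU
    print("Conv output shape:", tuple(y.shape))
else:
    print("CUDA not available - check the JetPack / PyTorch installation")
```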

Arducam KingKong – A Raspberry Pi CM4-based Edge AI camera with global shutter sensor, Myriad X AI accelerator

Arducam KingKong

The Arducam KingKong is a smart edge AI camera based on the Raspberry Pi CM4 and a system-on-module built around the Intel Myriad X AI accelerator. It follows the Raspberry Pi 5-powered Arducam PiINSIGHT camera introduced at the beginning of the year, but this time the aim is to provide a complete Raspberry Pi-based camera rather than an accessory for the Raspberry Pi 4/5. Smart cameras built around the Raspberry Pi CM4 are not new, as we previously covered the EDATEC ED-AIC2020 IP67-rated industrial AI edge camera and the StereoPi v2 stereoscopic camera used to create 3D video and 3D depth maps. The Arducam KingKong adds another option suitable for computer vision applications with an AR0234 global shutter module, PoE support, and a CNC metal enclosure.

Arducam KingKong specifications:
SoM – Raspberry Pi Compute Module 4 (CM4), by default the CM4104000 with wireless, 4GB RAM, Lite (0GB eMMC)
AI accelerator – Luxonis OAK SoM BW1099 based on Intel […]
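
Since the AI accelerator is a Luxonis OAK SoM, the camera should in principle be programmable with the Luxonis DepthAI Python API. That is an assumption to verify against Arducam’s documentation, but a minimal sketch grabbing preview frames from the color camera would look roughly like this:

```python
# Minimal DepthAI sketch (assumption: the KingKong's OAK SoM exposes the
# standard Luxonis DepthAI v2 API). Grabs preview frames from the color camera.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)      # assumes the AR0234 module is wired as the color camera
cam.setPreviewSize(640, 400)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)        # stream frames back to the CM4 host
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    queue = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        frame = queue.get().getCvFrame()         # decoded BGR frame as a numpy array
        cv2.imshow("KingKong preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```

The same pipeline object can be extended with a NeuralNetwork node to run inference on the Myriad X instead of the CM4.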

$16 Grove Vision AI V2 module features WiseEye2 HX6538 Arm Cortex-M55 & Ethos-U55 AI microcontroller

Grove Vision AI V2 XIAO ESP32-C3 OV5647 camera

Seeed Studio’s Grove Vision AI V2 module is based on the Himax WiseEye2 HX6538 dual-core Cortex-M55 AI microcontroller with an Arm Ethos-U55 microNPU and features a MIPI CSI connector for an OV5647 camera. It is designed for AI computer vision applications using the TensorFlow and PyTorch frameworks and connects over I2C to hosts such as Raspberry Pi SBCs, ESP32 IoT boards, Arduino, and other maker boards. We tested the previous-generation Grove Vision AI module, based on the 400 MHz HX6537-A DSP-based AI accelerator, with the SenseCAP K1100 sensor prototype kit with LoRaWAN connectivity, and managed to have the kit perform face detection and send the data over LoRaWAN. The Grove Vision AI V2 builds on that with a modern Arm MCU core and a more powerful AI accelerator that can run models such as MobileNet V1/V2, EfficientNet-Lite, and YOLO v5 & v8 through the SenseCraft low-code/no-code platform. Grove Vision AI […]
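
As a quick illustration of the I2C host connection mentioned above, the sketch below scans a Raspberry Pi’s I2C bus for the module using the smbus2 package. The bus number is an assumption, and the module’s actual I2C address and the SSCMA message protocol used to exchange inference results should be taken from Seeed Studio’s documentation:

```python
# Minimal sketch: scan the Raspberry Pi's I2C bus for the Grove Vision AI V2.
# Assumption: the module is wired to I2C bus 1; its address and the SSCMA
# protocol should be confirmed in Seeed Studio's documentation.
from smbus2 import SMBus

def scan_i2c(bus_number=1):
    """Return the list of addresses that acknowledge a read on the given bus."""
    found = []
    with SMBus(bus_number) as bus:
        for address in range(0x03, 0x78):
            try:
                bus.read_byte(address)        # device ACKs -> present
                found.append(address)
            except OSError:
                pass                          # no device at this address
    return found

if __name__ == "__main__":
    for addr in scan_i2c():
        print(f"I2C device found at 0x{addr:02X}")
```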

Firefly CT36L AI Smart Camera Features Rockchip RV1106G2 with 0.5 TOPS NPU, 100Mbps Ethernet with PoE support

CT36L AI Smart Camera with PoE

The Firefly CT36L AI Smart Camera (PoE) features a Rockchip RV1106G2 CPU with a 0.5 TOPS NPU, a 5-megapixel ISP, and a 3-megapixel HD lens. It supports 100Mbps Ethernet with PoE and includes advanced image enhancements like HDR, WDR, and noise reduction, all while maintaining low power consumption and a high level of integration. We’ve previously explored various AI cameras such as the Tokay Lite, EDATEC ED-AIC2020, ThinkCore TC-RV1126, Orbbec Persee+, and M5Stack UnitV2, among others. Feel free to check them out for more information.

Firefly CT36L AI Smart Camera (PoE) specifications:
CPU – Rockchip RV1106G2 Arm Cortex-A7 @ 1.2GHz with Neon and FPU
NPU – 0.5 TOPS; supports INT4/INT8/INT16 and TensorFlow/MXNet/PyTorch/Caffe/ONNX NN models
ISP – 5MP high-performance ISP with HDR, WDR, 3DNR, 2DNR, sharpening, defogging, fisheye and gamma correction, feature detection
VPU – 3072×1728 (5MP) @ 30fps H.265/H.264 encoding, 16M @ 60fps JPEG snapshot
RAM – 128MB DDR3 built-in
Storage – 16MB SPI flash built-in
Camera specifications:
Type – Color camera
Image Sensor – SC3336 CMOS
Size […]
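
Network cameras with H.264/H.265 encoders like this one are usually consumed over RTSP. Assuming the CT36L exposes an RTSP stream (the URL below is a placeholder, not the camera’s real endpoint; check Firefly’s documentation), pulling frames with OpenCV looks roughly like this:

```python
# Rough sketch: read frames from an H.264/H.265 network camera with OpenCV.
# The RTSP URL is a placeholder, not the CT36L's actual endpoint.
import cv2

RTSP_URL = "rtsp://<camera-ip>:554/<stream-path>"   # hypothetical, check Firefly's docs

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open the RTSP stream")

while True:
    ok, frame = cap.read()                 # decoded BGR frame
    if not ok:
        break
    cv2.imshow("CT36L stream", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```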

Tokay Lite – A battery-powered no-code AI camera based on ESP32-S3 WiSoC (Crowdfunding)

Tokay Lite ESP32-S3 no-code AI camera

Maxlab’s Tokay Lite is an OSHWA-certified AI camera based on the ESP32-S3 WiFi and Bluetooth SoC that can be used for computer vision (e.g. facial recognition & detection) and robotics applications without the need to know programming languages, since a web interface is used for configuration. The WiFi and Bluetooth AI camera also features night vision with four IR LEDs, an IR cut filter, light and PIR motion sensors, a 20-pin expansion connector with SPI and UART, support for an external RTC, and can take power from USB-C or a LiPo battery.

Tokay Lite specifications:
Wireless module – ESP32-S3-WROOM-1
MCU – ESP32-S3 dual-core LX7 microprocessor @ up to 240 MHz with vector extension for machine learning, 512 KB SRAM
Memory – 8MB PSRAM
Storage – 8MB SPI flash
Connectivity – WiFi 4 and Bluetooth 5 with LE/Mesh; PCB antenna
Certifications – FCC/CE certification
Camera – OV2640 camera sensor (replaceable) via DVP interface
Image capabilities: […]

Orange Pi AIPro SBC features a 20 TOPS Huawei Ascend AI SoC

Orange Pi Huawei Ascend SBC

The Orange Pi AIPro is a new single board computer for AI applications that features a new (and unnamed) Huawei Ascend quad-core 64-bit AI processor delivering up to 20 TOPS (INT8) or 8 TOPS (FP16) of AI inference performance. The SBC comes with up to 16GB of LPDDR4X and a 512Mbit SPI flash, but also supports other storage options such as a microSD card, an eMMC flash module, and/or an M.2 NVMe or SATA SSD. The board also features two HDMI 2.0 ports, one MIPI DSI connector, and an AV port for video output, two MIPI CSI camera interfaces, Gigabit Ethernet and WiFi 5 connectivity, a few USB ports, and a 40-pin GPIO header for expansion.

Orange Pi AIPro specifications:
SoC – Huawei Ascend quad-core 64-bit (I’d assume RISC-V) processor delivering up to 20 TOPS (INT8) or 8 TOPS (FP16) AI performance and equipped with an unnamed 3D GPU
System Memory – 8GB […]

Cadence Neo NPU IP scales from 8 GOPS to 80 TOPS

Cadence Neo NPU

The Cadence Neo NPU (Neural Processing Unit) IP delivers from 8 GOPS to 80 TOPS in a single-core configuration and can be scaled to multi-core configurations for hundreds of TOPS. The company says the Neo NPUs deliver high AI performance and energy efficiency for optimal PPA (Power, Performance, Area) and cost points in next-generation AI SoCs for intelligent sensors, IoT, audio/vision, hearables/wearables, mobile vision/voice AI, AR/VR, and ADAS.

Some highlights of the new Neo NPU IP include:
Scalability – A single-core solution is scalable from 8 GOPS to 80 TOPS, with further extension to hundreds of TOPS with multiple cores
Supports 256 to 32K MACs per cycle to allow SoC architects to meet power, performance, and area (PPA) tradeoffs
Works with DSPs, general-purpose microcontrollers, and application processors
Support for Int4, Int8, Int16, and FP16 data types for CNN, RNN, and transformer-based networks
Up to 20x higher performance than the first-generation Cadence AI IP, with […]
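
As a back-of-the-envelope illustration of how the MAC count relates to the quoted throughput, counting one MAC as two operations and assuming an illustrative clock frequency (not a Cadence figure):

```python
# Back-of-the-envelope peak-throughput estimate for an NPU configuration.
# One MAC is counted as two operations (multiply + accumulate); the clock
# frequency is an illustrative assumption, not a figure from Cadence.
def peak_tops(macs_per_cycle: int, clock_ghz: float) -> float:
    """Peak throughput in TOPS = MACs/cycle * 2 ops/MAC * cycles/second."""
    return macs_per_cycle * 2 * clock_ghz * 1e9 / 1e12

# Largest single-core configuration: 32K MACs/cycle at an assumed ~1.25 GHz clock
print(f"{peak_tops(32 * 1024, 1.25):.1f} TOPS")   # ~81.9 TOPS, in line with the 80 TOPS figure

# Smallest configuration: 256 MACs/cycle at the same assumed clock
print(f"{peak_tops(256, 1.25):.3f} TOPS")         # ~0.64 TOPS; the 8 GOPS low end therefore
                                                  # implies a much lower clock or operating point
```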

TRACEPaw sensorized paw helps legged robots “feel the floor” with Arduino Nicla Vision

TRACEPaw

Our four-legged friends don’t walk on tarmac the same way as they do on ice or sand, as they can see and feel the floor with their eyes and nerve endings and adapt accordingly. The TRACEPaw open-source project, which stands for “Terrain Recognition And Contact force Estimation through Sensorized Legged Robot Paw”, aims to bring the same capabilities to legged robots. Autonomous Robots Lab achieves this with the Arduino Nicla Vision board, leveraging its camera and microphone to run machine learning models on the STM32H7 Cortex-M7 microcontroller in order to determine the type of terrain and estimate the force exerted on the leg. The camera is apparently not used to look at the terrain, but instead at the deformation of the silicone hemisphere, made of “Dragon Skin”, at the end of the leg to estimate 3D force vectors, while the microphone is used to recognize terrain types […]
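
The actual models and training pipeline live in the TRACEPaw repository, but as a simplified illustration of the vision side of the idea, a MicroPython/OpenMV sketch on the Nicla Vision could track the apparent size of the hemisphere’s contact patch as a crude stand-in for deformation. This is not the project’s code, and the color threshold below is a placeholder:

```python
# Simplified MicroPython/OpenMV sketch for the Nicla Vision: track the apparent
# size of the silicone hemisphere as a crude proxy for its deformation.
# This is NOT the TRACEPaw pipeline; the LAB color threshold is a placeholder.
import sensor
import time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)          # 320x240 frames
sensor.skip_frames(time=2000)              # let the sensor settle

HEMISPHERE_THRESHOLD = (70, 100, -20, 20, -20, 20)   # placeholder LAB threshold

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    blobs = img.find_blobs([HEMISPHERE_THRESHOLD],
                           pixels_threshold=200, area_threshold=200, merge=True)
    if blobs:
        blob = max(blobs, key=lambda b: b.pixels())
        img.draw_rectangle(blob.rect())
        # A larger tracked patch loosely correlates with more deformation/force;
        # TRACEPaw instead feeds the image to an ML model running on the STM32H7.
        print("contact pixels:", blob.pixels(), "fps:", clock.fps())
```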
