AMD Ryzen Embedded V3000 COM Express Type 7 module supports up to 64GB DDR5 memory

AMD Ryzen V3000 COM Express Module

ADLINK Express-VR7 is a COM Express Basic size Type 7 computer-on-module powered by the eight-core AMD Ryzen Embedded V3000 processor with two 10GbE interfaces, fourteen PCIe Gen 4 lanes, and optional support for the “extreme temperature range” between -40°C and 85°C. The COM Express module supports up to 64GB of dual-channel DDR5 SO-DIMM (ECC/non-ECC) memory and targets headless embedded applications such as edge networking equipment, 5G infrastructure at the edge, video storage analytics, intelligent surveillance, industrial automation and control, and rugged edge servers.

Express-VR7 specifications:

- SoC – AMD Ryzen Embedded V3000, one of:
  - Ryzen V3C48 8-core/16-thread processor @ 3.3/3.8GHz; TDP: 45W
  - Ryzen V3C44 4-core/8-thread processor @ 3.5/3.8GHz; TDP: 45W
  - Ryzen V3C18I 8-core/16-thread processor @ 1.9/3.8GHz; TDP: 15W (useful in the extreme temperature range)
  - Ryzen V3C16 6-core/12-thread processor @ 2.0/3.8GHz; TDP: 15W
  - Ryzen V3C14 4-core/8-thread processor @ 2.3/3.8GHz; TDP: 15W
- System Memory – Up to 64GB (2x 32GB) dual-channel ECC/non-ECC DDR5 memory […]

Generative AI on NVIDIA Jetson Orin, Jetpack 6 SDK to support multiple OSes

Jetson Orin Generative AI

NVIDIA made several robotics and embedded announcements at ROSCon 2023, with highlights including generative AI on the NVIDIA Jetson Orin module and the Jetpack 6 SDK, which will be released next month (November 2023) with support for Ubuntu as usual, but also for other operating systems and platforms such as Debian, Yocto, Wind River, Redhawk RTOS, and Balena.

Generative AI on NVIDIA Jetson Orin

There’s been a lot of hype in the last year about generative AI thanks to services such as ChatGPT, Google Bard, or Microsoft Bing Chat, but those rely on closed-source software that runs on powerful servers in the cloud. As we noted in our article about the “AI in a box” offline LLM solution, there are some open-source projects such as the Whisper speech-to-text model and the Llama 2 language models that can be run on embedded hardware at the edge, but as noted by some readers, platforms […]
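
As a concrete illustration of what running one of these open models locally can look like, here is a minimal sketch using the open-source openai-whisper Python package to transcribe an audio file on-device; the “base” model size, the sample file name, and the assumption that FFmpeg is installed are our own illustrative choices, not part of NVIDIA’s announcement.

```python
# Minimal local speech-to-text sketch with the open-source Whisper model.
# Assumes `pip install openai-whisper` and an FFmpeg binary on the device;
# the "base" model and the audio file name are illustrative placeholders.
import whisper

model = whisper.load_model("base")         # weights are downloaded on first run
result = model.transcribe("meeting.wav")   # inference then runs fully offline
print(result["text"])                      # transcribed text
```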

AAEON BOXER-8224AI – An NVIDIA Jetson Nano AI Edge embedded system for drones

BOXER 8224AI board AI Edge NVIDIA Jetson Nano embedded system

AAEON BOXER-8224AI is a thin and lightweight AI edge embedded system based on the NVIDIA Jetson Nano system-on-module and designed for drones or other space-constrained applications such as robotics. AAEON BOXER products are usually Embedded Box PCs with an enclosure, but the BOXER-8224AI is quite different as it’s a compact, 22mm-thin board with MIPI CSI interfaces designed to add computer vision capability to unmanned aerial vehicles (UAVs), as well as several wafer connectors for dual GbE, USB, and other I/Os.

BOXER-8224AI specifications:

- AI Accelerator – NVIDIA Jetson Nano
  - CPU – Arm Cortex-A57 quad-core processor
  - System Memory – 4GB LPDDR4
  - Storage Device – 16GB eMMC 5.1 flash
  - Dimensions – 70 x 45 mm
- Storage – microSD slot
- Display Interface – 1x Mini HDMI 2.0 port
- Camera interface – 2x MIPI CSI connectors
- Networking
  - 2x Gigabit Ethernet via wafer connector (1x NVIDIA, 1x Intel i210)
  - Optional WiFi, Bluetooth, and/or cellular connectivity […]

Axiomtek AIE900-XNX – A 5G connected fanless Edge AI system for AMR, AGV, and computer vision

Axiomtek AIE900-XNX

Axiomtek AIE900-XNX is a fanless Edge AI computing system powered by the NVIDIA Jetson Xavier NX system-on-module and designed for autonomous mobile robots (AMR), automated guided vehicles (AGV), and other computer vision applications. The system delivers up to 21 TOPS thanks to the 6-core NVIDIA Carmel Armv8.2 (64-bit) processor, NVDLA accelerators, and 384-core NVIDIA Volta architecture GPU found in the Jetson Xavier NX module. The AIE900-XNX Edge AI computer also comes with a 5G module for high-speed cellular connectivity and supports SerDes, PoE, and MIPI CSI cameras for video processing.

Axiomtek AIE900-XNX specifications:

- NVIDIA Jetson Xavier NX system-on-module with
  - CPU – 6-core NVIDIA Carmel Armv8.2 64-bit CPU with 6 MB L2 + 4 MB L3 cache
  - GPU – 384-core NVIDIA Volta GPU with 48 Tensor Cores
  - AI Accelerator – 2x NVDLA
  - System Memory – 8GB 128-bit LPDDR4x onboard
  - Storage – 16GB eMMC flash
- Storage – M.2 Key M 2280 with PCIe […]

Vecow ABP-3000 AI Edge gateway combines Hailo-8 AI accelerator with Intel Whiskey Lake processor

Vecow ABP-3000-AI Hailo-8 accelerator

We first covered the Hailo-8 AI accelerator, with claims of up to 26 TOPS performance and 3 TOPS/W efficiency, in October 2020. Since then, we’ve seen several companies integrate a Hailo-8 M.2 module into their designs, including the EdgeTuring Edge AI camera and the Vecow VAC-1000 gateway with a 24-core Foxconn processor. Vecow has now integrated the Hailo-8 AI accelerator into another gateway, but instead of relying on an Arm processor, the Vecow ABP-3000 AI computing system features an 8th generation Intel Core Whiskey Lake processor.

Vecow ABP-3000 specifications:

- SoC – Intel Core i7-8665UE or i3-8145UE quad-core Whiskey Lake processor with Intel UHD Graphics 620; 15W TDP
- System Memory – 2x DDR4 2400MHz SO-DIMM, up to 64GB
- Storage – 1x M.2 Key B socket (PCIe x2/SATA)
- AI Accelerator – Hailo-8 AI processor, up to 26 TOPS, with TensorFlow and ONNX framework support
- System I/O chip – IT8786E
- Video Output – 2x DisplayPort up to 4096 x […]
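
The Hailo-8 is normally programmed through Hailo’s own toolchain and runtime, which we do not cover here; purely to illustrate what ONNX-based inference on such a gateway involves, below is a minimal, generic sketch using the onnxruntime Python package, where the model file, input name, and input shape are hypothetical placeholders.

```python
# Generic ONNX inference sketch (illustrative only; actual Hailo-8 deployment
# goes through Hailo's toolchain rather than plain onnxruntime on the CPU).
# Assumes `pip install onnxruntime numpy` and a model file named model.onnx.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")                # load the network
input_name = session.get_inputs()[0].name                   # e.g. an "images" tensor
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder input
outputs = session.run(None, {input_name: frame})            # run inference
print(outputs[0].shape)
```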

UP 7000 x86 SBC