Radxa Fogwise Airbox edge AI box review – Part 1: Specifications, teardown, and first try

Radxa Fogwise Airbox review

Radxa Fogwise Airbox, also known as Fogwise BM168M, is an edge AI box powered by a SOPHON BM1684X Arm SoC with a built-in 32 TOPS TPU and a VPU capable of decoding up to 32 HD video streams. The device is equipped with 16GB LPDDR4x RAM and a 64GB eMMC flash, and features two Gigabit Ethernet RJ45 jacks, a few USB ports, a speaker, and more. Radxa sent us a sample for evaluation. We’ll start the Radxa Fogwise Airbox review by checking out the specifications and the hardware with an unboxing and a teardown, before testing various AI workloads with TensorFlow and/or other frameworks in the second part of the review.
Radxa Fogwise Airbox specifications
The specifications below come from the product page as of May 30, 2024:
SoC – SOPHON SG2300x
CPU – Octa-core Arm Cortex-A53 processor up to 2.3 GHz
VPU – Decoding of up to […]

Leveraging GPT-4o and NVIDIA TAO to train TinyML models for microcontrollers using Edge Impulse

Edge Impulse using NVIDIA TAO and GPT-4o LLM to run a model on the Arduino Nicla Vision

We previously tested the Edge Impulse machine learning platform, showing how to train and deploy a model with IMU data from the XIAO BLE Sense board relatively easily. Since then, the company has announced support for the NVIDIA TAO toolkit in Edge Impulse, and now they’ve added the latest GPT-4o LLM to the ML platform to help users quickly train TinyML models that can run on boards with microcontrollers. What’s interesting is how AI tools from various companies, namely NVIDIA (TAO toolkit) and OpenAI (GPT-4o LLM), are leveraged in Edge Impulse to quickly create a lightweight ML model by simply filming a video. Jan Jongboom, CTO and co-founder at Edge Impulse, demonstrated the solution by shooting a video of his kids’ toys and loading it into Edge Impulse to create an “is there a toy?” model that runs on the Arduino Nicla Vision at about 10 FPS. Another way to look at it […]
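For readers who want to check such a model before going on-device, the sketch below shows one way to sanity-check an image classification model exported from Edge Impulse as a quantized TensorFlow Lite file on a Linux or macOS host. This is not the Nicla Vision deployment itself (Edge Impulse exports a C++/Arduino library for that); the file name toy_detector.tflite, the test image, and the label list are placeholders for illustration, and it assumes the model expects RGB pixels normalized to [0, 1], as Edge Impulse image blocks typically use.

# Minimal sketch: sanity-check an Edge Impulse image model exported as .tflite on a host PC.
# "toy_detector.tflite", "test.jpg", and the label list are placeholders for illustration.
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

LABELS = ["no_toy", "toy"]  # hypothetical label order; check the labels exported with the model

interpreter = tflite.Interpreter(model_path="toy_detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Read the expected input resolution from the model itself (e.g. 1 x 96 x 96 x 3).
height, width = int(inp["shape"][1]), int(inp["shape"][2])
img = Image.open("test.jpg").convert("RGB").resize((width, height))
data = np.expand_dims(np.asarray(img), axis=0)

if inp["dtype"] == np.float32:
    data = data.astype(np.float32) / 255.0
else:
    # Quantize [0, 1] pixel values into the model's integer input range.
    scale, zero_point = inp["quantization"]
    data = (data / 255.0 / scale + zero_point).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], data)
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]

# Dequantize integer outputs back to approximate probabilities.
scale, zero_point = out["quantization"]
if scale:
    scores = (scores.astype(np.float32) - zero_point) * scale
print({label: float(score) for label, score in zip(LABELS, scores)})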

Arm unveils Cortex-X925 and Cortex-A725 CPUs, Immortalis-G925 GPU, Kleidi AI software

Arm SoC with Cortex-X925, Cortex-A725, and Cortex-A520 CPU cores and Immortalis-G925 GPU

Arm has just announced new Armv9 CPUs and Immortalis GPUs for mobile SoCs, as well as the Kleidi AI software optimized for Arm CPUs from the Armv7 to Armv9 architectures. New Armv9.2 CPU cores include the Cortex-X925 “Blackhawk” core with significant CPU and AI performance improvements, the Cortex-A725 with improved performance efficiency, and a refreshed version of the Cortex-A520 providing 15 percent efficiency improvements. Three new GPUs have also been introduced, namely the up-to-14-core Immortalis-G925 flagship GPU, which delivers up to 37% 3D graphics performance improvements over last year’s 12-core Immortalis-G720, the Mali-G725 with 6 to 9 cores for premium mobile handsets, and the Mali-G625 GPU with 1 to 5 cores for smartwatches and entry-level mobile devices.
Arm Cortex-X925
The Arm Cortex-X925 delivers 36 percent single-threaded peak performance improvements in Geekbench 6.2 against a Cortex-X4-based premium Android smartphone, and about 41 percent better AI performance using the time-to-first-token of TinyLlama […]

picoLLM is a cross-platform, on-device LLM inference engine

picoLLM Raspberry Pi 5

Large Language Models (LLMs) can run locally on mini PCs or single board computers like the Raspberry Pi 5, but with limited performance due to high memory usage and bandwidth requirements. That’s why Picovoice has developed the picoLLM Inference Engine, a cross-platform SDK optimized for running compressed large language models on systems running Linux (x86_64), macOS (arm64, x86_64), and Windows (x86_64), Raspberry Pi OS on the Pi 5 and 4, the Android and iOS mobile operating systems, as well as web browsers such as Chrome, Safari, Edge, and Firefox. Alireza Kenarsari, Picovoice CEO, told CNX Software that “picoLLM is a joint effort of Picovoice deep learning researchers who developed the X-bit quantization algorithm and engineers who built the cross-platform LLM inference engine to bring any LLM to any device and control back to enterprises”. The company says picoLLM delivers better accuracy than GPTQ when using Llama-3-8B MMLU (Massive Multitask Language Understanding) as a […]
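To give an idea of what using such an SDK looks like in practice, here is a minimal sketch with the picoLLM Python package on a Raspberry Pi 5 or a PC. The AccessKey and the compressed .pllm model file both have to be downloaded from the Picovoice Console, the model file name below is a placeholder, and the function and property names follow my reading of the picoLLM quickstart, so they should be double-checked against the official documentation.

# Minimal sketch of local LLM inference with the picoLLM Python SDK (pip install picollm).
# The AccessKey and the compressed .pllm model are obtained from Picovoice Console;
# "phi2-290.pllm" is a placeholder file name, and the API names below should be
# verified against the official picoLLM quickstart.
import picollm

pllm = picollm.create(
    access_key="YOUR_PICOVOICE_ACCESS_KEY",
    model_path="phi2-290.pllm",
)

try:
    res = pllm.generate("Summarize what an edge AI box does in one sentence.")
    print(res.completion)
finally:
    pllm.release()

Since the SDK targets Linux, macOS, Windows, Raspberry Pi OS, Android, iOS, and web browsers, equivalent bindings should exist for the other supported platforms, but only the Python flow is sketched here.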

EdgeCortix SAKURA-II Edge AI accelerator delivers up to 60 TOPS in an 8W power envelope

SAKURA-II M.2 and PCIe Edge AI accelerators

EdgeCortix has just announced its SAKURA-II Edge AI accelerator with its second-generation Dynamic Neural Accelerator (DNA) architecture delivering up to 60 TOPS (INT8) in an 8W power envelope, suitable for running complex generative AI tasks such as Large Language Models (LLMs), Large Vision Models (LVMs), and multi-modal transformer-based applications at the edge. Besides the AI accelerator itself, the company designed a range of M.2 modules and PCIe cards with one or two SAKURA-II chips delivering up to 120 TOPS (INT8) and 60 TFLOPS (BF16) to enable generative AI in legacy hardware with a spare M.2 2280 socket or PCIe x8/x16 slot.
SAKURA-II Edge AI accelerator
SAKURA-II key specifications:
Neural Processing Engine – DNA-II second-generation Dynamic Neural Accelerator (DNA) architecture
Performance – 60 TOPS (INT8), 30 TFLOPS (BF16)
DRAM – Dual 64-bit LPDDR4x (8GB, 16GB, or 32GB on board)
DRAM Bandwidth – 68 GB/sec
On-chip SRAM – 20MB
Compute Efficiency – […]

Snapdragon Dev Kit for Windows features Qualcomm Snapdragon X Elite Arm SoC for AI PC application development

Snapdragon Dev Kit for Windows

Qualcomm Snapdragon Dev Kit for Windows is a mini PC-looking development platform based on the Snapdragon X Elite 12-core Arm processor with up to 75 AI TOPS of performance, designed to help developers natively port apps to the X Elite SoC and develop new AI applications beyond the Copilot+ AI PC features developed internally by Microsoft. Although it’s slightly bigger, the external design looks similar to the Windows Dev Kit 2023 with a Qualcomm Snapdragon 8cx Gen 3 compute platform, but internally, the new devkit features the much more powerful 4.3 GHz X Elite 12-core 64-bit Armv8 Oryon processor coupled with 32GB LPDDR5x RAM and a 512GB NVMe SSD, and offers a range of ports and features such as USB4 and WiFi 7.
Snapdragon Dev Kit for Windows (2024) specifications:
SoC – Snapdragon X Elite (X1E-00-1DE)
CPU – 12-core 64-bit Armv8 Oryon processor clocked at up to 3.8 GHz, or 4.3 […]

XGO-Rider is a 2-wheel self-balancing robot with an ESP32 controller plus either a Raspberry Pi CM4 or BBC Micro:bit (Crowdfunding)

XGO-Rider

XGO-Rider is a two-wheel self-balancing robot with an ESP32 controller for motor and servo control, USB-C charging, and so on, plus a choice between a Raspberry Pi CM4 module or a BBC Micro:bit board to handle the display, audio, and camera (CM4 only). It’s not the first robot from Luwu Intelligence, since the company launched the XGO-Mini robot dog in 2021, followed by the XGO 2 Raspberry Pi CM4-powered desktop robotic dog with an arm, which we reviewed last year. The new XGO-Rider builds on these earlier models but in a different form factor, moving from four-legged robots to a two-wheel self-balancing design with many of the same features, including AI vision running on the Raspberry Pi CM4.
XGO-Rider specifications:
Host controller (one or the other)
Raspberry Pi CM4 with 2GB RAM + ESP32 for main control, USB-C charging port, DIP switch
BBC Micro:bit V2 + ESP32 for main control, USB-C charging port, DIP […]

Firefly AIBOX-1684X compact AI Box delivers 32 TOPS for large language models, image generation, video analytics, and more

SOPHON BM1684X AI Box

Firefly AIBOX-1684X is a compact AI Box based on the SOPHON BM1684X octa-core Arm Cortex-A53 processor with a 32 TOPS AI accelerator suitable for large language models (LLMs) such as Llama 2, the Stable Diffusion image generation model, and traditional CNN and RNN neural network architectures. Firefly had already released several designs based on the SOPHON BM1684X AI processor with the full-featured Firefly EC-A1684XJD4 FD Edge AI computer and the AIO-1684XQ motherboard, but the AIBOX-1684X AI Box offers the same level of performance, just without as many interfaces, in a compact enclosure measuring just 90.6 x 84.4 x 48.5 mm.
AIBOX-1684X AI box specifications:
SoC – SOPHGO SOPHON BM1684X
CPU – Octa-core Arm Cortex-A53 processor @ up to 2.3 GHz
TPU – Up to 32 TOPS (INT8), 16 TFLOPS (FP16/BF16), 2 TFLOPS (FP32)
VPU
Up to 32-channel H.265/H.264 1080p25 video decoding
Up to 32-channel 1080p25 HD video processing (decoding + AI analysis)
Up […]
