NVIDIA Jetson AGX Orin 32GB production module is now available

NVIDIA Jetson AGX Orin 32GB Module

The NVIDIA Jetson AGX Orin 32GB production module is now in mass production and available for purchase, after the Jetson AGX Orin family, built around Arm Cortex-A78AE cores, was first announced in November 2021, and the Jetson AGX Orin developer kit launched last March for close to $2,000. Capable of up to 200 TOPS of AI inference performance, or up to 6 times faster than the Jetson AGX Xavier, the NVIDIA Jetson AGX Orin 32GB targets AI, IoT, embedded, and robotics deployments, and NVIDIA says nearly three dozen partners are offering commercially available products based on the new module. Here’s a reminder of the NVIDIA Jetson AGX Orin 32GB specifications:
- CPU – 8-core Arm Cortex-A78AE v8.2 64-bit processor with 2MB L2 + 4MB L3 cache
- GPU / AI accelerators
  - NVIDIA Ampere architecture with 1792 NVIDIA CUDA cores and 56 Tensor Cores @ 1 GHz
  - DL Accelerator – 2x NVDLA v2.0
  - Vision Accelerator – PVA v2.0 (Programmable Vision […]
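For those who already have a module in hand, a quick sanity check is to read the model string and L4T release from the board itself; the paths below are the ones used on recent JetPack releases and are an assumption on my part, not something from NVIDIA's announcement.

```python
# Minimal sketch: identify the Jetson module and L4T release from a running board.
# The paths are assumptions based on recent JetPack/L4T releases; adjust if they differ.
from pathlib import Path

def read_first_line(path: str) -> str:
    """Return the first line of a procfs/sysfs file, or an empty string if it is missing."""
    try:
        return Path(path).read_text(errors="replace").split("\n")[0].strip("\x00").strip()
    except OSError:
        return ""

model = read_first_line("/proc/device-tree/model")   # e.g. "NVIDIA Jetson AGX Orin"
l4t = read_first_line("/etc/nv_tegra_release")       # e.g. "# R35 (release), REVISION: ..."

print(f"Module: {model or 'unknown (not a Jetson?)'}")
print(f"L4T   : {l4t or 'unknown'}")
```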

reComputer J101/J202 carrier boards are designed for Jetson Nano/NX/TX2 NX SoM

reComputer J101 & J202

Seeed Studio’s reComputer J101 & J202 are carrier boards with a form factor similar to the ones found in the NVIDIA Jetson Nano and Jetson Xavier NX developer kits, but with a slightly different feature set. The reComputer J101 notably features different USB Type-A/Type-C ports, a microSD card slot, takes power from a USB Type-C port, and drops the DisplayPort connector, while the reComputer J202 board replaces the micro USB device port with a USB Type-C port, adds a CAN bus interface, and switches to 12V power input instead of 19V. The table below summarizes the features and differences between the Jetson Nano devkit (B01), reComputer J101, Jetson Xavier NX devkit, and reComputer J202. Note that the official Jetson boards should also support production SoMs with eMMC flash, but they ship with a non-production SoM featuring a built-in microSD card socket instead. The carrier boards are so similar that if NVIDIA would […]
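Since the CAN bus interface is one of the J202's additions, here's a minimal sketch of how it could be exercised from Linux through SocketCAN with the python-can package; the can0 interface name, the bitrate, and the frame contents are placeholders for illustration, not values taken from Seeed Studio's documentation.

```python
# Minimal SocketCAN sketch for a CAN header such as the one on the reComputer J202.
# Assumes the interface has already been brought up, e.g.:
#   ip link set can0 up type can bitrate 500000
# "can0" and the ID/payload below are placeholders.
import can  # pip install python-can

with can.Bus(channel="can0", interface="socketcan") as bus:
    # Send a frame with an arbitrary (hypothetical) ID and payload.
    bus.send(can.Message(arbitration_id=0x123, data=[0x01, 0x02, 0x03], is_extended_id=False))

    # Wait up to one second for any frame on the bus and print it.
    reply = bus.recv(timeout=1.0)
    print(reply if reply is not None else "no CAN traffic received")
```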

Benchmarks comparison between UP 4000, Raspberry Pi 4, UP board, and Jetson Nano

UP 4000 vs UP Board vs Raspberry Pi 4 vs Jetson Nano Phoronix benchmarks

We wrote about the UP 4000 SBC with an Intel Apollo Lake processor and Raspberry Pi form factor yesterday. But today, I noticed the UP community had put up a benchmarks comparison between the UP 4000 board, the original UP board (Atom x5-Z8350), the Raspberry Pi 4, and the NVIDIA Jetson Nano. They ran several of the Phoronix Test Suite benchmarks on Ubuntu 20.04 (x86) or Ubuntu 18.04 (Arm) on all four boards. The UP 4000 board used for testing featured an Intel Celeron N3350 dual-core processor @ 2.40GHz, while the other boards were the 2GB RAM version of the UP board, an RPi 4 with 4GB RAM, and a Jetson Nano developer kit with 4GB RAM. As one would have expected, the UP 4000 is ahead in most tests, even though they did not select a model with a quad-core processor such as the Pentium N4200. Note that reading the table may be confusing as for […]
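If you'd like to reproduce this kind of comparison on your own boards, the Phoronix Test Suite can be scripted; the sketch below simply wraps two example test profiles in batch mode, and the profile names available will depend on the PTS version installed.

```python
# Minimal sketch: run a couple of Phoronix Test Suite benchmarks non-interactively.
# Requires phoronix-test-suite to be installed and batch mode configured
# (phoronix-test-suite batch-setup). The profile names are examples only.
import subprocess

TESTS = ["pts/compress-7zip", "pts/openssl"]  # example profiles, adjust as needed

for test in TESTS:
    # batch-benchmark runs the test without interactive prompts and saves the results.
    subprocess.run(["phoronix-test-suite", "batch-benchmark", test], check=True)
```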

Turing Pi 2 mini-ITX cluster board supports RK3588 based Turing RK1, Raspberry Pi CM4, and NVIDIA Jetson SoMs (Crowdfunding)

Turing Pi 2

We first covered the Turing Pi V2 mini-ITX cluster board supporting up to four Raspberry Pi CM4 or NVIDIA Jetson SO-DIMM system-on-modules in August 2021. The company has now launched the Turing Pi 2 on Kickstarter with a little surprise: the Turing RK1 module with a Rockchip RK3588 Cortex-A76/A55 processor and up to 32GB RAM. The board allows you to mix and match modules (e.g. 3x RPi CM4 + 1x Jetson module as in the photo below), and with SATA ports, Gigabit Ethernet networking, USB 3.0 ports, and an mPCIe socket, you could build a fairly powerful homelab, learn Kubernetes, or self-host your own apps. Turing Pi 2 specifications:
- SoM interface – 4x 260-pin SO-DIMM slots for up to four of the following modules:
  - Raspberry Pi CM4 with Broadcom quad-core Cortex-A72 processor, up to 8GB RAM, up to 32GB eMMC flash (adapter needed)
  - NVIDIA Jetson Nano/TX2 NX/Xavier NX SO-DIMM system-on-modules with up to 6x Armv8 cores, and […]
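As a small illustration of the "learn Kubernetes" use case, once a lightweight distribution such as k3s runs across the four modules, the official Kubernetes Python client can confirm that every node has joined the cluster; this is a generic sketch that assumes a working kubeconfig and nothing Turing Pi specific.

```python
# Minimal sketch: list the nodes of a small cluster (e.g. four SoMs on a Turing Pi 2)
# using the official Kubernetes Python client. Assumes ~/.kube/config points at the cluster.
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()   # use load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    name = node.metadata.name
    arch = node.status.node_info.architecture   # e.g. "arm64"
    ready = any(c.type == "Ready" and c.status == "True" for c in node.status.conditions)
    print(f"{name:20} arch={arch:7} ready={ready}")
```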

reServer Jetson-50-1-H4 is an AI Edge server powered by NVIDIA Jetson AGX Orin 64GB

Jetson AGX Orin 64GB AI inference server

reServer Jetson-50-1-H4 is an AI inference edge server powered by the Jetson AGX Orin 64GB with up to 275 TOPS of AI performance, based on the same form factor as Seeed Studio’s reServer 2-bay multimedia NAS introduced last year with an Intel Core Tiger Lake single board computer. The 12-core Arm server comes with 64GB LPDDR5, a 256GB NVMe SSD pre-loaded with the JetPack SDK and the open-source Triton Inference Server, two SATA bays for 2.5-inch and 3.5-inch drives, up to 10 Gbps Ethernet, dual 8K video output via HDMI and DisplayPort, USB 3.2 ports, and more. reServer Jetson-50-1-H4 (preliminary) specifications:
- SoM – Jetson AGX Orin module with
  - CPU – 12-core Arm Cortex-A78AE v8.2 64-bit processor with 3MB L2 + 6MB L3 cache
  - GPU / AI accelerators
    - NVIDIA Ampere architecture with 2048 NVIDIA CUDA cores and 64 Tensor Cores @ 1.3 GHz
    - DL Accelerator – 2x NVDLA v2.0
    - Vision Accelerator […]
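Since the SSD comes pre-loaded with the Triton Inference Server, client machines on the network would typically query it over HTTP or gRPC; the sketch below uses the tritonclient Python package, and the server address, model name, and tensor names are placeholders rather than anything documented for the reServer.

```python
# Minimal Triton HTTP client sketch. The model name ("my_model") and tensor names
# ("INPUT0"/"OUTPUT0") are placeholders; use the names from your model repository.
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

client = httpclient.InferenceServerClient(url="192.168.1.50:8000")  # example server address
assert client.is_server_live(), "Triton server is not reachable"

# Build a dummy FP32 input tensor matching the model's expected shape.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", data.shape, "FP32")
infer_input.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[infer_input])
print(result.as_numpy("OUTPUT0").shape)
```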

NVIDIA NVDLA AI accelerator driver submitted to mainline Linux

NVDLA

A large patchset has been submitted to mainline Linux for an NVIDIA NVDLA AI accelerator Direct Rendering Manager (DRM) driver, accompanied by an open-source user-mode driver. The NVDLA (NVIDIA Deep Learning Accelerator) can be found in recent Jetson modules such as the Jetson AGX Xavier and Jetson AGX Orin, and since NVDLA was made open-source hardware in 2017, it can also be integrated into third-party SoCs such as the StarFive JH7100 Vision SoC and the Allwinner V831 processor. I actually assumed everything was open-source already, since we were told that NVDLA was a “complete solution with Verilog and C-model for the chip, Linux drivers, test suites, kernel- and user-mode software, and software development tools all available on GitHub’s NVDLA account”, and the inference compiler was open-sourced in September 2019. But apparently not, as developer Cai Huoqing submitted a patchset with 23 files changed, 13,243 insertions, and the following short description: The NVIDIA Deep […]
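On Jetson modules, the NVDLA engines are normally used through TensorRT rather than by talking to the DRM driver directly; the sketch below only shows how a TensorRT builder configuration is pointed at a DLA core with GPU fallback, assuming the Python bindings shipped with JetPack, and leaves out network parsing and engine building.

```python
# Minimal sketch: configure a TensorRT builder to offload supported layers to an NVDLA core.
# Assumes the TensorRT Python bindings shipped with JetPack; network creation, parsing,
# and engine building are omitted for brevity.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
print(f"DLA cores available: {builder.num_DLA_cores}")  # 2 on Jetson AGX Xavier/Orin

config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA   # run supported layers on the DLA
config.DLA_core = 0                               # pick the first NVDLA engine
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)     # fall back to the GPU for unsupported layers
config.set_flag(trt.BuilderFlag.FP16)             # DLA requires FP16 or INT8 precision
```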

NVIDIA launches Jetson AGX Orin Developer Kit, Orin NX modules, and Isaac Nova Orin AMR platform

NVIDIA Jetson AGX Orin Developer Kit

The NVIDIA Jetson AGX Orin module was first introduced in November 2021, but the company has now officially launched the Jetson AGX Orin Developer Kit, and unveiled the lower-cost Orin NX modules that still deliver 70 TOPS or more, as well as the Isaac Nova Orin AMR (autonomous mobile robot) reference platform.

NVIDIA Jetson AGX Orin Developer Kit

Jetson AGX Orin developer kit specifications:
- Jetson AGX Orin module with
  - CPU – 12-core Arm Cortex-A78AE v8.2 64-bit processor with 3MB L2 + 6MB L3 cache
  - GPU / AI accelerators
    - NVIDIA Ampere architecture with 2048 NVIDIA CUDA cores and 64 Tensor Cores @ 1.3 GHz
    - DL Accelerator – 2x NVDLA v2.0
    - Vision Accelerator – PVA v2.0 (Programmable Vision Accelerator)
  - AI Performance – Up to 275 TOPS (INT8) @ 60W
  - Video Encode – 2x 4K60 | 4x 4K30 | 8x 1080p60 | 16x 1080p30 (H.265)
  - Video Decode – 1x 8K30 | 3x 4K60 | 7x 4K30 | […]
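Note that the 275 TOPS figure is only reachable in the highest power mode, so a first step on a new developer kit is usually checking which nvpmodel profile is active; the snippet below just wraps the stock nvpmodel tool, and mode IDs and names vary between modules and JetPack releases.

```python
# Minimal sketch: query (and optionally set) the Jetson power mode via nvpmodel.
# Mode IDs and names differ between modules and JetPack releases; 0 is typically MAXN.
import subprocess

def current_power_mode() -> str:
    """Return the output of `nvpmodel -q`, i.e. the active NV Power Mode and its ID."""
    out = subprocess.run(["nvpmodel", "-q"], capture_output=True, text=True, check=True)
    return out.stdout.strip()

print(current_power_mode())
# Switching to the maximum-performance profile (usually mode 0) requires root:
#   sudo nvpmodel -m 0
```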

Picovoice on-device speech-to-text engines slash the requirements and cost of transcription

Speech-to-text benchmarks accuracy

Picovoice Leopard and Cheetah offline, on-device speech-to-text engines are said to achieve cloud-level accuracy, rely on tiny speech-to-text models, and slash the cost of automatic transcription by up to 10 times. Leopard is an on-device speech-to-text engine, while Cheetah is an on-device streaming speech-to-text engine, and both are cross-platform with support for Linux x86_64, macOS (x86_64, arm64), Windows x86_64, Android, iOS, Raspberry Pi 3/4, and NVIDIA Jetson Nano. Looking at the cost is always tricky since companies have different pricing structures, and the table above basically shows the best scenario, where Picovoice is 6 to 20 times more cost-effective than solutions from Microsoft Azure or Google STT. Picovoice Leopard/Cheetah is free for the first 100 hours, and customers can pay a monthly $999 fee for up to 10,000 hours, hence the $0.10 per hour cost with Picovoice. If you were to use only 1,000 hours out of your plan that […]
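For reference, transcribing a file with Leopard only takes a few lines of Python; the access key and audio path below are placeholders, and the exact return value (transcript only vs. transcript plus word metadata) depends on the SDK version.

```python
# Minimal Picovoice Leopard sketch: offline transcription of an audio file.
# "${ACCESS_KEY}" and the file path are placeholders; get a key from the Picovoice Console.
import pvleopard  # pip install pvleopard

leopard = pvleopard.create(access_key="${ACCESS_KEY}")
try:
    # Recent SDK versions return the transcript plus per-word timing/confidence data.
    transcript, words = leopard.process_file("meeting_recording.wav")
    print(transcript)
    for word in words:
        print(f"{word.word:15} start={word.start_sec:.2f}s confidence={word.confidence:.2f}")
finally:
    leopard.delete()
```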
