SolidRun launches i.MX 8M Plus SOM and devkit for AI/ML applications

SolidRun already offers NXP-based solutions with AI accelerators through products such as the SolidRun i.MX 8M Mini SoM with Gyrfalcon Lightspeeur 2803S AI accelerator, or the Janux GS31 Edge AI server combining an NXP LX2160A networking SoC, various i.MX 8M SoCs, and up to 128 Gyrfalcon accelerators. All those solutions rely on one or more external Gyrfalcon AI chips, but earlier this year NXP introduced the i.MX 8M Plus SoC with a built-in 2.3 TOPS neural processing unit (NPU), and SolidRun has now unveiled the SolidRun i.MX 8M Plus SoM based on the new processor, together with development kits built around HummingBoard carrier boards. Specifications: SoC – NXP i.MX 8M Plus Dual or Quad with dual or quad-core Arm Cortex-A53 processor @ 1.6 GHz (industrial) / 1.8 GHz (commercial), Arm Cortex-M7 core up to 800 MHz, Vivante GC7000UL 3D GPU (Vulkan, OpenGL ES 3.1, OpenCL 1.2), 2.3 TOPS NPU, 1080p60 H.264/H.265 video encoder, 1080p60 video […]
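On Linux, NXP's eIQ software typically exposes the i.MX 8M Plus NPU to TensorFlow Lite through an external delegate. Below is a minimal Python sketch assuming such an eIQ image with tflite_runtime installed and the VX delegate available at /usr/lib/libvx_delegate.so; the delegate path and model file name are illustrative and not taken from SolidRun's announcement.

```python
# Minimal sketch: offloading a TensorFlow Lite model to the i.MX 8M Plus NPU
# via the VX delegate shipped with NXP's eIQ BSPs (paths/model are illustrative).
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# The delegate library location depends on the BSP; /usr/lib/libvx_delegate.so
# is common on eIQ images, adjust as needed.
npu_delegate = load_delegate("/usr/lib/libvx_delegate.so")

interpreter = Interpreter(
    model_path="mobilenet_v2_quant.tflite",   # any INT8-quantized model
    experimental_delegates=[npu_delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Dummy input with the expected shape/dtype; replace with real camera frames.
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details["index"]).shape)
```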

Intel unveils eASIC N5X Structured ASIC, and the Open FPGA Stack

Intel Open FPGA Stack

Intel’s virtual FPGA Technology Day 2020 is taking place today, and the company made two announcements ahead of the event. First, it introduced the Intel eASIC N5X structured eASIC family with an Intel FPGA-compatible hard processor system, designed to let customers quickly create applications across 5G, artificial intelligence, cloud, and edge workloads. In addition, Intel announced the Intel Open FPGA Stack (aka Intel OFS), a scalable, open-source (Intel calls it “source-accessible”) hardware and software infrastructure available through git repositories and designed to ease the work of hardware, software, and application developers. Intel eASIC N5X eASIC N5X is the first structured ASIC from the company to integrate an Intel FPGA-compatible quad-core Armv8 hard processor system. The new chips will help customers bring custom solutions to market faster than traditional ASICs thanks to the FPGA fabric, at a lower cost and with up to 50% lower core power […]

Reolink RLC-810A Smart 4K PoE IP Camera Specifications and Unboxing

Reolink RLC-810A Smart 4K PoE Camera Review

I have reviewed two Reolink WiFi IP cameras in recent years: Reolink Argus Eco and Reolink Argus PT. Both are powered by solar panels and have been running at home for many months, but there are many false positives, and conversely the PIR sensor sometimes fails to detect people. What would solve this is AI built into the surveillance camera itself. The good news is that the Reolink RLC-810A does just that, with the ability to detect persons and/or vehicles, so you would not receive a notification just because some bird or insect flew in front of the camera. I’ve just received a review sample, so I’ll start by listing the specs and features, unboxing the package to see what the camera looks like, and checking out the included accessories. Reolink RLC-810A specifications Video & Audio Image Sensor – 1/2.49″ CMOS Sensor Video Resolution – 3840×2160 (8.0 Megapixels) at 25 frames/sec […]

InferX X1 SDK, PCIe and M.2 Boards for edge inference acceleration

InferX X1P1 PCIe Board

Last week, Flex Logix announced the InferX X1 AI Inference Accelerator at the Linley Fall Conference 2020. Today, they announced the InferX X1 SDK, PCIe board, and M.2 board. InferX X1 Edge Inference SDK The InferX Edge Inference SDK is designed to keep the workflow simple. The input to the compiler is an open, high-level, hardware-agnostic implementation of the neural network, either a TensorFlow Lite or ONNX model. The compiler maps the model onto the available X1 resources and generates a binary executable file. This goes to the runtime, which takes the input stream, for example a live feed from a camera. The user specifies which compiled model to use, and the InferX X1 driver sends it to the hardware. The generated binary file is fed to InferX X1 through the runtime, which then takes the input data stream with the user-specified model and gives the […]
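To illustrate the flow described above, here is a minimal Python sketch of preparing a hardware-agnostic model before handing it to an edge compiler. The ONNX part uses the real onnx package; the InferX compile/run calls are not publicly documented, so they appear only as comments with invented names, and the model file name is illustrative.

```python
# Sanity-check a hardware-agnostic ONNX model before compiling it for an
# edge inference accelerator such as the InferX X1 (file name illustrative).
import onnx

model = onnx.load("yolov3.onnx")        # hardware-agnostic input model
onnx.checker.check_model(model)         # validate the graph before compiling
print("Inputs:", [i.name for i in model.graph.input])

# Hypothetical next steps (names invented, the real SDK API is not public):
#   x1_binary = inferx_compile(model, target="X1")   # compiler generates binary executable
#   runtime = inferx_runtime.load(x1_binary)         # runtime loads it on the X1
#   detections = runtime.run(camera_frame)           # driver streams each frame to hardware
```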

Qualcomm QCS610 micro SoM and devkit to power AI and ML smart cameras

Qualcomm QCS610 Development Board

Last July, we missed Qualcomm’s announcement of the QCS410 and QCS610 processors designed to bring “premium camera technology, including powerful artificial intelligence and machine learning features formerly only available to high-end devices, into mid-tier camera segments”. The new SoCs were recently brought to our attention by Lantronix, as the company has just introduced the Open-Q 610 micro system-on-module (μSOM) based on the Qualcomm QCS610 processor, as well as a development kit designed to bring such smart cameras to market. I was initially a bit confused by the product name, but needless to say, it is completely unrelated to the Qualcomm Snapdragon 610 announced over six years ago. Open-Q 610 micro system-on-module Open-Q 610 specifications: SoC – Qualcomm QCS610 CPU – Octa-core processor with 2x Kryo 460 Gold cores @ 2.2 GHz (Cortex-A76 class), and 6x Kryo 430 Silver low-power cores @ 1.8 GHz (Cortex-A55 class) GPU – Qualcomm Adreno 612 GPU @ […]

Flex Logix InferX X1 AI Inference Accelerator Takes on NVIDIA Jetson Xavier NX

InferX X1 AI Inference Accelerator

When it comes to AI inference accelerators, NVIDIA has captured the market, as drones, intelligent high-resolution sensors, network video recorders, portable medical devices, and other industrial IoT systems use the NVIDIA Jetson Xavier NX. This might change, as Flex Logix’s InferX X1 AI inference accelerator has been shown to outperform both the Jetson Xavier NX and the Tesla T4. During the Linley Fall Conference 2020, Flex Logix showcased the InferX X1 AI Inference Accelerator, its performance, and how it outperformed other edge inference chips. The company claims it is the most powerful edge inference coprocessor, offering high throughput, low latency, high accuracy, support for large models and megapixel images, and a small die suited to embedded computing devices at the edge. The estimated worst-case TDP (933 MHz, YOLOv3) is 13.5W. The coprocessor operates at INT8 or BF16 precision with a batch size of 1 for minimum latency. The nnMAX Reconfigurable Tensor Processor is the accelerator engine inside the edge inference coprocessor – InferX […]

Arm Ethos-U65 microNPU enables low-power AI inference on Cortex-A & Neoverse SoCs

Ethos-U65 - Cortex-M vs Cortex-A/Neoverse Diagrams

Arm introduced its very first microNPU (micro neural processing unit) for microcontrollers at the beginning of the year with the Arm Ethos-U55, designed for Cortex-M microcontrollers such as the Cortex-M55, and delivering 64 to 512 GOPS of AI inference performance, or up to a 480x increase in ML performance over Cortex-M CPU inference. The company has now unveiled an update with the Arm Ethos-U65 microNPU, which maintains the efficiency of the Ethos-U55 but enables neural network acceleration in higher-performance embedded devices powered by Arm Cortex-A and Arm Neoverse SoCs. Arm Ethos-U65 delivers up to 1 TOPS, and as seen in the diagram, enables features that cannot be handled by Ethos-U55, including object classification and real-time classification. Compared to the Ethos-N78 NPU, the new microNPU offers lower AI performance but significantly higher efficiency, although AFAIK this has not been quantified by Arm. The company says the development workflow remains the same with the use of the […]
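For context on that workflow, models targeting Ethos-U NPUs are typically optimized offline with Arm's Vela compiler before being deployed with the TensorFlow Lite (Micro) runtime and the Ethos-U driver. The sketch below assumes Vela is installed via pip (ethos-u-vela) and targets an Ethos-U65 configured with 256 MACs/cycle; file names are illustrative and the available options may differ between Vela versions.

```python
# Minimal sketch: compiling a quantized TFLite model for an Ethos-U NPU with
# Arm's Vela compiler (pip install ethos-u-vela). Check `vela --help` for the
# exact options supported by your Vela version.
import subprocess

subprocess.run(
    [
        "vela",
        "model_quant_int8.tflite",                # 8-bit quantized TFLite model
        "--accelerator-config", "ethos-u65-256",  # Ethos-U65, 256 MACs/cycle
        "--output-dir", "vela_out",               # optimized .tflite written here
    ],
    check=True,
)
# The optimized model in vela_out/ is then deployed with the TensorFlow Lite
# (Micro) runtime plus the Ethos-U driver on the target device.
```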

Gumstix Introduces CM4 to CM3 Adapter, Carrier Boards for Raspberry Pi Compute Module 4

CM4 to CM3 Adapter

Raspberry Pi Trading has just launched 32 different models of Raspberry Pi CM4 and CM4Lite systems-on-module, as well as the “IO board” carrier board. But the company has also worked with third parties, and Gumstix, an Altium company, has unveiled four different carrier boards for the Raspberry Pi Compute Module 4, as well as a convenient CM4 to CM3 adapter board that enables the use of the Raspberry Pi CM4 on all or most carrier boards designed for the Compute Module 3/3+. Raspberry Pi CM4 Uprev & UprevAI CM3 adapter board The Gumstix Raspberry Pi CM4 Uprev follows the Raspberry Pi Compute Module 3 form factor but includes two Hirose connectors for the Compute Module 4. The signals are simply routed from the Hirose connectors to the 200-pin SODIMM edge connector used with CM3. The Gumstix Raspberry Pi CM4 UprevAI is the same except it adds a Google Coral accelerator module. Gumstix Raspberry Pi CM4 Development Board Specifications: […]
