Intel’s Movidius Neural Compute Stick Supports Raspberry Pi 3 Board

Last month, Intel introduced the Movidius Neural Compute Stick to accelerate applications such as object recognition at low power and offline, i.e. without relying on the cloud. While not much information was available at the time, the minimum requirements for the host machine were an x86_64 computer running Ubuntu 16.04 with at least 1GB RAM and 4GB storage.

So I understood the stick would only work with hosts based on 64-bit Intel or AMD processors, and that ARM development boards would not be an option. But today, I found that Movidius has uploaded a new video showing a Python-based object recognition demo with the Neural Compute Stick connected to a Raspberry Pi 3 board. You just need to add a USB camera, copy the ncapi directory from the SDK installed on your Ubuntu 16.04 development machine to the Debian Jessie installation on the RPi 3 board, install the relevant .deb packages from that directory as well as some required packages (e.g. GStreamer), and run one of the demos such as stream_infer as explained in the video.
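For reference, the core of a demo like stream_infer boils down to a handful of calls into the SDK’s Python bindings: open the stick, load a precompiled graph file, then push camera frames and read back classification results. The snippet below is only a minimal sketch rather than the actual demo code; it assumes the mvnc Python module from the ncapi directory, OpenCV for camera capture, and a GoogLeNet graph file compiled on the Ubuntu development machine, and the exact function names may differ between SDK releases.

# Minimal sketch of a Neural Compute Stick inference loop (not the official demo code)
import cv2
import numpy
from mvnc import mvncapi as mvnc

# Find and open the first Neural Compute Stick on the USB bus
devices = mvnc.EnumerateDevices()
if not devices:
    raise SystemExit("No Neural Compute Stick found")
device = mvnc.Device(devices[0])
device.OpenDevice()

# Load a graph file precompiled on the Ubuntu development machine
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

# Grab frames from the USB camera and run inference on the stick
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # GoogLeNet-style preprocessing: resize to 224x224 and convert to FP16
    # (mean subtraction and channel ordering omitted for brevity)
    img = cv2.resize(frame, (224, 224)).astype(numpy.float16)
    graph.LoadTensor(img, "frame")
    output, _ = graph.GetResult()
    print("Top class:", int(output.argmax()), "score:", float(output.max()))

graph.DeallocateGraph()
device.CloseDevice()
cap.release()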

Since all computing is supposed to happen in the stick, I’d assume this should also work on other ARM development boards with Debian and GStreamer support. I understand you’ll still need an Ubuntu PC to compile neural networks with the toolkit, but you can then run inference on lower-end ARM hardware.
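If you want to check whether a given ARM board can actually talk to the stick before porting a full demo, a quick enumeration test is enough. Again, this is just a sketch under the same assumptions as above (the mvnc Python bindings copied over from the SDK’s ncapi directory):

# Quick check that the SDK can see the stick on this board (sketch, not official sample code)
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()
if devices:
    print("Found %d Neural Compute Stick(s)" % len(devices))
else:
    print("No Neural Compute Stick detected - check the USB connection")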




8 Replies to “Intel’s Movidius Neural Compute Stick Supports Raspberry Pi 3 Board”

  1. I love the Raspberry Pi 3, but it is not meant for serious machine learning, especially when combined with computer vision. The Movidius is quite inferior as well. Combining two inferior products to get a slightly less inferior result is not the best way to go. If you have a decent rig with an i3/i5/i7, just add a GTX 1060 to it. That setup will be leagues better than an RPi 3 plus Movidius.

    As for the Movidius, there’s a running joke where people are betting on how long it will take for Intel to kill it as they did with their IoT stuff. Most estimates are between 6 and 24 months.

    1. @halherta
      You have obviously missed the point of the product. You can’t strap something like that onto a system with power/heat/size/weight restrictions.

      Of course a bicycle generator is inferior to a wind turbine, but that’s OK, because they fulfill different needs.

  2. @cnxsoft: I would think that once the work with the SDK is finished, the final application can run on anything equipped with a USB port to connect the Movidius stick. And after taking the 3 minutes to look through https://youtu.be/4xud1T9DaFY it looks obvious to me why a beefy x64 host is needed for the initial steps (CPU, I/O and even a good Internet connection needed). Watching the Movidius integration in the Phantom 4 drone from DJI was also quite interesting 🙂

  3. I installed the SDK on a new MeLE Celeron-based mini PC and it works fine.
    I copied the graph files and the .debs onto my RPi 2B and launched the Python stream example using my C920 camera.
    It can recognize basic stuff at a pretty good speed (estimated 10-15 fps), just like running Caffe on an i5-class laptop.

    My use case is quadcopter object tracking and pose commands, so this little stick is really an interesting add-on to an onboard low-power (watts & Whetstones) companion computer.

  4. The Movidius Neural Compute Stick with RPi3 is about 3 times faster than the GPU on the Raspberry Pi when using the 12 cores of the stick for object recognition. YouTube video: https://www.youtube.com/watch?v=v9_539oYufA

    Description:

    Comparison of deep learning inference acceleration by Movidius’ Neural Compute Stick (MvNCS) and by Idein’s software which uses Raspberry Pi’s GPU (VideoCore IV) without any extra computing resources.

    Movidius’ demo runs GoogLeNet with 16-bit floating point precision. Average inference time is 108 ms.
    We used MvNC SDK 1.07.07 and their official demo script without any changes. (ncapi/py_examples/stream_infer/stream_infer.py)
    It seems something is wrong with the inference results.
    We recompiled the graph file with the -s12 option to use 12 SHAVE vector processors simultaneously.

    Idein’s demo also runs GoogLeNet, but with 32-bit floating point precision. Average inference time is 320 ms.

  5. @crashoverride
    Do you realize that the video you use for your Intel bashing is provided by ‘Idein Inc’, comparing Idein’s own commercial deep learning solution with unoptimized demo code from a competitor (even mentioning ‘It seems something is wrong with the inference results’)? The competitor is a fabless semiconductor company called ‘Movidius Ltd’ (recently bought by Intel)…

