FOSSASIA Summit 2018 Schedule – March 22-25

FOSDEM, the “Free & Open Source Software Developers’ European Meeting”, takes place on the first weekend of February every year in Brussels, Belgium. It turns out there’s an event in Asia called FOSSASIA Summit that’s about to take place in Singapore on March 22-25. There are some differences however: while FOSDEM is entirely free to attend, FOSSASIA requires attendees to pay an entry fee for the talks, although free tickets are available for the exhibition hall and career fair. There are also fewer sessions than at FOSDEM, but still twelve different tracks:

- Artificial Intelligence
- Blockchain
- Cloud, Container, DevOps
- Cybersecurity
- Database
- Kernel & Platform
- Open Data, Internet Society, Community
- Open Design, IoT, Hardware, Imaging
- Open Event Solutions
- Open Source in Business
- Science Tech
- Web and Mobile

Since the event is spread out over four days, it should be easier to attend the specific sessions you are interested in. I’ve […]

Android P Developer Preview Released with Indoor Positioning, Display Notch Support, HDR VP9 Video, and More


Google has just announced the release of the first developer preview of the Android P mobile operating system, as the company is looking for feedback from developers, who can use the official Android emulator, as well as images for Pixel, Pixel XL, Pixel 2, and Pixel 2 XL devices for testing. Google will take comments from developers into account before finalizing the APIs and features. That won’t be the only preview however, as the company plans to release further developer previews before the stable release at the end of the year, and Google aims to reveal more at Google I/O 2018 in May. Some of the interesting changes and new features found in Android P so far:

- Indoor positioning with Wi-Fi RTT (Round Trip Time), also known as the 802.11mc WiFi protocol
- Display cutout support for some of the new phones with a notch
- Improved messaging notifications, for example highlighting who is […]
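Wi-Fi RTT ranging works by timing a round-trip packet exchange with an access point; Android P exposes it through the WifiRttManager API, but the underlying math is simple. A minimal sketch of the time-to-distance conversion (the function name is illustrative, not part of the Android API):

```python
# Sketch of the math behind Wi-Fi RTT (802.11mc) ranging:
# distance is derived from the measured round-trip time of a
# packet exchange with the access point. Not the Android API.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def rtt_to_distance(rtt_seconds: float) -> float:
    """Convert a measured round-trip time to a one-way distance in metres."""
    return SPEED_OF_LIGHT * rtt_seconds / 2.0

# A 100 ns round trip corresponds to roughly 15 m to the access point.
print(f"{rtt_to_distance(100e-9):.1f} m")
```

With ranges to three or more access points at known positions, an app can then trilaterate the device's indoor position.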

Imagination Releases PowerVR CLDNN Neural Network SDK and Image for Acer Chromebook R13

Last month, Imagination Technologies released their PowerVR CLDNN SDK, an AI-oriented API that leverages OpenCL support in the PowerVR GX6250 GPU in order to create network layers for constructing and running a neural network on PowerVR hardware. Eventually the SDK will support the PowerVR Series2NX Neural Network Accelerator, but while waiting for the hardware, the company has provided a firmware image that runs only on the Mediatek MT8173 based Acer Chromebook R13. The SDK includes a demo taking a live camera feed to identify the object(s) the camera is pointing at, using known network models such as AlexNet, GoogLeNet, VGG-16, or SqueezeNet. All models are Caffe models trained against the ImageNet dataset, and a benchmark function is included within the demo. Besides simply playing with the demos, you’ll be able to study the source code to check out various helper functions such as file loading, dynamic library initialisation, and OpenCL context management, and read documentation such as the PowerVR CLDNN reference […]
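Whatever the hardware, the final stage of an ImageNet-style classification demo like this one boils down to turning the network's raw output scores into probabilities and ranking them. A minimal sketch of that softmax/top-5 step, independent of the CLDNN API (labels and scores below are made-up examples):

```python
import numpy as np

# Sketch of the last stage of an image classification demo:
# convert raw network outputs (logits) into probabilities and
# report the top-5 labels. Labels/scores are invented examples,
# not data from the PowerVR CLDNN SDK.

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D score vector."""
    shifted = logits - logits.max()
    exps = np.exp(shifted)
    return exps / exps.sum()

def top_k(logits: np.ndarray, labels: list, k: int = 5):
    """Return the k (label, probability) pairs with the highest scores."""
    probs = softmax(logits)
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

labels = ["cat", "dog", "car", "mug", "keyboard", "tree"]
scores = np.array([2.0, 4.5, 1.0, 0.5, 3.2, 0.1])
for label, p in top_k(scores, labels):
    print(f"{label}: {p:.3f}")
```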

Synthetic Sensors Combine Multiple Sensors and Machine Learning for General-Purpose Sensing

Sensors can be used to get specific data, for example temperature & humidity or light intensity, or you can combine an array of sensors and leverage sensor fusion to combine data from the sensors to improve measurement accuracy or detect more complex situations. Gierad Laput, a Ph.D. student at Carnegie Mellon University, went a little further with what he (and the others he worked with) call Synthetic Sensors. Their USB-powered hardware board includes several sensors, whose data can then be used, after training through machine learning algorithms, to detect specific events in a room, car, workshop, etc… List of sensors on the board (with the frequency at which data is gathered):

- Panasonic GridEye AMG8833 IR thermal camera (10 Hz)
- TCS34725 color to digital converter (10 Hz)
- MAG3110F magnetometer (10 Hz)
- BME280 temperature & humidity sensor, barometer (10 Hz)
- MPU6500 accelerometer (4 kHz)
- RSSI data out of 2.4 GHz WiFi […]
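The general idea behind this kind of event detection is to reduce high-rate raw streams to compact features (e.g., the spectral content of the accelerometer signal) and feed those to a trained classifier. A toy sketch of the concept, with invented data and a simple nearest-centroid classifier rather than the paper's actual pipeline:

```python
import numpy as np

# Toy sketch: extract FFT-magnitude features from a high-rate sensor
# stream, then classify an event by nearest centroid. Event names
# and signals are invented for illustration only.

def spectral_features(samples: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Average FFT magnitudes into a small, fixed-size feature vector."""
    mags = np.abs(np.fft.rfft(samples))
    return np.array([chunk.mean() for chunk in np.array_split(mags, n_bins)])

def classify(features: np.ndarray, centroids: dict) -> str:
    """Pick the event whose training centroid is closest (Euclidean)."""
    return min(centroids, key=lambda name: np.linalg.norm(features - centroids[name]))

rng = np.random.default_rng(0)
t = np.arange(4096) / 4000.0                 # 4 kHz accelerometer, ~1 s window
motor = np.sin(2 * np.pi * 60 * t)           # e.g. a motor humming at 60 Hz
knock = np.zeros_like(t); knock[::400] = 5.0 # e.g. periodic knocking impulses

centroids = {
    "motor": spectral_features(motor),
    "knock": spectral_features(knock),
}
noisy_motor = motor + 0.1 * rng.standard_normal(t.size)
print(classify(spectral_features(noisy_motor), centroids))
```

In the real system, many such featurized streams are combined and fed to trained models to recognize events like "faucet running" or "door knock".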

Arm’s Project Trillium Combines Machine Learning and Object Detection Processors with Neural Network Software

We’ve already seen Neural Processing Units (NPU) added to Arm processors such as Huawei Kirin 970 or Rockchip RK3399Pro in order to handle the tasks required by machine learning & artificial intelligence in a faster or more power efficient way. Arm has now announced Project Trillium, offering two A.I. processors, one ML (Machine Learning) processor and one OD (Object Detection) processor, as well as open source Arm NN (Neural Network) software to leverage the ML processor, as well as Arm CPUs and GPUs. Arm ML processor key features and performance:

- Fixed function engine for the best performance & efficiency for current solutions
- Programmable layer engine for future-proofing the design
- Tuned for advanced geometry implementations
- On-board memory to reduce external memory traffic
- Performance / Efficiency – 4.6 TOP/s with an efficiency of 3 TOPs/W for mobile devices and smart IP cameras
- Scalable design usable for lower requirements IoT (20 […]
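The two headline numbers together imply a power budget for the ML processor; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of Arm's quoted ML processor figures:
# peak throughput divided by efficiency gives the implied power draw.
peak_tops = 4.6    # TOP/s (trillion operations per second)
efficiency = 3.0   # TOPs/W
implied_watts = peak_tops / efficiency
print(f"{implied_watts:.2f} W")  # roughly 1.5 W at full throughput
```

A draw of about 1.5 W is consistent with the mobile and smart IP camera use cases Arm targets.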

Qualcomm Developer’s Guide to Artificial Intelligence (AI)

Artificial intelligence comes with many terms like ML (Machine Learning), DL (Deep Learning), CNN (Convolutional Neural Network), ANN (Artificial Neural Networks), etc., and is currently made possible via frameworks such as TensorFlow, Caffe2, or ONNX (Open Neural Network Exchange). If you have not looked into the details, all those terms may be confusing, so Qualcomm Developer Network has released a 9-page e-Book entitled “A Developer’s Guide to Artificial Intelligence (AI)” that gives an overview of all the terms, what they mean, and how they differ. For example, they explain that a key difference between Machine Learning and Deep Learning is that with ML, the input features of the CNN are determined by humans, while DL requires less human intervention. The book also covers how AI is moving to the edge / on-device for low latency and better reliability, instead of relying on the cloud. It also quickly goes through the workflow using Snapdragon […]
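The ML-vs-DL distinction the guide draws can be made concrete in a few lines: in classical ML a human engineer decides which features describe the raw data, whereas a deep network consumes the raw input and learns its own features during training. A toy illustration (not taken from the e-book):

```python
# Toy contrast between classical ML and deep learning pipelines.
# In classical ML a human picks the features; in DL the model
# learns features from raw input. Everything here is illustrative.

raw_signal = [0.1, 0.9, 0.8, 0.2, 0.1, 0.95, 0.85, 0.15]

# Classical ML: hand-crafted features chosen by a human expert,
# which are then fed to a (typically simpler) learning algorithm.
def hand_crafted_features(signal):
    return {
        "mean": sum(signal) / len(signal),
        "peak": max(signal),
        "low_count": sum(1 for x in signal if x < 0.2),
    }

features = hand_crafted_features(raw_signal)
print(features)

# Deep learning: the raw signal would go straight into the network
# (e.g. deep_model.predict(raw_signal)) with no manual feature step;
# the network's early layers learn the equivalent of the above.
```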

AWS DeepLens is a $249 Deep Learning Video Camera for Developers

Amazon Web Services (AWS) has launched DeepLens, the “world’s first deep learning enabled video camera for developers”. Powered by an Intel Atom X5 processor with 8GB RAM, and featuring a 4MP (1080p) camera, the fully programmable system runs Ubuntu 16.04, and is designed to expand developers’ deep learning skills, with Amazon providing tutorials, code, and pre-trained models. AWS DeepLens specifications:

- SoC – Intel Atom X5 processor with Intel Gen9 HD graphics (106 GFLOPS of compute power)
- System Memory – 8GB RAM
- Storage – 16GB eMMC flash, micro SD slot
- Camera – 4MP (1080p) camera using MJPEG, H.264 encoding
- Video Output – micro HDMI port
- Audio – 3.5mm audio jack, and HDMI audio
- Connectivity – Dual band WiFi
- USB – 2x USB 2.0 ports
- Misc – Power button; camera, WiFi and power status LEDs; reset pinhole
- Power Supply – TBD
- Dimensions – 168 x 94 x 47 mm
- Weight – 296.5 grams

The […]

Bolt IoT Platform Combines ESP8266, Mobile Apps, Cloud, and Machine Learning (Crowdfunding)

There are plenty of hardware platforms to implement IoT projects now, but in many cases a full integration to get data from sensors to the cloud requires going through a long list of instructions. Bolt IoT, an India and US based startup, has taken up the task of simplifying IoT projects with their IoT platform comprised of the ESP8266 based Bolt WiFi module, a cloud service with machine learning capabilities, and mobile apps for Android and iOS. Bolt IoT module hardware specifications:

- Wireless Module – Ai-Thinker ESP-12 module based on ESP8266 WiSoC
- Connectivity – 802.11 b/g/n WiFi secured by WPA2
- USB – 1x micro USB port for power and programming
- Expansion – 4-pin female header and 7-pin female header with 5x digital I/Os, 1x analog I/O, and UART
- Misc – Cloud connection LED

The hardware is not the most interesting part of Bolt IoT, since it offers similar functionality to other ESP8266 boards. […]
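Cloud-connected modules like this are typically driven through a simple HTTP API keyed to your account and device. A hedged sketch of what composing such a REST request could look like; the base URL, parameter names, and device name below are invented for illustration, not Bolt IoT's documented endpoints:

```python
from urllib.parse import urlencode

# Hypothetical sketch of driving a cloud-connected WiFi module over a
# REST-style API. The base URL, operation names, and parameters are
# invented for illustration and are NOT Bolt IoT's documented API.

BASE_URL = "https://cloud.example.com/remote"

def build_request(api_key: str, operation: str, **params) -> str:
    """Compose the GET URL for one device operation."""
    query = urlencode(params)
    return f"{BASE_URL}/{api_key}/{operation}?{query}"

url = build_request("MY_API_KEY", "analogRead", pin="A0", deviceName="DEVICE1")
print(url)
```

The appeal of this model is that reading a sensor or toggling a pin becomes a single authenticated HTTP call, with no firmware work on the module itself.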

UP 7000 x86 SBC