

Showing posts with the label edge-to-ai

Using Raspberry Pi 3B+ with Apache NiFi MiNiFi and Google Coral Accelerator and Pimoroni Inky Phat

Architecture

Introduction

First we need to unbox our new goodies. The Inky pHAT is an awesome E-Ink display with low power usage, and its image stays visible even after shutdown! Next I added a new Google Coral Edge TPU ML Accelerator USB coprocessor to a new Raspberry Pi 3B+. It was easy to integrate and get up and running. Let's unbox this beautiful device (but be careful: when it runs it can get really hot, and there is a warning about this in the instructions). So I run it on top of an aluminum case with a big fan on it.

Pimoroni Inky pHAT

It is pretty easy to set up, and Pimoroni provides a robust Python library for writing to the E-Ink display. You can see an example screen here: https://github.com/pimoroni/inky

Pimoroni Inky pHAT ePaper eInk Display in Red: https://shop.pimoroni.com/products/inky-phat
https://github.com
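To give a flavor of how writing to the display works, here is a minimal sketch using Pimoroni's inky library together with Pillow; the message text and coordinates are just placeholders:

```python
# A minimal sketch of writing to the Inky pHAT, assuming Pimoroni's
# "inky" library and Pillow are installed (pip install inky pillow).
from inky import InkyPHAT
from PIL import Image, ImageDraw

# "red" matches the red pHAT variant shown above.
inky_display = InkyPHAT("red")
inky_display.set_border(inky_display.WHITE)

# Draw onto a palette-mode image the same size as the display.
img = Image.new("P", (inky_display.WIDTH, inky_display.HEIGHT))
draw = ImageDraw.Draw(img)
draw.text((10, 10), "Hello from the edge!", inky_display.RED)  # placeholder text

# Push the image to the e-ink panel; it stays visible after power-off.
inky_display.set_image(img)
inky_display.show()
```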

Edge to AI: Apache Spark, Apache NiFi, Apache NiFi MiNiFi, Cloudera Data Science Workbench Example

Use Case

IoT devices with sensors and cameras.

Overview

In this, the third post of the CDSW series, we build on using CDSW to classify images with a Python Apache MXNet model. In this use case we receive edge data from devices running MiNiFi agents that collect sensor data and images and also run edge analytics with TensorFlow. An Apache NiFi server collects this data, with full data lineage, via HTTP calls from the device(s). We then filter, transform, merge and route the sensor data, image data, deep learning analytics data and metadata to different data stores. As part of the flow we upload our images to a cloud-hosted FTP server (this could be S3 or any media store anywhere) and call a CDSW model from Apache NiFi via REST, getting the model results back as JSON. We also store our sensor data in Parquet files in HDFS. We then trigger a PySpark job in CDSW via its API from Apache NiFi and check that job's status. We store the status result data in Parquet as well, for PySpark SQL
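Two short sketches illustrate the shape of these calls. First, the REST call to the deployed CDSW model (made with NiFi's InvokeHTTP processor in the actual flow). CDSW model serving expects a JSON envelope with an access key and a request payload; the host, access key, and request fields below are placeholders, and the exact endpoint URL depends on your CDSW deployment:

```python
# A hedged sketch of the REST call NiFi makes to a CDSW model endpoint;
# the host, access key, and request payload are placeholders.
import requests

CDSW_MODEL_URL = "https://modelservice.cdsw.example.com/model"  # placeholder endpoint
MODEL_ACCESS_KEY = "your-model-access-key"  # issued when the model is deployed

response = requests.post(
    CDSW_MODEL_URL,
    json={
        "accessKey": MODEL_ACCESS_KEY,
        "request": {"image": "ftp://media-store/images/example.jpg"},  # hypothetical field
    },
)
# The model results come back as JSON, which NiFi routes onward in the flow.
print(response.json())
```

And a minimal sketch of querying the Parquet sensor data with PySpark SQL; the HDFS path and column name are hypothetical:

```python
# Query the sensor Parquet files in HDFS with PySpark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sensor-query").getOrCreate()
sensors = spark.read.parquet("hdfs:///data/sensors")  # placeholder path
sensors.createOrReplaceTempView("sensors")
spark.sql("SELECT AVG(temperature) FROM sensors").show()  # hypothetical column
```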