Scanning Documents into Data Lakes via Tesseract, MQTT, Python, JSON, Records, TensorFlow, OpenCV and Apache NiFi

There are many awesome open source tools available to integrate with your Big Data Streaming flows.
Take a look at these articles for installation and why the new version of Tesseract is different.
I am officially recommending Python 3.6 or newer. Please don't use Python 2.7 if you don't have to. Friends don't let friends use old Python.
Tesseract 4 with Deep Learning
For installation on a Mac Laptop:
  brew install tesseract --HEAD
  pip3.6 install pytesseract
  brew install leptonica
Note: if you have tesseract already, you may need to uninstall and unlink it first with brew. If you don't use brew, you can install another way.
Summary
  1. Execute run.sh, which runs the pytesstest.py script (https://github.com/tspannhw/nifi-tesseract-python/blob/master/pytesstest.py).
  2. It sends an MQTT message containing the OCR text and some other attributes in JSON format to the tesseract topic on the specified MQTT broker.
  3. Apache NiFi reads from this topic via ConsumeMQTT.
  4. The flow checks that the payload is valid JSON via RouteOnContent.
  5. We run MergeRecord to combine many JSON records into one big Apache Avro file.
  6. Then we run ConvertAvroToORC to make a fast Apache ORC file for storage.
  7. Then we store it in HDFS via PutHDFS.
Running The Python Script
You could also hook this up to a scanner or point it at a directory, and schedule it to run every 30 seconds or so. I had it hooked up to a local Apache NiFi instance to schedule runs. It can also be run by the MiNiFi Java agent or the MiNiFi C++ agent, or on demand if you wish.
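If you would rather not rely on NiFi or MiNiFi for scheduling, a plain Python loop can stand in. This is a minimal sketch; the run_every helper is illustrative and not part of the repo:

```python
import time

def run_every(task, interval=30, iterations=None):
    """Call task() repeatedly, sleeping `interval` seconds between runs.
    iterations=None loops forever, mimicking a NiFi/MiNiFi timer."""
    count = 0
    while iterations is None or count < iterations:
        task()
        count += 1
        if iterations is None or count < iterations:
            time.sleep(interval)
    return count

# e.g. run_every(lambda: subprocess.run(["python3", "pytesstest.py"]), interval=30)
```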
Sending MQTT Messages From Python
import paho.mqtt.client as mqtt

# MQTT
client = mqtt.Client()
client.username_pw_set("user", "pass")
client.connect("server.server.com", 17769, 60)
client.publish("tesseract", payload=json_string, qos=0, retain=True)
You will need to run: pip3 install paho-mqtt
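Before the publish call above, the script assembles json_string. Here is a hedged sketch of how that payload might be built; the field names mirror the schema later in this article, but the stubbed values (te, battery, cpu, diskusage, memory) are placeholders for what the real script reads from the device, e.g. with psutil:

```python
import json
import socket
import time
import uuid

def build_payload(text, imgname):
    """Assemble a message with the same fields as the tesseract schema.
    Device metrics are stubbed; the real script gathers them live."""
    now = time.time()
    return {
        "text": text,                       # OCR output from Tesseract
        "imgname": imgname,                 # path of the scanned image
        "host": socket.gethostname(),
        "end": str(now),
        "te": "0.0",                        # elapsed OCR time (stub)
        "battery": 100,                     # stub
        "systemtime": time.strftime("%m/%d/%Y %H:%M:%S"),
        "cpu": 0.0,                         # stub
        "diskusage": "0 MB",                # stub
        "memory": 0.0,                      # stub
        "id": time.strftime("%Y%m%d%H%M%S") + "_" + str(uuid.uuid4()),
    }

json_string = json.dumps(build_payload("sample OCR text", "images/example.jpg"))
# json_string is what client.publish("tesseract", payload=json_string, ...) sends
```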
Create the HDFS Directory
  hdfs dfs -mkdir -p /tesseract

Create the External Hive Table (DDL Built by NiFi)
CREATE EXTERNAL TABLE IF NOT EXISTS tesseract (`text` STRING, imgname STRING, host STRING, `end` STRING, te STRING, battery INT, systemtime STRING, cpu DOUBLE, diskusage STRING, memory DOUBLE, id STRING) STORED AS ORC
LOCATION '/tesseract';

This DDL is a side effect: it is built by our ORC conversion and HDFS storage steps.
You could run that CREATE script in Hive View 2, Beeline or another Apache Hive JDBC/ODBC tool. I used Apache Zeppelin since I am going to be doing queries there anyway.

Let's Ingest Our Captured Images, Process Them with Apache Tika and TensorFlow, and Grab the Metadata
Consume MQTT Records and Store in Apache Hive
Let's look at other fields in Zeppelin
Let's Look at Our Records in Apache Zeppelin via a SQL Query (SELECT * FROM tesseract)
ConsumeMQTT: Give me all the records from the tesseract topic on our MQTT broker. This isolates us from our ingest clients, which could number 100,000 devices.
MergeRecord: Merge all the JSON records sent via MQTT into one big Avro file.
ConvertAvroToORC: Converts our merged Avro file to ORC.
PutHDFS
Tesseract Example Schema in Hortonworks Schema Registry
TIP: You can generate your schema with InferAvroSchema. Do that once, copy it and paste into Schema Registry. Then you can remove that step from your flow.
The Schema Text
{
  "type": "record",
  "name": "tesseract",
  "fields": [
    {
      "name": "text",
      "type": "string",
      "doc": "Type inferred from '\"cgi cctong aiternacrety, pou can acces the complete Pro\\nLance repesiiry from eh Provenance mens: The Provenance\\n‘emu inchades the Date/Time, Actontype, the Unsque Fowie\\nTD and other sata. Om the ar it is smal exci i oe:\\n‘ick chs icon, and you get the flowin On the right, war\\n‘cots like three inthe cic soemecaed gether Liege:\\n\\nLineage ts visualined as « lange direcnad sqycie graph (DAG) char\\nSrones the seeps 1m she Gow where modifications oF routing ‘oot\\nplace on the Aewiike. Righe-iieit « step lp the Lineage s view\\nSetusls aboot the fowtle at that step ar expand the ow to ander:\\nScand where & was potentially domed frum. Af the very bottom\\nleft of the Lineage Oi a slider wath a play button to play the pro\\n“sing flow (with scaled ame} and understand where tbe owtise\\nSpent the meat Game of at whch PORN get muted\\n\\naide the Bowtie dealin, you cam: finn deed analy of box\\n\\ntern\\n=\"'"
    },
    {
      "name": "imgname",
      "type": "string",
      "doc": "Type inferred from '\"images/tesseract_image_20180613205132_c14779b8-1546-433e-8976-ddb5bfc5f978.jpg\"'"
    },
    {
      "name": "host",
      "type": "string",
      "doc": "Type inferred from '\"HW13125.local\"'"
    },
    {
      "name": "end",
      "type": "string",
      "doc": "Type inferred from '\"1528923095.3205361\"'"
    },
    {
      "name": "te",
      "type": "string",
      "doc": "Type inferred from '\"3.7366552352905273\"'"
    },
    {
      "name": "battery",
      "type": "int",
      "doc": "Type inferred from '100'"
    },
    {
      "name": "systemtime",
      "type": "string",
      "doc": "Type inferred from '\"06/13/2018 16:51:35\"'"
    },
    {
      "name": "cpu",
      "type": "double",
      "doc": "Type inferred from '22.8'"
    },
    {
      "name": "diskusage",
      "type": "string",
      "doc": "Type inferred from '\"113759.7 MB\"'"
    },
    {
      "name": "memory",
      "type": "double",
      "doc": "Type inferred from '69.4'"
    },
    {
      "name": "id",
      "type": "string",
      "doc": "Type inferred from '\"20180613205132_c14779b8-1546-433e-8976-ddb5bfc5f978\"'"
    }
  ]
}
The above schema was generated by Infer Avro Schema in Apache NiFi.
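As a cheap sanity check before records enter the flow, you could verify that a Python dict carries the schema's fields and types. SCHEMA_FIELDS below is hand-copied from the schema above; real Avro validation still happens in NiFi's record readers, so this is just an illustrative client-side guard:

```python
# Field names and Python types matching the tesseract Avro schema above.
SCHEMA_FIELDS = {
    "text": str, "imgname": str, "host": str, "end": str, "te": str,
    "battery": int, "systemtime": str, "cpu": float, "diskusage": str,
    "memory": float, "id": str,
}

def validate(record):
    """Return True if the record has every schema field with the right type."""
    return all(isinstance(record.get(name), ftype)
               for name, ftype in SCHEMA_FIELDS.items())

sample = {"text": "hello", "imgname": "images/a.jpg", "host": "HW13125.local",
          "end": "1528923095.3", "te": "3.73", "battery": 100,
          "systemtime": "06/13/2018 16:51:35", "cpu": 22.8,
          "diskusage": "113759.7 MB", "memory": 69.4, "id": "20180613_x"}
```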
Image Analytics Results
{
  "tiffImageWidth" : "1280",
  "ContentType" : "image/jpeg",
  "JPEGImageWidth" : "1280 pixels",
  "FileTypeDetectedFileTypeName" : "JPEG",
  "tiffBitsPerSample" : "8",
  "ThumbnailHeightPixels" : "0",
  "label4" : "book jacket",
  "YResolution" : "1 dot",
  "label5" : "pill bottle",
  "ImageWidth" : "1280 pixels",
  "JFIFYResolution" : "1 dot",
  "JPEGImageHeight" : "720 pixels",
  "filecreationTime" : "2018-06-13T17:24:07-0400",
  "JFIFThumbnailHeightPixels" : "0",
  "DataPrecision" : "8 bits",
  "XResolution" : "1 dot",
  "ImageHeight" : "720 pixels",
  "JPEGNumberofComponents" : "3",
  "JFIFXResolution" : "1 dot",
  "FileTypeExpectedFileNameExtension" : "jpg",
  "JPEGDataPrecision" : "8 bits",
  "FileSize" : "223716 bytes",
  "probability4" : "1.74%",
  "tiffImageLength" : "720",
  "probability3" : "3.29%",
  "probability2" : "6.13%",
  "probability1" : "81.23%",
  "FileName" : "apache-tika-2858986094088526803.tmp",
  "filelastAccessTime" : "2018-06-13T17:24:07-0400",
  "JFIFThumbnailWidthPixels" : "0",
  "JPEGCompressionType" : "Baseline",
  "JFIFVersion" : "1.1",
  "filesize" : "223716",
  "FileModifiedDate" : "Wed Jun 13 17:24:27 -04:00 2018",
  "Component3" : "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert",
  "Component1" : "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert",
  "Component2" : "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert",
  "NumberofTables" : "4 Huffman tables",
  "FileTypeDetectedFileTypeLongName" : "Joint Photographic Experts Group",
  "fileowner" : "tspann",
  "filepermissions" : "rw-r--r--",
  "JPEGComponent3" : "Cr component: Quantization table 1, Sampling factors 1 horiz/1 vert",
  "JPEGComponent2" : "Cb component: Quantization table 1, Sampling factors 1 horiz/1 vert",
  "JPEGComponent1" : "Y component: Quantization table 0, Sampling factors 2 horiz/2 vert",
  "FileTypeDetectedMIMEType" : "image/jpeg",
  "NumberofComponents" : "3",
  "HuffmanNumberofTables" : "4 Huffman tables",
  "label1" : "menu",
  "XParsedBy" : "org.apache.tika.parser.DefaultParser, org.apache.tika.parser.ocr.TesseractOCRParser, org.apache.tika.parser.jpeg.JpegParser",
  "label2" : "web site",
  "label3" : "crossword puzzle",
  "absolutepath" : "/Volumes/seagate/opensourcecomputervision/images/",
  "filelastModifiedTime" : "2018-06-13T17:24:07-0400",
  "ThumbnailWidthPixels" : "0",
  "filegroup" : "staff",
  "ResolutionUnits" : "none",
  "JFIFResolutionUnits" : "none",
  "CompressionType" : "Baseline",
  "probability5" : "1.12%"
}
This is built using a combination of Apache Tika, TensorFlow and other metadata analysis processors.

Creating An Email Bot in Apache NiFi (Consume and Send Email)



See:  https://community.cloudera.com/t5/Community-Articles/Creating-An-Email-Bot-in-Apache-NiFi/ta-p/249131


Some people say I must have a bot to read and reply to email at all crazy hours of the day. An awesome email assistant? Well, I decided to prototype it.

This is the first piece. After this, I will add some Spark machine learning to intelligently reply to emails from a list of pretrained responses. With supervised learning, it will learn which emails to send to whom, based on subject, sender, body content, attachments, time of day, sender domain and many other variables.

For now, it just reads some emails and checks for a hard-coded subject.

I could use this to trigger other processes, such as running a batch Spark job.

Since most people send and use HTML email (that's what Outlook, Outlook.com and Gmail do), I will send and receive HTML emails to make it look more legit.

I could also run my fortune script and return that as my email content, making me sound wise. Or I could pull in a random selection of tweets about Hadoop, or even recent news, to make the email very current and fresh.

Snippet Example of a Mixed Content Email Message (Attachments Removed to Save Space)

Return-Path: <x@example.com>
Delivered-To: nifi@example.com
Received: from x.x.net
    by x.x.net (Dovecot) with LMTP id +5RhOfCcB1jpZQAAf6S19A
    for <nifi@example.com>; Wed, 19 Oct 2016 12:19:13 -0400
Return-path: <x@example.com>
Envelope-to: nifi@example.com
Delivery-date: Wed, 19 Oct 2016 12:19:13 -0400
Received: from [x.x.x.x] (helo=smtp.example.com)
    by x.example.com with esmtp (Exim)
    id 1bwtaC-0006dd-VQ
    for nifi@example.com; Wed, 19 Oct 2016 12:19:12 -0400
Received: from x.x.net ([x.x.x.x])
    by x with bizsmtp
    id xUKB1t0063zlEh401UKCnK; Wed, 19 Oct 2016 12:19:12 -0400
X-EN-OrigIP: 64.78.52.185
X-EN-IMPSID: xUKB1t0063zlEh401UKCnK
Received: from x.x.net (localhost [127.0.0.1])
    (using TLSv1 with cipher AES256-SHA (256/256 bits))
    (No client certificate requested)
    by emg-ca-1-1.localdomain (Postfix) with ESMTPS id BEE9453F81
    for <nifi@example.com>; Wed, 19 Oct 2016 09:19:10 -0700 (PDT)
Subject: test
MIME-Version: 1.0
x-echoworx-msg-id: e50ca00a-edc5-4030-a127-f5474adf4802
x-echoworx-emg-received: Wed, 19 Oct 2016 09:19:10.713 -0700
x-echoworx-message-code-hashed: 5841d9083d16bded28a3c4d33bc505206b431f7f383f0eb3dbf1bd1917f763e8
x-echoworx-action: delivered
Received: from 10.254.155.15 ([10.254.155.15])
          by emg-ca-1-1 (JAMES SMTP Server 2.3.2) with SMTP ID 503
          for <nifi@example.com>;
          Wed, 19 Oct 2016 09:19:10 -0700 (PDT)
Received: from x.x.net (unknown [x.x.x.x])
    (using TLSv1 with cipher AES256-SHA (256/256 bits))
    (No client certificate requested)
    by emg-ca-1-1.localdomain (Postfix) with ESMTPS id 6693053F86
    for <nifi@example.com>; Wed, 19 Oct 2016 09:19:10 -0700 (PDT)
Received: from x.x.net (x.x.x.x) by
 x.x.net (x.x.x.x) with Microsoft SMTP
 Server (TLS) id 15.0.1178.4; Wed, 19 Oct 2016 09:19:09 -0700
Received: from x.x.x.net ([x.x.x.x]) by
 x.x.x.net ([x.x.x.x]) with mapi id
 15.00.1178.000; Wed, 19 Oct 2016 09:19:09 -0700
From: x x<x@example.com>
To: "nifi@example.com" <nifi@example.com>
Thread-Topic: test
Thread-Index: AQHSKiSFTVqN9ugyLEirSGxkMiBNFg==
Date: Wed, 19 Oct 2016 16:19:09 +0000
Message-ID: <D49AD137-3765-4F9A-BF98-C4E36D11FFD8@hortonworks.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [71.168.178.39]
x-source-routing-agent: Processed
Content-Type: multipart/related;
    boundary="_004_D49AD13737654F9ABF98C4E36D11FFD8hortonworkscom_";
    type="multipart/alternative"


--_004_D49AD13737654F9ABF98C4E36D11FFD8hortonworkscom_
Content-Type: multipart/alternative;
    boundary="_000_D49AD13737654F9ABF98C4E36D11FFD8hortonworkscom_"


--_000_D49AD13737654F9ABF98C4E36D11FFD8hortonworkscom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

Python Script to Parse Email Messages

#!/usr/bin/env python

"""Unpack a MIME message into a directory of files."""
import json
import os
import sys
import email
import errno
import mimetypes
from optparse import OptionParser
from email.parser import Parser

def main():
    parser = OptionParser(usage="""Unpack a MIME message into a directory of files.
Usage: %prog [options] msgfile
""")
    parser.add_option('-d', '--directory',
                      type='string', action='store',
                      help="""Unpack the MIME message into the named
                      directory, which will be created if it doesn't already
                      exist.""")
    opts, args = parser.parse_args()
    if not opts.directory:
        parser.print_help()
        sys.exit(1)
    try:
        os.mkdir(opts.directory)
    except OSError as e:
        # Ignore directory exists error
        if e.errno != errno.EEXIST:
            raise
    msgstring = ''.join(str(x) for x in sys.stdin.readlines())

    msg = email.message_from_string(msgstring)

    headers = Parser().parsestr(msgstring)
    response  = {'To': headers['to'], 'From': headers['from'], 'Subject': headers['subject'], 'Received': headers['Received']}
    print(json.dumps(response))
    counter = 1
    for part in msg.walk():
        # multipart/* are just containers
        if part.get_content_maintype() == 'multipart':
            continue
        # Applications should really sanitize the given filename so that an
        # email message can't be used to overwrite important files
        filename = part.get_filename()
        if not filename:
            ext = mimetypes.guess_extension(part.get_content_type())
            if not ext:
                # Use a generic bag-of-bits extension
                ext = '.bin'
            filename = 'part-%03d%s' % (counter, ext)
        counter += 1
        fp = open(os.path.join(opts.directory, filename), 'wb')
        fp.write(part.get_payload(decode=True))
        fp.close()

if __name__ == '__main__':
    main()

mailnifi.sh

python mailnifi.py -d /opt/demo/email/"$@"

Python parses the message with the built-in email module; it ships with the standard library, so there is nothing extra to install.

I am using Python 2.7; you could use a newer Python 3.x.
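For reference, the same header extraction works in Python 3 with only the standard library's email package. The raw message below is a stand-in for what the script reads from stdin:

```python
import json
from email import message_from_string

# Minimal Python 3 version of the header extraction above.
raw = ("From: x@example.com\r\n"
       "To: nifi@example.com\r\n"
       "Subject: test\r\n"
       "\r\n"
       "hello")

msg = message_from_string(raw)
response = {"To": msg["to"], "From": msg["from"],
            "Subject": msg["subject"], "Received": msg["received"]}
print(json.dumps(response))
```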

Here is the flow:

11373-emailassistantflow.png

11375-consumepop3.png

11376-emailparseflow.png

11374-attributes.png

For the final part of the flow, I read the files created by the parsing, load them to HDFS and delete from the file system using the standard GetFile.

11371-readparsedemailsflow.png

 

Reference:

Files:

email-assistant-12-jan-2017.xml



Simple Leprechaun Detector.... And then how to make it more advanced

Okay, maybe it just detects anyone or anything moving. Let's say it's a Leprechaun, if you have a kid that builds a Leprechaun trap.

The easy version uses a USB web camera, a Raspberry Pi and the Motion software.

In the second version, we will add Apache NiFi - MiNiFi, which will read the images and send them on.

Github:   https://github.com/tspannhw/leprechaun-detector/tree/master

This will send you an image when one is detected.

Install the Motion Detector


 apt-get install motion -y

Edit the Configuration


 /etc/motion/motion.conf 


Some needed configuration

# Command to be executed when an event starts. (default: none)
# An event starts at first motion detected after a period of no motion
# defined by event_gap.
on_event_start /opt/demo/runmotion.sh %f



Store your images somewhere MiNiFi can grab them


target_dir /opt/demo/images
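The contents of /opt/demo/runmotion.sh aren't shown here, but a hypothetical Python stand-in would look like this: Motion passes the captured image path as %f, and we copy the file into the directory MiNiFi watches. The handle_event helper and paths are illustrative assumptions:

```python
import os
import shutil
import sys

def handle_event(image_path, pickup_dir="/opt/demo/images"):
    """Copy a Motion-captured image into the directory MiNiFi polls.
    Returns the destination path of the copied file."""
    os.makedirs(pickup_dir, exist_ok=True)
    return shutil.copy(image_path, pickup_dir)

if __name__ == "__main__" and len(sys.argv) > 1:
    # Motion invokes this with the image path substituted for %f.
    handle_event(sys.argv[1])
```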

Start the Motion Detector


/etc/init.d/motion start




References





Posting Images to Imgur via Apache NiFi Using Custom Processor


As part of a flow from a web camera, I decided that Imgur would be a nice place to push images that I can reference publicly in Cloudera Data Science Workbench calls for processing with Apache MXNet GluonCV YOLOv3.

I updated my custom processor since I needed a header.

I should make this allow for multiple headers and more.

For now, I'll stick with this.   This is built for Apache NiFi 1.9.0 and updated parameters.






PostImage Processor NAR Release
https://github.com/tspannhw/nifi-postimage-processor/releases/tag/1.1

Imgur

https://apidocs.imgur.com/

Sign up for the API to use this and heed their limits. This is for non-commercial purposes.

Here is an example image uploaded to imgur


Results From HTTP Post


post.header
{Transfer-Encoding=[chunked], Server=[nginx/1.13.5], Access-Control-Allow-Methods=[POST, GET, OPTIONS, PATCH, PUT, DELETE], Connection=[close], X-Ratelimit-Userlimit=[2000], X-Post-Rate-Limit-Reset=[52], X-Ratelimit-Clientreset=[86400], Date=[Fri, 15 Mar 2019 20:32:46 GMT], Access-Control-Allow-Headers=[Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization], X-Ratelimit-Userreset=[3600], X-Ratelimit-Userremaining=[1999], Strict-Transport-Security=[max-age=15724800; includeSubDomains;], Cache-Control=[no-store, no-cache, must-revalidate, post-check=0, pre-check=0], Access-Control-Allow-Credentials=[true], X-Post-Rate-Limit-Remaining=[1244], X-Ratelimit-Clientlimit=[12500], X-Post-Rate-Limit-Limit=[1250], X-Ratelimit-Clientremaining=[12499], Content-Type=[application/json]}
post.results
{"data":{"in_most_viral":false,"ad_type":0,"link":"https://i.imgur.com/NEfUOaY.jpg","description":null,"section":null,"title":null,"type":"image/jpeg","deletehash":"oRHxGI63iyEligc","datetime":1552681953,"has_sound":false,"id":"NEfUOaY","in_gallery":false,"vote":null,"views":0,"height":480,"bandwidth":0,"nsfw":null,"is_ad":false,"edited":"0","ad_url":"","tags":[],"account_id":0,"size":368339,"width":640,"account_url":null,"name":"","animated":false,"favorite":false},"success":true,"status":200}
post.status
OK
post.statuscode
200
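The post.results JSON above is easy to pick apart downstream; this sketch parses a trimmed copy of the response (most fields elided) to pull out the public link and the deletehash:

```python
import json

# Trimmed copy of the post.results payload shown above.
post_results = ('{"data":{"link":"https://i.imgur.com/NEfUOaY.jpg",'
                '"deletehash":"oRHxGI63iyEligc","type":"image/jpeg",'
                '"id":"NEfUOaY"},"success":true,"status":200}')

body = json.loads(post_results)
if body["success"] and body["status"] == 200:
    image_url = body["data"]["link"]          # public URL to hand to CDSW
    delete_hash = body["data"]["deletehash"]  # keep this to remove the image later
```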


Posting Images to Slack from Apache NiFi Using Custom Processor


As part of one of my remote camera feed projects, I wanted to send the images to Slack.

So I used my PostImage processor to send them via REST API.


It's a very simple flow.










PostImage Processor NAR Release
https://github.com/tspannhw/nifi-postimage-processor/releases/tag/1.1

Example Results

post.header
{X-Cache=[Miss from cloudfront], X-Accepted-OAuth-Scopes=[files:write:user,post], Server=[Apache], Access-Control-Allow-Origin=[*], X-Content-Type-Options=[nosniff], Connection=[keep-alive], Pragma=[no-cache], Date=[Mon, 11 Mar 2019 20:14:33 GMT], Access-Control-Allow-Headers=[slack-route, x-slack-version-ts], Via=[1.1 d0c5747a41ab1b19c48bdc3c7feed516.cloudfront.net (CloudFront)], Referrer-Policy=[no-referrer], Access-Control-Expose-Headers=[x-slack-req-id], Strict-Transport-Security=[max-age=31536000; includeSubDomains; preload], Cache-Control=[private, no-cache, no-store, must-revalidate], X-Via=[haproxy-www-ozs9], X-Slack-Req-Id=[7a42ad8f-bfcf-4b30-a3a6-7d38fb2b1e4a], X-Amz-Cf-Id=[Gr2gyXOdTmRLpTXssuFruYmk_D-487WBNdMtPzjlVj7SrLgsjLYXqw==], Vary=[Accept-Encoding], Expires=[Mon, 26 Jul 1997 05:00:00 GMT], X-XSS-Protection=[0], X-OAuth-Scopes=[identify,bot:basic], Content-Type=[application/json; charset=utf-8]}
post.results
{"file":{"filetype":"jpg","thumb_360":"https://files.slack.com/files-tmb/T1SD6MZMF-FGV6N568J-f7d3118d9a/2019-03-11_1547_360.jpg","thumb_160":"https://files.slack.com/files-tmb/T1SD6MZMF-FGV6N568J-f7d3118d9a/2019-03-11_1547_160.jpg","thumb_480":"https://files.slack.com/files-tmb/T1SD6MZMF-FGV6N568J-f7d3118d9a/2019-03-11_1547_480.jpg","title":"2019-03-11 1547","original_h":480,"ims":[],"mode":"hosted","shares":{"public":{"CGU6WRSNL":[{"channel_name":"images","reply_users":[],"reply_users_count":0,"team_id":"T1SD6MZMF","reply_count":0,"ts":"1552335275.020900"}]}},"image_exif_rotation":1,"url_private":"https://files.slack.com/files-pri/T1SD6MZMF-FGV6N568J/2019-03-11_1547.jpg","id":"FGV6N568J","display_as_bot":false,"timestamp":1552335273,"thumb_64":"https://files.slack.com/files-tmb/T1SD6MZMF-FGV6N568J-f7d3118d9a/2019-03-11_1547_64.jpg","thumb_80":"https://files.slack.com/files-tmb/T1SD6MZMF-FGV6N568J-f7d3118d9a/2019-03-11_1547_80.jpg","created":1552335273,"editable":false,"thumb_480_w":480,"is_external":false,"thumb_360_h":270,"groups":[],"pretty_type":"JPEG","external_type":"","url_private_download":"https://files.slack.com/files-pri/T1SD6MZMF-FGV6N568J/download/2019-03-11_1547.jpg","permalink_public":"https://slack-files.com/T1SD6MZMF-FGV6N568J-b58ce07115","is_starred":false,"size":367476,"channels":["CGU6WRSNL"],"comments_count":0,"name":"2019-03-11_1547.jpg","is_public":true,"thumb_360_w":360,"mimetype":"image/jpeg","public_url_shared":false,"permalink":"https://nifi-se.slack.com/files/UG2L4DSM9/FGV6N568J/2019-03-11_1547.jpg","user":"UG2L4DSM9","original_w":640,"username":"","thumb_480_h":360},"ok":true}
post.status
OK
post.statuscode
200

Text Generation as a Service with Cloudera Data Science Workbench


Fortunately, there is an awesome text-generating neural network in Python 3 with TensorFlow/Keras by Max Woolf.

It is very easy to wrap this in a REST API from CDSW to use with Apache NiFi or microservices in your organization.

Here is my simple CDSW Model:

import datetime
import os
import time
import psutil
from textgenrnn import textgenrnn

# https://github.com/minimaxir/textgenrnn 
# To Install pip3 install textgenrnn
# Text Generation RNN
#
def textgeneration(args):
  
  # sentence = args["sentence"]
  
  start = time.time()
  textgen = textgenrnn()
  newtextstring = textgen.generate(n=1, temperature=0.5, return_as_list=True)
  end = time.time()
  row = { }
  row['starttime'] = '{0:.2f}'.format(start)
  row['sentence'] = str(newtextstring[0])
  row['endtime'] = '{0:.2f}'.format(end)
  row['runtime'] = '{0:.2f}'.format(end - start)
  row['systemtime'] = datetime.datetime.now().strftime('%m/%d/%Y %H:%M:%S')
  row['cpu'] = psutil.cpu_percent(interval=1)
  row['memory'] = psutil.virtual_memory().percent

  result = row

  return result

Python Setup


pip3.6 install tensorflow
pip3.6 install textgenrnn




Example Run


args = {}
textgeneration(args)
2019-03-13 01:31:59.772430: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
{'cpu': 0.3,
 'endtime': '1552440721.04',
 'memory': 18.6,
 'runtime': '1.29',
 'sentence': "A female programming bank - World's father",
 'starttime': '1552440719.75',
 'systemtime': '03/13/2019 01:32:01'}

Resources:

https://minimaxir.com/2018/05/text-neural-networks/

https://github.com/minimaxir/textgenrnn


Apache Deep Learning 201: Prework - Upgrades

Updates to GluonCV, Apache MXNet and more...

You will need to upgrade to Apache MXNet 1.4.0 and upgrade GluonCV to 0.4.0+.

Newest Example
https://gluon-cv.mxnet.io/build/examples_pose/cam_demo.html



Python Installs
pip3.6 install 'mxnet-mkl>=1.3.0' --upgrade
pip3.6 install gluoncv --pre --upgrade
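A quick, illustrative way to confirm the installed versions meet the minimums above; in practice you would compare against mxnet.__version__ and gluoncv.__version__ (the at_least helper is an assumption, not part of either library):

```python
def at_least(version, minimum):
    """Compare dotted version strings numerically on the first three parts.
    A trailing pre-release suffix after '+' is ignored."""
    parse = lambda v: tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return parse(version) >= parse(minimum)

# e.g. at_least(mxnet.__version__, "1.4.0") and at_least(gluoncv.__version__, "0.4.0")
```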

Resources




Using Raspberry Pi 3B+ with Apache NiFi MiNiFi and Google Coral Accelerator and Pimoroni Inky Phat



Architecture



Introduction

First we need to unbox our new goodies. The Inky pHAT is an awesome E-Ink display with low power usage that keeps its image after shutdown!

Next I added a new Google Coral Edge TPU ML Accelerator USB Coprocessor to a new Raspberry Pi 3B+.    This was so easy to integrate and get up and running.

Let's unbox this beautiful device (but be careful: when it runs it can get really hot, and there is a warning in the instructions). So I run it on top of an aluminum case with a big fan on it.







Pimoroni Inky Phat

It is pretty easy to set this up and it provides a robust Python library to write to our E-Ink display.   You can see an example screen here.

https://github.com/pimoroni/inky
Pimoroni Inky pHAT ePaper eInk Display in Red


Pimoroni Inky Phat (Red)


https://shop.pimoroni.com/products/inky-phat
https://github.com/pimoroni/inky
https://pillow.readthedocs.io/en/stable/reference/ImageDraw.html
https://learn.pimoroni.com/tutorial/sandyj/getting-started-with-inky-phat


Install Some Python Libraries and Debian Install for Inky PHAT and Coral

pip3 install font_fredoka_one
pip3 install geocoder
pip3 install fswebcam
sudo apt-get install fe
pip3 install psutil
pip3 install font_hanken_grotesk
pip3 install font_intuitive
wget http://storage.googleapis.com/cloud-iot-edge-pretrained-models/edgetpu_api.tar.gz
These libraries are for the Inky pHAT; it needs fonts to write. The last tarball is for the Edge TPU device and is a fast install, documented well by Google.

Download Apache NiFi - MiNiFi Java Agent

https://nifi.apache.org/minifi/download.html

Next up, the most important piece: you will need JDK 8 installed on your device if you are using the Java agent. You can also use the MiNiFi C++ agent, but that may require building it for your OS/platform. It has some interesting Python-running abilities.


Google Coral Documentation - Google Edge TPU
  • Google Edge TPU ML accelerator coprocessor
  • USB 3.0 Type-C socket
  • Supports Debian Linux on host CPU
  • ASIC designed by Google that provides high performance ML inferencing for TensorFlow Lite models


Using Pretrained Tensorflow Lite Model:

Inception V4 (ImageNet)
Recognizes 1,000 types of objects
Dataset: ImageNet
Input size: 299x299

Let's run a flow!

I can run this Python 3 script every 10 seconds without issues; capturing the picture, running it through classification with the model, grabbing network and device stats, and forming a JSON file all complete in under 5 seconds. Our MiNiFi agent is scheduled to call the script every 10 seconds and grab images after 60 seconds.


MiNiFi Flow



Flow Overview



Apache NiFi Flow





Results (Once an hour we update our E-Ink Display with Date, IP, Run Time, Label 1)





Example JSON Data

{"endtime": "1552164369.27", "memory": "19.1", "cputemp": "32", "ipaddress": "192.168.1.183", "diskusage": "50336.5", "score_2": "0.14", "score_1": "0.68", "runtime": "4.74", "host": "mv2", "starttime": "03/09/2019 15:46:04", "label_1": "hard disc, hard disk, fixed disk", "uuid": "20190309204609_05c9a240-d801-4bac-b029-e5bf38c02d40", "label_2": "buckle", "systemtime": "03/09/2019 15:46:09"}
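Parsing the record above to pull out the top classification, for example to decide whether to fire the Slack alert (the 0.5 confidence threshold is an arbitrary illustration):

```python
import json

# The example JSON record shown above.
sample = '{"endtime": "1552164369.27", "memory": "19.1", "cputemp": "32", "ipaddress": "192.168.1.183", "diskusage": "50336.5", "score_2": "0.14", "score_1": "0.68", "runtime": "4.74", "host": "mv2", "starttime": "03/09/2019 15:46:04", "label_1": "hard disc, hard disk, fixed disk", "uuid": "20190309204609_05c9a240-d801-4bac-b029-e5bf38c02d40", "label_2": "buckle", "systemtime": "03/09/2019 15:46:09"}'

rec = json.loads(sample)
top_label = rec["label_1"]
top_score = float(rec["score_1"])   # scores arrive as strings
confident = top_score > 0.5         # illustrative alerting threshold
```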

Example Slack Alert


PS3 Eye USB Camera Capturing an Image


Image It Captured




Source Code

https://github.com/tspannhw/nifi-minifi-coral

Convert Your Flow to config.yml for MiNiFi (look for a major innovation here soon).

 ./config.sh transform Coral_MiniFi_Agent_Flow.xml config.yml
config.sh: JAVA_HOME not set; results may vary

Java home: 
MiNiFi Toolkit home: /Volumes/TSPANN/2019/apps/minifi-toolkit-0.5.0



No validation errors found in converted configuration.


Example Call From MiNiFi 0.5.0 Java Agent to Apache NiFi 1.9.0 Server


2019-03-09 16:21:01,877 INFO [Timer-Driven Process Thread-10] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=Coral Input,targets=http://hw13125.local:8080/nifi] Successfully sent [StandardFlowFileRecord[uuid=eab17784-2e76-4438-a60a-fd67df37a102,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1552166446123-3, container=default, section=3], offset=362347, length=685083],offset=0,name=d74bc911bfd167fe79d5a3aa780004fd66fa6d,size=685083], StandardFlowFileRecord[uuid=eb979d09-a936-4b2d-82ff-d204f9d768eb,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1552166446123-3, container=default, section=3], offset=1047430, length=361022],offset=0,name=2019-03-09_1541.jpg,size=361022], StandardFlowFileRecord[uuid=343a4c91-b863-440e-ac81-1f68d6210792,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1552166446123-3, container=default, section=3], offset=1408452, length=668],offset=0,name=3026822c780724b39e826230bdef43f8ed9786,size=668], StandardFlowFileRecord[uuid=97df9d3a-dc3c-4d03-b533-7b75c3180032,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1552166446123-3, container=default, section=3], offset=1409120, length=2133417],offset=0,name=abb6feaac5bda3c6d3660e7593cc4ef2e1cfce,size=2133417]] (3.03 MB) to http://hw13125.local:8080/nifi-api in 1416 milliseconds at a rate of 2.14 MB/sec


References