Apache NiFi + Deep Speech

Deep Speech with Apache NiFi 1.8
Tools: Python 3.6, PyAudio, TensorFlow, Deep Speech, Shell, Apache NiFi
Why: Speech-to-Text
Use Case: Voice control and recognition.
Series: Holiday Use Case: Turn on Holiday Lights and Music on command.
Cool Factor: Ever want to run a query on Live Ingested Voice Commands?
We are using Python 3.6 to write some code around PyAudio, TensorFlow, and Deep Speech to capture audio, store it in a WAV file, and then process it with Deep Speech to extract text. This example runs on OS X without a GPU, on TensorFlow v1.11.
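Here is a minimal sketch of the capture step, assuming a 16 kHz mono microphone stream (the rate the pre-trained Deep Speech model expects). The record_wav function and the fixed five-second window are my own illustration, not the original script:

  import wave
  import pyaudio

  RATE = 16000   # Deep Speech's pre-trained model expects 16 kHz mono audio
  CHUNK = 1024   # frames per buffer read
  SECONDS = 5    # hypothetical fixed recording window

  def record_wav(path, seconds=SECONDS):
      pa = pyaudio.PyAudio()
      stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                       input=True, frames_per_buffer=CHUNK)
      frames = [stream.read(CHUNK) for _ in range(int(RATE / CHUNK * seconds))]
      stream.stop_stream()
      stream.close()
      # write the captured frames as 16-bit mono PCM
      wf = wave.open(path, 'wb')
      wf.setnchannels(1)
      wf.setsampwidth(pa.get_sample_size(pyaudio.paInt16))
      wf.setframerate(RATE)
      wf.writeframes(b''.join(frames))
      wf.close()
      pa.terminate()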
The Mozilla GitHub repo for their Deep Speech implementation has good getting-started information that I used to integrate our flow with Apache NiFi.

  pip3 install deepspeech
  wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.3.0/deepspeech-0.3.0-models.tar.gz | tar xvfz -


This pre-trained model is available for English. For other languages, you will need to train your own; a beefy HDP 3.1 cluster is well suited to that training. Note: THIS IS A 1.8 GB DOWNLOAD. That may be an issue for laptops, small devices, or anyone with constrained bandwidth or storage.
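Once the model is unpacked, calling it from Python only takes a few lines. This is a sketch against the v0.3.0 Python API, which still took MFCC and beam-search parameters in the Model constructor; the 26/9/500 values are the defaults from Mozilla's example client, and the file paths are placeholders:

  import wave
  import numpy as np
  from deepspeech import Model

  # Defaults from Mozilla's v0.3.0 example client
  N_FEATURES = 26   # MFCC features per frame
  N_CONTEXT = 9     # context frames on each side
  BEAM_WIDTH = 500  # decoder beam width

  ds = Model('models/output_graph.pbmm', N_FEATURES, N_CONTEXT,
             'models/alphabet.txt', BEAM_WIDTH)

  # read a 16-bit mono WAV and transcribe it
  with wave.open('test.wav', 'rb') as wf:
      rate = wf.getframerate()
      audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

  print(ds.stt(audio, rate))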
Apache NiFi Flow
The flow is simple: we call our shell script, which runs a Python program that records audio and sends it to Deep Speech for processing.
We get back a voice_string in JSON that we turn into a record for querying and filtering in Apache NiFi.
I am handling a few voice commands: "Save", "Load", and "Move". As you can imagine, you can handle pretty much anything you want; it's a simple way to use voice to control streaming data flows, or just to ingest large streams of text. Even with advanced deep learning, speech-to-text recognition is still not the strongest.
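The command matching itself can be as simple as keyword checks on the transcription, which also tolerates the odd mis-recognized word. A hypothetical dispatcher (the function and return values are mine):

  def handle_command(voice_string):
      # keyword matching is forgiving of imperfect transcriptions
      text = voice_string.lower()
      if 'save' in text:
          return 'SAVE'
      if 'load' in text:
          return 'LOAD'
      if 'move' in text:
          return 'MOVE'
      return 'UNKNOWN'

  print(handle_command('please save the file'))   # prints: SAVE

Inside the NiFi flow itself, the same routing can be done with RouteOnContent or a QueryRecord processor against the voice_string field.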
If you are going to load-balance connections between nodes, you have options for compression and for the load-balancing strategy. This can come in handy if you have a lot of servers.
Shell Script

  python3.6 /Volumes/TSPANN/projects/DeepSpeech/processnifi.py /Volumes/TSPANN/projects/DeepSpeech/models/output_graph.pbmm /Volumes/TSPANN/projects/DeepSpeech/models/alphabet.txt
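The script writes one JSON document to stdout, which NiFi (e.g., via ExecuteProcess) picks up as flowfile content. A sketch of that output step, assuming the timestamp format that appears in the schema below:

  import json
  from time import strftime

  def build_record(voice_string):
      # timestamp format matches the schema doc: "12/10/2018 14:53:47"
      return json.dumps({'systemtime': strftime('%m/%d/%Y %H:%M:%S'),
                         'voice_string': voice_string})

  print(build_record('one two three or five six seven eight nine'))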


Schema

  {
    "type" : "record",
    "name" : "voice",
    "fields" : [ {
      "name" : "systemtime",
      "type" : "string",
      "doc" : "Type inferred from '\"12/10/2018 14:53:47\"'"
    }, {
      "name" : "voice_string",
      "type" : "string",
      "doc" : "Type inferred from '\"\"'"
    } ]
  }


We can add more fields as needed.
Example Run

  HW13125:DeepSpeech tspann$ ./runnifi.sh
  TensorFlow: v1.11.0-9-g97d851f04e
  DeepSpeech: unknown
  2018-12-10 14:36:43.714433: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
  {"systemtime": "12/10/2018 14:36:43", "voice_string": "one two three or five six seven eight nine"}


We can run this on top of YARN 3.1 as dockerized or non-dockerized workloads.
Setting up nodes to run HDF 3.3 (Apache NiFi and friends) is easy in the cloud, or on-premises in OpenStack, with modern DevOps tools.
When running Apache NiFi, it is easy to monitor everything in Ambari.