Running Flink SQL Against Kafka Using a Schema Registry Catalog




There are a few things you can do when sending data from Apache NiFi to Apache Kafka to maximize its availability to Flink SQL queries through the catalogs.


(Screenshots: AvroRecordSetWriter and JSONReader configuration in NiFi.)

Producing Kafka Messages


Make sure you set AvroRecordSetWriter and set a Message Key Field.






A great way to work with Flink SQL is to connect to the Cloudera Schema Registry.   It lets you define your schema once, then use it in Apache NiFi, Apache Kafka Connect, Apache Spark, and Java microservices.

Setup



Make sure you set up your HDFS directory for use by Flink, which keeps history and other important information in HDFS.

HADOOP_USER_NAME=hdfs hdfs dfs -mkdir /user/root

HADOOP_USER_NAME=hdfs hdfs dfs -chown root:root /user/root


sql-env.yaml:

configuration:
  execution.target: yarn-session

catalogs:
  - name: registry
    type: cloudera-registry
    # Registry Client standard properties
    registry.properties.schema.registry.url: http://edge2ai-1.dim.local:7788/api/v1
    # registry.properties.key:
    # Registry Client SSL properties
    # Kafka Connector properties
    connector.properties.bootstrap.servers: edge2ai-1.dim.local:9092
    connector.startup-mode: earliest-offset
  - name: kudu
    type: kudu
    kudu.masters: edge2ai-1.dim.local:7051

CLI:

flink-sql-client embedded -e sql-env.yaml


We now have access to the Kudu and Schema Registry catalogs of tables.   This lets us start querying, joining, and filtering any of these tables without having to recreate or redefine them.


SELECT * FROM events
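Beyond a simple SELECT, tables from both catalogs can be joined in one statement. Here is a sketch; the fully qualified table and column names below are assumptions, not from a real schema, and the heredoc only composes and prints the statement so you can review it before piping it into the SQL client:

```shell
# Compose (not execute) a cross-catalog join; table and column names are hypothetical.
# When ready, feed it to: flink-sql-client embedded -e sql-env.yaml
sql=$(cat <<'SQL'
SELECT e.sensor_id, e.temp_f, k.location
FROM registry.default_database.events e
JOIN kudu.default_database.sensors k
  ON e.sensor_id = k.sensor_id
WHERE CAST(e.temp_f AS DOUBLE) > 80
SQL
)
echo "$sql"
```

Because both catalogs are declared in sql-env.yaml, no CREATE TABLE statements are needed before running a query like this.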



Automating the Building, Migration, Backup, Restore and Testing of Streaming Applications



One of the main things you will want to do as you restore flows from backup or migrate them between clusters is to apply the appropriate parameters.

You can import the parameter contexts and then connect them to the correct process group(s).

nifi-toolkit-1.12.0/bin/cli.sh nifi import-param-context -u http://edge2ai-1.dim.local:8080 -i parameter.json
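The "connect to process group" step can also be scripted; the toolkit CLI has a pg-set-param-context command for this. The IDs below are placeholders for your environment, so the command is composed and printed rather than run (check `cli.sh nifi pg-set-param-context help` if your toolkit version names the options differently):

```shell
# Compose the bind command; both IDs are placeholders for your environment.
PG_ID="your-process-group-id"
PC_ID="your-parameter-context-id"
cmd="nifi-toolkit-1.12.0/bin/cli.sh nifi pg-set-param-context -u http://edge2ai-1.dim.local:8080 -pgid ${PG_ID} -pcid ${PC_ID}"
echo "$cmd"
```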

Note that values can be encrypted so the NiFi operator or developer doesn't have to see keys or protected values.


See an example script:

https://github.com/tspannhw/ApacheConAtHome2020/blob/main/scripts/setupnifi.sh


Monitoring Mac Laptops With Apache NiFi and osquery



The other way is to pass a SQL query to the osquery interpreter (à la osqueryi --json "SELECT * FROM $1") and get the query results back as JSON.

We can tail the main file (/var/log/osquery/osqueryd.results.log) and send the JSON to be used at scale as events.  We can also grab any and all osquery logs like INFO, WARN and ERROR via osquery.+.
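As a quick sanity check outside NiFi, a single result line can be picked apart with standard tools. The sample line below is a hypothetical osqueryd result record; in NiFi the same data would flow through a JSON record reader instead:

```shell
# One hypothetical line from /var/log/osquery/osqueryd.results.log
line='{"name":"system_info","hostIdentifier":"my-mac","unixTime":1602242926,"columns":{"hostname":"my-mac","physical_memory":"17179869184"}}'
# Pull out the scheduled-query name and the host that produced the event
host=$(printf '%s' "$line" | sed -n 's/.*"hostIdentifier":"\([^"]*\)".*/\1/p')
query=$(printf '%s' "$line" | sed -n 's/.*"name":"\([^"]*\)".*/\1/p')
echo "$query from $host"
```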



Either download it or install it via brew cask.    https://osquery.readthedocs.io/en/2.11.2/installation/install-osx/

I set up a simple configuration here: https://github.com/tspannhw/nifi-osquery

{
  "options": {
    "config_plugin": "filesystem",
    "logger_plugin": "filesystem",
    "logger_path": "/var/log/osquery",
    "disable_logging": "false",
    "disable_events": "false",
    "database_path": "/var/osquery/osquery.db",
    "utc": "true"
  },
  "schedule": {
    "system_info": {
      "query": "SELECT hostname, cpu_brand, physical_memory FROM system_info;",
      "interval": 3600
    }
  },
  "decorators": {
    "load": [
      "SELECT uuid AS host_uuid FROM system_info;",
      "SELECT user AS username FROM logged_in_users ORDER BY time DESC LIMIT 1;"
    ]
  },
  "packs": {
    "osquery-monitoring": "/var/osquery/packs/osquery-monitoring.conf",
    "incident-response": "/var/osquery/packs/incident-response.conf",
    "it-compliance": "/var/osquery/packs/it-compliance.conf",
    "osx-attacks": "/var/osquery/packs/osx-attacks.conf",
    "vuln-management": "/var/osquery/packs/vuln-management.conf",
    "hardware-monitoring": "/var/osquery/packs/hardware-monitoring.conf",
    "ossec-rootkit": "/var/osquery/packs/ossec-rootkit.conf"
  }
}



We then turn the JSON osquery records into records that can be used for routing, queries, and aggregates. Ultimately we push them to Impala/Kudu for rich Cloudera Visual Apps, and to Kafka as schema-aware Avro for use in Kafka Connect and as a live continuous query feed to Flink SQL streaming analytic applications.

We could also have osquery push directly to Kafka, but since I am often disconnected from a Kafka server, in offline mode, or just want a local buffer for these events, let's use Apache NiFi, which can run as a single 2 GB node on my machine.   I can also do local processing of the data and some local alerting if needed.

Once you have the data from one machine or a million, you can do log aggregation, anomaly detection, predictive maintenance, or whatever else you might need to do.   Send this data to Cloudera Data Platform in AWS or Azure and use CML and Visual Apps to store, analyze, report, query, and build apps, pipelines, and ultimately production machine learning flows. This makes it a simple example of how to take any data and bring it into a full data platform.


Tracking Satellites with Apache NiFi


Thanks to https://www.n2yo.com/ for awesome data feeds.


Again, these types of ingests are so easy in Apache NiFi.   


Step 1: schedule when we want these calls.   There is a limit of 1,000 calls per hour, so let's keep it to 4 calls a minute for each of the three REST endpoints.
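Quick arithmetic on that budget, using the numbers above:

```shell
# 3 endpoints at 4 calls/minute each, for an hour, vs. the 1,000 calls/hour cap
endpoints=3
calls_per_min=4
per_hour=$((endpoints * calls_per_min * 60))
echo "${per_hour} calls/hour"   # comfortably under 1,000
```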



Let's get information on the satellites right above me.

We set parameters for your latitude, your longitude, and your API key, and then just change up bits of the REST URL.   Note that for this one we are using SSL, so make sure you have an SSL context service.
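The REST URL can be composed from those parameters. Here is a sketch that only builds and prints the URL; the path shape follows n2yo's positions endpoint and all values are placeholders, so double-check both against the current API docs:

```shell
# Placeholder values; substitute your own coordinates and API key.
SAT_ID=25544        # ISS / SPACE STATION
LAT=40.2171
LON=-74.7429
ALT=0
SECONDS_AHEAD=2     # number of future positions to return
APIKEY="YOURAPIKEY"
URL="https://api.n2yo.com/rest/v1/satellite/positions/${SAT_ID}/${LAT}/${LON}/${ALT}/${SECONDS_AHEAD}/&apiKey=${APIKEY}"
echo "$URL"
```

In NiFi, the same string would be assembled in an InvokeHTTP URL using the parameter context values.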





Now we have three streams of JSON data that have latitude and longitude, so we can plot them on a map with Cloudera Visual Apps, storing our data in Impala tables in Kudu.


Some example data:


{
  "info" : {
    "satname" : "SPACE STATION",
    "satid" : 25544,
    "transactionscount" : "5"
  },
  "positions" : [ {
    "satlatitude" : 37.46839338,
    "satlongitude" : 95.12767402,
    "sataltitude" : 422.01,
    "azimuth" : 8.37,
    "elevation" : -49.35,
    "ra" : 290.4714551,
    "dec" : 0.06300703,
    "timestamp" : 1602242926,
    "eclipsed" : false
  }, {
    "satlatitude" : 37.4278686,
    "satlongitude" : 95.18684731,
    "sataltitude" : 422.01,
    "azimuth" : 8.32,
    "elevation" : -49.37,
    "ra" : 290.50535165,
    "dec" : 0.04159856,
    "timestamp" : 1602242927,
    "eclipsed" : false
  } ]
}

Unveiling the NVIDIA Jetson Nano 2GB and Other NVIDIA GTC 2020 Announcements








NVIDIA Jetson Nano 2GB Press Release

https://nvidianews.nvidia.com/news/nvidia-unveils-jetson-nano-2gb-the-ultimate-ai-and-robotics-starter-kit-for-students-educators-robotics-hobbyists



I have given this one a test run; it has all the features you like for a Jetson, with just 2 GB less RAM and two fewer USB ports.   This is a very affordable device for building cool apps.


  • 128-core NVIDIA Maxwell™ GPU
  • 64-bit quad-core ARM A57 (1.43 GHz)
  • 2 GB 64-bit LPDDR4 (25.6 GB/s bandwidth)
  • Gigabit Ethernet
  • 1x USB 3.0 Type A port, 2x USB 2.0 Type A ports, 1x USB 2.0 Micro-B
  • HDMI
  • WiFi
  • GPIOs, I2C, I2S, SPI, PWM, UART
  • 1x MIPI CSI-2 connector
  • MicroSD connector
  • 12-pin header (power and related signals, UART)
  • 100 mm x 80 mm x 29 mm
  • USB-C port for power

Depending on where or how you buy the package, you may need to buy a power supply and a USB WiFi adapter.

All of my existing workloads have been working fine on the 2 GB version, with a very nice cost saving.  The setup is easy and the system is fast; I highly recommend that anyone looking for a quick way to do Edge AI and other edge workloads give it a try.   This could be a decent machine for learning.   I hooked mine up to a monitor, keyboard, and mouse and could use it right away for edge development and also as a basic desktop.   Nice work!  I might need to get 11 more of these.   They will run MiNiFi agents, Python, and deep learning classifications with ease.

NVIDIA didn't stop with the ultimate low-cost edge device, they have some serious enterprise updates as well:

Cloudera Supercharges the Enterprise Data Cloud with NVIDIA

https://blog.cloudera.com/cloudera-supercharges-the-enterprise-data-cloud-with-nvidia/

There seems to be a ton more news coming at this virtual event, so I recommend attending and watching for more detailed posts on new things coming out.

Product page: 

https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/education-projects/


Unboxing video:

https://youtu.be/dVGEtWYkP2c


NVIDIA Jetson Developer Community AI Projects: 

https://youtu.be/2T8CG7lDkcU


Open-source projects on Jetson Nano 2GB: 

https://youtu.be/fIESu365Sb0


Dev Blog:

https://developer.nvidia.com/blog/ultimate-starter-ai-computer-jetson-nano-2gb-developer-kit/



DevOps: Working with Parameter Contexts in Apache NiFi 1.11.4+


nifi list-param-contexts -u http://localhost:8080 -ot simple

#   Id                                     Name             Description
-   ------------------------------------   --------------   -----------
1   3a801ff4-1f73-1836-b59c-b9fbc79ab030   backupregistry
2   7184b9f4-0171-1000-4627-967e118f3037   health
3   3a801faf-1f87-1836-54ba-3d913fa223ad   retail
4   3a801fde-1f73-1836-957b-a9f4d2c9b73d   sensors

#> nifi export-param-context -u http://localhost:8080 -verbose --paramContextId 3a801faf-1f87-1836-54ba-3d913fa223ad


{
  "name" : "retail",
  "description" : "",
  "parameters" : [ {
    "parameter" : {
      "name" : "allquery",
      "description" : "",
      "sensitive" : false,
      "value" : "SELECT * FROM FLOWFILE"
    }
  }, {
    "parameter" : {
      "name" : "allrecordssql",
      "description" : "",
      "sensitive" : false,
      "value" : "SELECT * FROM FLOWFILE"
    }
  }, {
    "parameter" : {
      "name" : "energytopic",
      "description" : "",
      "sensitive" : false,
      "value" : "energy"
    }
  }, {
    "parameter" : {
      "name" : "importantsql",
      "description" : "",
      "sensitive" : false,
      "value" : "SELECT * FROM FLOWFILE\nWHERE kernel_logs like '%SIGKILL%'"
    }
  }, {
    "parameter" : {
      "name" : "itempricetable",
      "description" : "",
      "sensitive" : false,
      "value" : "impala::default.itemprice"
    }
  }, {
    "parameter" : {
      "name" : "itsgettingHotInHere",
      "description" : "",
      "sensitive" : false,
      "value" : "SELECT * FROM\nFLOWFILE\nWHERE CAST (temp_f as DOUBLE) > 80\nAND UPPER(location) LIKE '%NJ%'"
    }
  },





You can now move that file to another server and import it with nifi import-param-context.


 bin/cli.sh nifi list-param-contexts -u http://localhost:8080 -ot json

Use -ot simple for a simple table listing.

 bin/cli.sh nifi export-param-context -u http://localhost:8080  --paramContextId 3a801ff4-1f73-1836-b59c-b9fbc79ab030 -ot json -o backupregistry.json

Example Shell Script

/Users/tspann/Downloads/nifi-toolkit-1.12.0/bin/cli.sh nifi export-param-context -u http://localhost:8080  --paramContextId a13e3764-134c-16f0-7c35-312b7ee4b182 -ot json -o financial.json
/Users/tspann/Downloads/nifi-toolkit-1.12.0/bin/cli.sh nifi export-param-context -u http://localhost:8080  --paramContextId 7184b9f4-0171-1000-4627-967e118f3037 -ot json -o health.json
/Users/tspann/Downloads/nifi-toolkit-1.12.0/bin/cli.sh nifi export-param-context -u http://localhost:8080  --paramContextId 3a801faf-1f87-1836-54ba-3d913fa223ad -ot json -o retail.json
/Users/tspann/Downloads/nifi-toolkit-1.12.0/bin/cli.sh nifi export-param-context -u http://localhost:8080  --paramContextId 3a801fde-1f73-1836-957b-a9f4d2c9b73d -ot json -o  sensors.json
/Users/tspann/Downloads/nifi-toolkit-1.12.0/bin/cli.sh nifi export-param-context -u http://localhost:8080  --paramContextId 3a801ff4-1f73-1836-b59c-b9fbc79ab030 -ot json -o backupregistry.json
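The repeated exports above can be driven by a loop instead of copy-pasting. In this sketch the id:name pairs are sample values from the listing earlier (yours will differ, and would normally be parsed from list-param-contexts JSON output), and the commands are composed and printed rather than run:

```shell
# id:name pairs as sample values; parse them from list-param-contexts in practice.
contexts="3a801faf-1f87-1836-54ba-3d913fa223ad:retail 3a801fde-1f73-1836-957b-a9f4d2c9b73d:sensors"
for pair in $contexts; do
  id=${pair%%:*}     # text before the colon
  name=${pair##*:}   # text after the colon
  echo "bin/cli.sh nifi export-param-context -u http://localhost:8080 --paramContextId ${id} -ot json -o ${name}.json"
done
```

Swap the echo for an eval (or drop it) to actually run the exports.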

Reference

http://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#nifi_CLI


https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.5.1/versioning-a-dataflow/content/parameters-in-versioned-flows.html


Using Google Forms As a Data Source for NiFi Flows


Setup a Google Developers Account



Use or Create an API Key For Sheets at Developer Console


For Your Google Sheet (If not OAuth, You Need to Make it Visible via URL)

Or you will face PERMISSION_DENIED


Enable Google Sheets API

https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project=YOURPROJECTID


View Metrics

https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project=YOURPROJECTISCOOL

Access The Data Via NiFi

https://sheets.googleapis.com/v4/spreadsheets/YOURGOOGLESHEET?includeGridData=true&key=YOURKEY
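A minimal fetch sketch: the sheet ID and key are placeholders, so the URL is only composed and printed here; hand it to curl or to NiFi's InvokeHTTP to actually pull the data:

```shell
# Placeholders for your sheet ID and Sheets API key
SHEET_ID="YOURGOOGLESHEET"
API_KEY="YOURKEY"
URL="https://sheets.googleapis.com/v4/spreadsheets/${SHEET_ID}?includeGridData=true&key=${API_KEY}"
echo "$URL"
# e.g.: curl -s "$URL"
```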

References:

https://community.cloudera.com/t5/Community-Articles/Streaming-Ingest-of-Google-Sheets-with-HDF-2-0/ta-p/247764


Using DJL.AI For Deep Learning BERT Q&A in NiFi DataFlows

 



Introduction:

I will be talking about this processor at Apache Con @ Home 2020 in my "Apache Deep Learning 301" talk with Dr. Ian Brooks.

Sometimes you want your deep learning easy and in Java, so let's do that with DJL in a custom Apache NiFi processor running in CDP Data Hub.   This one does BERT QA.


To use the processor, feed in a paragraph to analyze via the paragraph parameter in the NiFi processor.   Also feed in a question, like "Why?" or something very specific, like asking for a date or an event.


The pretrained model is a BERT QA model using PyTorch. The NiFi processor source:

https://github.com/tspannhw/nifi-djlqa-processor


Grab the Recent Release NAR to install to your NiFi lib directories:

https://github.com/tspannhw/nifi-djlqa-processor/releases/tag/1.2


Example Run





Demo Data Source

https://newsapi.org/v2/everything?q=cloudera&apiKey=REGISTERFORAKEY






Deep Learning Note:   

BERT QA Model


Tip


Make sure you have 1-2 GB of extra RAM for your NiFi instance for each DJL processor you run.   If you have a lot of text, run more nodes and/or more RAM.   Make sure you have at least 8 cores per deep learning process.   I prefer JDK 11 for this.
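For example, NiFi's heap size lives in conf/bootstrap.conf; the stock 512 MB values can be raised along these lines (the exact sizes here are illustrative and depend on your load):

```properties
# conf/bootstrap.conf -- JVM memory settings for the NiFi instance
java.arg.2=-Xms2g
java.arg.3=-Xmx4g
```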


See Also:   https://www.datainmotion.dev/2019/12/easy-deep-learning-in-apache-nifi-with.html



Using DJL.AI For Deep Learning Based Sentiment Analysis in NiFi DataFlow



Introduction:

I will be talking about this processor at Apache Con @ Home 2020 in my "Apache Deep Learning 301" talk with Dr. Ian Brooks.

Sometimes you want your deep learning easy and in Java, so let's do that with DJL in a custom Apache NiFi processor running in CDP Data Hub.

Grab the Source:

https://github.com/tspannhw/nifi-djlsentimentanalysis-processor

Grab the Recent Release NAR to install to your NiFi lib directories:

https://github.com/tspannhw/nifi-djlsentimentanalysis-processor/releases/tag/1.2

Example Run

probnegative: 0.99
probnegativeperc: 99.44
probpositive: 0.01
probpositiveperc: 0.56
rawclassification: [class: "Negative", probability: 0.99440, class: "Positive", probability: 0.00559]
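Inside NiFi you would route on these attributes with RouteOnAttribute; outside NiFi, the same values can be pulled from the rawclassification string. A sed-based sketch using the sample output above:

```shell
# Sample rawclassification attribute value from the processor
raw='[class: "Negative", probability: 0.99440, class: "Positive", probability: 0.00559]'
# Extract each class probability
neg=$(printf '%s' "$raw" | sed -n 's/.*"Negative", probability: \([0-9.]*\).*/\1/p')
pos=$(printf '%s' "$raw" | sed -n 's/.*"Positive", probability: \([0-9.]*\).*/\1/p')
echo "negative=${neg} positive=${pos}"
```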

Demo Data Source

https://newsapi.org/v2/everything?q=cloudera&apiKey=REGISTERFORAKEY





Deep Learning Note:   

The pretrained model is DistilBERT model trained by HuggingFace using PyTorch.


Tip


Make sure you have 1-2 GB of extra RAM for your NiFi instance for each DJL processor you run.   If you have a lot of text, run more nodes and/or more RAM.   Make sure you have at least 8 cores per deep learning process.   I prefer JDK 11 for this.


See Also:   https://www.datainmotion.dev/2019/12/easy-deep-learning-in-apache-nifi-with.html



Cloudera Streams Messaging Manager Swagger Docs (For Kafka Monitoring, Management, Kafka Connect)




Note that the port is 8585, not the SMM UI port, which is often 9991.

YOURSERVER:8585/swagger
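To keep the two ports straight, here the URLs are composed and printed (host is a placeholder; hand the swagger URL to a browser or curl):

```shell
SMM_HOST="YOURSERVER"
SWAGGER_URL="http://${SMM_HOST}:8585/swagger"   # REST API / Swagger docs port
SMM_UI_URL="http://${SMM_HOST}:9991/"           # typical SMM UI port
echo "$SWAGGER_URL"
echo "$SMM_UI_URL"
```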

See:

https://docs.cloudera.com/smm/2.0.0/rest-api-reference/index.html#/Application_context_related_operations