One Minute NiFi Tip: Calcite SQL Notes

NiFi Quick Tip on SQL


You sometimes have to cast, as fields aren't what you think they are.   I have some temperatures that are stored as strings; yeah I know, let's yell at whoever did that.   Maybe it was some lazy developer (me?!).    Let's just cast to a type that makes sense for math and comparisons.   CAST is my friend.

SELECT * 
FROM FLOWFILE
WHERE CAST(temperaturef as FLOAT) > 60


Apache NiFi (like lots of other awesome projects) uses Apache Calcite for queries.   So if you need some SQL help, always look here:   https://calcite.apache.org/docs/reference.html

You can also include variables in your QueryRecord queries.

SELECT * 
FROM FLOWFILE
WHERE CAST(temperaturef as FLOAT) >= (CAST(${predictedTemperature} as FLOAT) - 5)


There are wildcard characters that you may need to watch.

Underscore has special meaning in LIKE patterns.  Also, there are often column names that are reserved words.   A lot of my columns coming from IoT have names like timestamp, start, end and other words used by SQL.   Just put backticks around them, like `start`.
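
For example, here is a minimal sketch of quoting reserved column names in a QueryRecord query. The column names are just illustrative, and the backtick quoting follows the convention above (Calcite's quoting character can vary by configuration):

-- illustrative column names; backticks guard the reserved words
SELECT `timestamp`, `start`, `end`, temperaturef
FROM FLOWFILE
WHERE `end` IS NOT NULL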

Watch those wildcards.

SELECT * 
FROM FLOWFILE
WHERE internal = false
  AND name NOT LIKE '@_@_%' ESCAPE '@'

FLaNK: Low Code Streaming: Populating Kafka Topics with FlinkSQL Joins in Real-Time


Then I can create my three tables.   Two are the sources to join and the third is the destination for my insert, which looks like this:



INSERT INTO global_sensor_events
SELECT
  scada.uuid,
  scada.systemtime,
  scada.temperaturef,
  scada.pressure,
  scada.humidity,
  scada.lux,
  scada.proximity,
  scada.oxidising,
  scada.reducing,
  scada.nh3,
  scada.gasko,
  energy.`current`,
  energy.voltage,
  energy.`power`,
  energy.`total`,
  energy.fanstatus
FROM energy,
     scada
WHERE
  scada.systemtime = energy.systemtime;

Examples

Assets / Scripts / DDL / SQL

Flink Guide to SQL Joins: https://www.youtube.com/watch?v=5AuBlVRKQuo

Slides

Article on Joins

Resources

Time Series Analysis - Dataflow






In a first, we joined together the forces of NYC, New Jersey and Philly to power this meetup.   A huge thanks to John Kuchmek, Amol Thacker and Paul Vidal for promoting and cross-running a sweet meetup.   John was an amazing meetup lead and made sure we kept moving.  A giant thanks to Cloudera marketing for helping with logistics and some awesome giveaways!   Hopefully next year we can do a Cinco de Mayo taco feast!  Bill Brooks and Robert Hryniewicz were a great help!   And thanks to Cloudera for providing CDP Public Cloud on AWS and CDP-DC on OpenStack for demos, development and general data fun.   And thanks to Bethann Noble and her awesome machine learning people for the initial meetup suggestion and speaker.



Philly - NJ - NYC


To quote John Kuchmek:

The Internet of Things (IoT) is growing in popularity but it isn’t new. Connected devices have existed in manufacturing and utilities with Supervisory Control and Data Acquisition (SCADA) systems. Time series data has been looked at for some time in these industries as well as the stock market. Time series analysis can bring valuable insight to businesses and individuals with smart homes. There are many parts and components to be able to collect data at the edge, store in a central location for initial analysis, model build, train and eventually deploy. Time series forecasting is one of the more challenging problems to solve in data science. Important factors in time series analysis and forecasting are seasonality, stationary nature of data and autocorrelation of target variables. We show you a platform, built on open source technology, that has this potential. Sensor data will be collected at the edge, off a Raspberry Pi, using Cloudera’s Edge Flow Manager (powered by MiNiFi). The data will then be pushed to a cluster containing Cloudera Flow Manager (powered by NiFi) so it can be manipulated, routed, and then be stored in Kudu on Cloudera’s Data Platform. Initial inspection can be done in Hue using Impala. The time series data will be analyzed with potential forecasting using an ARIMA model in CML (Cloudera Machine Learning). Time series analysis and forecasting can be applied to but not limited to stock market analysis, forecasting electricity loads, inventory studies, weather conditions, census analysis and sales forecasting.


The main portion of our meetup was an amazing talk by data scientist Victor Dibia.

Analyzing Time Series Data with an ARIMA model


His talk comes right after mine and is about an hour of in-depth data science with many hard questions answered.   Also a cool demo.   Thanks again, Victor.

We also had some really great attendees who asked some tough questions.  My favorite question was from a Flink expert who joined from the West Coast and asked for a FLaNK sticker.



Time Series Analysis - Dataflow

For my small part I did a demo of ingesting data from MiNiFi to NiFi to CML and Kafka.   Flink reads from two Kafka topics, joins them and inserts into a third Kafka topic.   We call the ML model for classification as part of our ingest flow.   This is an example of my FLaNK Stack.

MiNiFi reads data from sensors and a camera and sends it to a local NiFi gateway.   That NiFi gateway sends a stream to my CDP-hosted CFM NiFi cluster for processing.  This cluster splits the data based on which set of sensors it came from (energy or scada), then publishes to Kafka topics and populates Kudu tables with an UPSERT.
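
NiFi takes care of the Kudu UPSERT inside the flow; in Impala SQL terms the equivalent operation is roughly the sketch below.  The table name scada_kudu, its primary key and the values are all assumptions for illustration, not my actual table.

-- hypothetical Kudu-backed Impala table keyed on uuid; values are made up
UPSERT INTO scada_kudu (uuid, systemtime, temperaturef, pressure, humidity)
VALUES ('example-uuid-0001', '05/07/2020 12:00:00', '72.3', 1013.2, 45.1);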




We have great options for monitoring, querying and analyzing our data with the tools from CDP and CDP-DC.   These include Cloudera DAS, Apache Hue, Cloudera SMM for Kafka, Flink SQL console, Flink Dashboard, CML Notebooks, Jupyter Notebooks from CML and Apache Zeppelin.















As a separate way to investigate Kafka, I have created a Hive external table in beeline and connected it to a Kafka topic.  I can now query the current state of that topic.
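
A rough sketch of what that beeline DDL can look like, assuming the Hive-Kafka storage handler and JSON messages; the topic, broker address and column list here are illustrative, not the exact ones from my cluster.

-- illustrative external table over a Kafka topic via the Hive-Kafka storage handler
CREATE EXTERNAL TABLE energy_kafka (
  uuid STRING,
  systemtime STRING,
  `current` INT,
  voltage INT,
  `power` INT
)
STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
TBLPROPERTIES (
  'kafka.topic' = 'energy',
  'kafka.bootstrap.servers' = 'yourbroker:9092'
);

A simple SELECT * FROM energy_kafka LIMIT 10; then shows the latest messages sitting on the topic.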







Video Walkthrough of FlinkSQL Application (and awesome Machine Learning Talk on Time Series)



Slides From Talk


Related Articles

FLaNK:  Flink SQL Preview







From our Flink Web Dashboard, we can see how our insert is doing and view the joins and records passing quickly through our tiny cluster.










As part of the May 7th, 2020 Virtual Meetup, I did some work with Flink SQL to show as a quick demo introducing the meetup, and I found out how easy it was to do some cool stuff.   This was inspired by my streaming hero, Abdelkrim, who wrote this amazing article on Flink SQL use cases:   https://towardsdatascience.com/event-driven-supply-chain-for-crisis-with-flinksql-be80cb3ad4f9

For our time series meetup, I have a few streams of data coming from one device through a MiNiFi Java agent to NiFi for some transformation, routing and processing, and then sent on to Apache Flink for final processing.   I decided to join the Kafka topics with Flink SQL.


Let's create Flink Tables:

This table will receive the joined events from both source Kafka topics.

CREATE TABLE global_sensor_events (
  uuid STRING,
  systemtime STRING,
  temperaturef STRING,
  pressure DOUBLE,
  humidity DOUBLE,
  lux DOUBLE,
  proximity INT,
  oxidising DOUBLE,
  reducing DOUBLE,
  nh3 DOUBLE,
  gasko STRING,
  `current` INT,
  voltage INT,
  `power` INT,
  `total` INT,
  fanstatus STRING
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'global_sensor_events',
  'connector.startup-mode' = 'earliest-offset',
  'connector.properties.bootstrap.servers' = 'tspann-princeton0-cluster-0.general.fuse.l42.cloudera.com:9092',
  'connector.properties.group.id' = 'flink-sql-global-sensor_join',
  'format.type' = 'json'
);


This table will hold Kafka topic messages from our energy reader.

CREATE TABLE energy (
  uuid STRING,
  systemtime STRING,
  `current` INT,
  voltage INT,
  `power` INT,
  `total` INT,
  swver STRING,
  hwver STRING,
  type STRING,
  model STRING,
  mac STRING,
  deviceId STRING,
  hwId STRING,
  fwId STRING,
  oemId STRING,
  alias STRING,
  devname STRING,
  iconhash STRING,
  relaystate INT,
  ontime INT,
  activemode STRING,
  feature STRING,
  updating INT,
  rssi INT,
  ledoff INT,
  latitude INT,
  longitude INT,
  `day` INT,
  `index` INT,
  zonestr STRING,
  tzstr STRING,
  dstoffset INT,
  host STRING,
  currentconsumption INT,
  devicetime STRING,
  ledon STRING,
  fanstatus STRING,
  `end` STRING,
  te STRING,
  cpu INT,
  memory INT,
  diskusage STRING
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'energy',
  'connector.startup-mode' = 'earliest-offset',
  'connector.properties.bootstrap.servers' = 'tspann-princeton0-cluster-0.general.fuse.l42.cloudera.com:9092',
  'connector.properties.group.id' = 'flink-sql-energy-consumer',
  'format.type' = 'json'
);


The scada table holds events from our sensors.

CREATE TABLE scada (
  uuid STRING,
  systemtime STRING,
  amplitude100 DOUBLE,
  amplitude500 DOUBLE,
  amplitude1000 DOUBLE,
  lownoise DOUBLE,
  midnoise DOUBLE,
  highnoise DOUBLE,
  amps DOUBLE,
  ipaddress STRING,
  host STRING,
  host_name STRING,
  macaddress STRING,
  endtime STRING,
  runtime STRING,
  starttime STRING,
  cpu DOUBLE,
  cpu_temp STRING,
  diskusage STRING,
  memory DOUBLE,
  id STRING,
  temperature STRING,
  adjtemp STRING,
  adjtempf STRING,
  temperaturef STRING,
  pressure DOUBLE,
  humidity DOUBLE,
  lux DOUBLE,
  proximity INT,
  oxidising DOUBLE,
  reducing DOUBLE,
  nh3 DOUBLE,
  gasko STRING
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'scada',
  'connector.startup-mode' = 'earliest-offset',
  'connector.properties.bootstrap.servers' = 'tspann-princeton0-cluster-0.general.fuse.l42.cloudera.com:9092',
  'connector.properties.group.id' = 'flink-sql-scada-consumer',
  'format.type' = 'json'
);


This is the magic part:

INSERT INTO global_sensor_events
SELECT
  scada.uuid,
  scada.systemtime,
  scada.temperaturef,
  scada.pressure,
  scada.humidity,
  scada.lux,
  scada.proximity,
  scada.oxidising,
  scada.reducing,
  scada.nh3,
  scada.gasko,
  energy.`current`,
  energy.voltage,
  energy.`power`,
  energy.`total`,
  energy.fanstatus
FROM energy,
     scada
WHERE
  scada.systemtime = energy.systemtime;

So we join two Kafka topics and use some of their fields to populate a third Kafka topic that we defined above.

With Cloudera, it is so easy to monitor our streaming Kafka events with SMM.


For context, this is where the data comes from:



Streaming Data with Cloudera Data Flow (CDF) Into Public Cloud (CDP)







At Cloudera Now NYC, I showed a demo on streaming data from MQTT Sensors and Twitter that was running in AWS.   Today I am going to walk you through some of the details and give you the tools to build your own streaming demos in CDP Public Cloud.   If you missed that event, you can watch a recording here.



Let's get streaming!



Let's log in.  I use Okta for Single Sign-On (SSO), which makes this so easy.  Cloudera Flow Management (CFM) Apache NiFi is officially available in the CDP Public Cloud, so get started here.   We will be following the guide (https://docs.cloudera.com/cdf-datahub/7.1.0/howto-data-ingest.html).   We are running CDF DataHub on CDP 7.1.0.

There are a lot of data engineering and streaming tasks I can accomplish with a few clicks.   I can bring up a virtual data warehouse and use tools like Apache Hue and Data Analytics Studio to examine databases and tables and run queries.



We go to Data Hub Clusters and can see the latest Apache NiFi in there.   You can see we have Data Engineering, Kafka and NiFi clusters already built and ready to go.   It only takes a click, a few drop-down settings and a name to build and deploy in minutes.   This saves me and my team so much time.   Thanks, Cloud Team!



Kafka and NiFi Data Hub Clusters


Provision a New Data Hub - Op Db


Provision a New Data Hub - NiFi



Once built, the Kafka Data Hub is our launching place for Cloudera Manager, Schema Registry and SMM.



Provision a New Data Hub - Real-Time Data Mart



Data Engineering on AWS Details


Display Environments For Your Clouds



From the Data Hub cluster that we built for CFM (Apache NiFi) or for Apache Kafka, I can access Cloudera Manager to do monitoring, management and other tasks that Cloudera administrators are used to, like searching logs.


Let's jump into the Apache NiFi UI from CDP Data Hub.


Once I've logged into Flow Management, as an administrator I can see some of the server monitoring, metrics and administrative features and areas.





Our module for Twitter ingest on CDP Data Hub.





We can download our flow immediately and quickly send our code to version control.




We consume MQTT messages from my IoT gateway, which pushes readings from multiple devices over MQTT.



Using parameters that can be set via DevOps tooling or via Apache NiFi, we set up a reusable component to read from any MQTT broker.   Username, password, broker URI and topic are parameters that we set and can change for whatever use case we need.





Ingesting from Twitter is just as easy as reading from MQTT.


We can also parameterize our Twitter ingest for easy reuse.  For this Twitter ingest, we have some sensitive values that are protected, as well as some query terms for Twitter that limit our data to airline data.




Editing parameters from the NiFi UI is super easy.





All the data passing through the nodes of my cluster.



Apache NiFi has a ton of tabs for visualizing any of the metrics of interest.

















We set a JSON record reader to infer the schema of any incoming JSON data.



To write to Kafka, we have some parameters for the brokers and a record reader/writer.  We use the prebuilt "Default NiFi SSL Context Service" for SSL security.   We also need to specify SASL_SSL as the security protocol, PLAIN as the SASL mechanism, and your CDP username and password.




On a local edge server, we are publishing sensor data to MQTT.




PutHDFS Configuration (Store GZIPPED JSON Files)


Put To Hive Streaming Table


PutORC Files to CNOW3, autoconverted JSON to ORC






When we push files with PutORC, it builds the DDL for an external table automatically; just grab it from data provenance.




For storing to Apache Hive 3 tables, we have to set some parameters for the Hive configuration resources and the metastore for our data store.





In Apache NiFi, Ranger controls the policies for permissions to NiFi.  CDP creates one for NiFi Administrators, of which I am a member.

Version control is preconfigured for CDP Data Hub NiFi users with the same single sign-on.   Apache NiFi Registry will have all our modules and their versions.


Before we push NiFi changes to version control, we get a list of the changes we made.



We can see data as it travels through Apache NiFi in its built-in data provenance (lineage).


Let's check out our new data in Amazon S3.



We want to look at our data in Kafka, so we use Cloudera Streams Messaging Manager (SMM) to view, edit, monitor and manage everything Kafka.



We can build alerts for any piece of the Kafka infrastructure (brokers, topics, etc.).


I want to look at the lineage, provenance and metadata for my flow from data birth to storage.   Atlas is easy to use and integrated with CDP.   Thanks to the automagic configuration done in Cloudera Enterprise Data Cloud, NiFi, Kafka, HDFS, S3, Hive, HBase and more provide data that comes together in one easy-to-follow diagram powered by graphs.



The connection to Atlas is prebuilt for you in Apache NiFi; you can take a look and see.









Using Apache Hue, I can search our tables and produce simple charts.



We push our merged ORC files to the /tmp/cnow3 directory in S3, controlled by HDFS and full security, for an external Hive table.




It becomes trivial to push data to S3, whether it's compressed JSON files or internal ORC files used in Hive 3 tables.




As part of our output we push sensor readings to Slack for a sampling of current device status.




We can quickly access Cloudera SMM from CDP Data Hub with a single click thanks to Single Sign On.   Once in SMM, we can build alerts based on conditions within clusters, brokers, topics, consumers, producers, replication and more.


We can look at topics and see alerts in one UI.




We can view our alert messages from our history display.


After alerts are triggered, we can see a history of them in the UI for a single pane of glass view of the system.


The Brokers screen shows me totals for all the brokers and individual broker data.


I can browse inside a topic, like this one for our sensor data.   I can view the key, offset, timestamp and data, and I can view text, byte, JSON and Avro formatted data.   There is also a connection to the schema it used from the Cloudera Schema Registry.



Below is an example email sent via Cloudera SMM for an alert condition on Kafka.


Before we can query the ORC files that we have stored in HDFS, we'll need to create an external Hive table.
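
A rough sketch of what that external table DDL looks like, assuming the /tmp/cnow3 location mentioned earlier and a trimmed, illustrative column list (use the exact DDL that PutORC records in provenance):

-- illustrative: external Hive table over the ORC files NiFi landed in /tmp/cnow3
CREATE EXTERNAL TABLE IF NOT EXISTS cnow3 (
  uuid STRING,
  systemtime STRING,
  temperaturef STRING,
  pressure DOUBLE,
  humidity DOUBLE
)
STORED AS ORC
LOCATION '/tmp/cnow3';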


We can use Apache Hue or Data Analytics Studio to query our new table.
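
For example, a quick sanity check against that table might look like this, reusing the column names from the illustrative DDL above:

-- illustrative query; reuses the CAST tip since temperaturef is stored as a string
SELECT systemtime, temperaturef, humidity
FROM cnow3
WHERE CAST(temperaturef AS FLOAT) > 60
LIMIT 25;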



If you need to connect to a machine, you can SSH into an instance.  


If you need more information, join us in the Cloud, in the Community or up close in virtual Meetups.

Additional Resources