Smart Stocks with FLaNK (NiFi, Kafka, Flink SQL)

I would like to track stocks from IBM and Cloudera frequently during the day, using Apache NiFi to read a REST API. After that, I have some streaming analytics to perform with Apache Flink SQL, and I also want permanent, fast storage in Apache Kudu queried with Apache Impala.

Let's build that application cloud natively, in seconds, on AWS or Azure.

For scripts that load the schemas, tables and alerts, see scripts/

  • Kafka Topic
  • Kafka Schema
  • Kudu Table
  • Flink Prep
  • Flink SQL Client Run
  • Flink SQL Client Configuration
Once our automated admin has built our cloud environment and populated it with the goodness of our app, we can begin our continuous SQL.

If you know your data, build a schema and share it to the registry.

One unique thing we added was a default value in our Avro schema, making it a logicalType of timestamp-millis. This is helpful for Flink SQL timestamp-related queries.

{ "name" : "dt", "type" : ["long"], "default": 1, "logicalType": "timestamp-millis"}

You can see the entire schema here:

We will also want a topic for Stock Alerts that we will create later with Flink SQL, so let's define a schema for that as well.
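As a sketch, the stockalerts Avro schema could mirror the stocks fields plus the alert fields we will populate from Flink SQL later. The exact field list here is an assumption based on that later INSERT:

```json
{
  "type": "record",
  "name": "stockalerts",
  "fields": [
    {"name": "symbol", "type": "string"},
    {"name": "uuid", "type": "string"},
    {"name": "ts", "type": "long"},
    {"name": "dt", "type": ["long"], "default": 1, "logicalType": "timestamp-millis"},
    {"name": "datetime", "type": "string"},
    {"name": "open", "type": "string"},
    {"name": "close", "type": "string"},
    {"name": "high", "type": "string"},
    {"name": "volume", "type": "string"},
    {"name": "low", "type": "string"},
    {"name": "message", "type": "string"},
    {"name": "alertcode", "type": "string"},
    {"name": "alerttime", "type": "long"}
  ]
}
```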

For our data today, we will use AVRO data with AVRO schemas inside Kafka topics, for whoever consumes them.

How to Build a Smart Stock DataFlow in 10 Easy Steps

  1. Retrieve data from the source (for example, InvokeHTTP against an SSL REST feed such as TwelveData) on a schedule.
  2. Set a Schema Name (UpdateAttribute).
  3. ForkRecord:  We use this to split out records from the header (/values) using RecordPath syntax.
  4. QueryRecord:  Convert types and manipulate data with SQL.  We aren't doing anything in this one, but this is an option to change fields, add fields, etc.
  5. UpdateRecord:  In this first one I am setting some fields in the record from attributes and adding a current timestamp.  I also reformat my timestamp for conversion.
  6. UpdateRecord:  I am making dt a numeric UNIX timestamp.
  7. UpdateRecord:  I am making datetime my formatted String date time.
  8. (LookupRecord):  I don't have this step yet, as I don't have an internal record for this company in my Real-Time Data Mart.  I will probably add this step to augment or check my data.
  9. (ValidateRecord):  For less reliable data sources, I may want to validate my data against our schema; otherwise we will get warnings or errors.
  10. PublishKafkaRecord_2_0:  Convert from JSON to AVRO and send to our Kafka topic with headers, including a reference to the correct schema, stocks, and its version 1.0.
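The dt conversion in step 6 can be sketched as a single UpdateRecord property: a RecordPath on the left, NiFi Expression Language on the right, with the Replacement Value Strategy set to Literal Value. The incoming date format string is an assumption about the feed:

```
/dt  =>  ${field.value:toDate("yyyy-MM-dd HH:mm:ss"):toNumber()}
```

Here toDate parses the current field value into a date, and toNumber emits it as a UNIX timestamp in milliseconds.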

Now that we are streaming our data to Kafka topics, we can utilize it in Flink SQL continuous SQL applications, NiFi applications, Spark 3 applications and more. In this case CFM NiFi is our producer, and we will have CFM NiFi and CSA Flink SQL as Kafka consumers.

We can see what our data looks like in the new cleaned up format with all the fields we need.

Viewing, Monitoring, Checking and Alerting On Our Streaming Data in Kafka

Cloudera Streams Messaging Manager solves all of these difficult problems from one easy-to-use, pre-integrated UI. It is pre-wired into my Kafka Data Hubs and secured with SDX.

I can see that my AVRO data, with the associated stocks schema, is in the topic, ready to be consumed. I can then monitor who is consuming, how much, and whether there is lag or latency.

How to Store Our Streaming Data to Our Real-Time DataMart in the Cloud

Consume the stocks AVRO data with the stocks schema, then write to our Real-Time Data Mart in Cloudera Data Platform, powered by Apache Impala and Apache Kudu. If something fails or cannot connect, let's retry three times.

We use a parameter for our 3+ Kafka brokers with port. We could also have parameters for topic names and the consumer name. We read from the stocks topic, which uses the stocks schema that is referenced in the Kafka header and read automatically by NiFi. When we sent the message to Kafka, NiFi passed on our schema name via an attribute. As we can see it was schema-attached Avro, so we use that reader and convert to simple JSON with that schema.

Writing to our cloud-native Real-Time Data Mart could not be simpler: we reference the stocks table we have created and have permissions to, and use the JSON reader. I like UPSERT, since it handles both INSERT and UPDATE.
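As a sketch of what UPSERT buys us (the values are made up; the columns match the Kudu table created below), re-sending a row with the same primary key updates it in place instead of failing on a duplicate key:

```sql
-- Hypothetical example: the second UPSERT updates the existing row,
-- because (uuid, datetime) is the primary key.
UPSERT INTO stocks (uuid, `datetime`, `symbol`, `open`, `close`, `high`, `volume`, `low`)
VALUES ('abc-123', '2021-01-04 09:30:00', 'CLDR', '14.20', '14.25', '14.30', '100000', '14.10');

UPSERT INTO stocks (uuid, `datetime`, `symbol`, `open`, `close`, `high`, `volume`, `low`)
VALUES ('abc-123', '2021-01-04 09:30:00', 'CLDR', '14.20', '14.40', '14.45', '120000', '14.10');
```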

First we need to create our Kudu table, either in Apache Hue from CDP or scripted from the command line. Example:  impala-shell -i edge2ai-1.dim.local -d default -f /opt/demo/sql/kudu.sql

CREATE TABLE stocks
(
  uuid STRING,
  `datetime` STRING,
  `symbol` STRING,
  `open` STRING,
  `close` STRING,
  `high` STRING,
  `volume` STRING,
  `low` STRING,
PRIMARY KEY (uuid, `datetime`) )
STORED AS KUDU TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');

Using Apache Hue, integrated in CDP, I can examine my Real-Time Data Mart table and then query it.
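For example, a quick exploratory query (column names are from the Kudu table above; what comes back depends on your feed):

```sql
-- Latest ten readings, ordered by the datetime string column
SELECT `symbol`, `datetime`, `open`, `close`, `high`, `low`, `volume`
FROM stocks
ORDER BY `datetime` DESC
LIMIT 10;
```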

My data is now ready for reports, dashboards, applications, notebooks, web applications, mobile apps and machine learning.

I can now spin up a Cloudera Visual Application on this table in a few seconds.

Now we can build our streaming analytics application in Flink.

How to Build Smart Stock Streaming Analytics in X Easy Steps

I can connect to Flink SQL from the command line Flink SQL Client to start exploring my Kafka and Kudu data, create temporary tables and launch some applications (insert statements). The environment lets me see all the different catalogs available including registry (Cloudera Cloud Schema Registry), hive (Cloud Native Database table) and kudu (Cloudera Real-Time Cloud Data Mart) tables.

Run Flink SQL Client

It's a two-step process: first, set up a YARN session. You may need to add your Kerberos credentials.

flink-yarn-session -tm 2048 -s 2 -d

Then launch the command line SQL Client.

flink-sql-client embedded -e sql-env.yaml
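The sql-env.yaml referenced above is what wires up the catalogs mentioned later. A minimal sketch, assuming the Schema Registry and Kudu live on the same edge host; the catalog types and property keys here are assumptions that depend on your CSA version, so treat them as placeholders and check the CSA documentation:

```yaml
catalogs:
  - name: registry
    type: cloudera-registry
    # Assumed endpoints; replace with your own hosts/ports
    registry.properties.schema.registry.url: http://edge2ai-1.dim.local:7788/api/v1
    connector.properties.bootstrap.servers: edge2ai-1.dim.local:9092
  - name: kudu
    type: kudu
    kudu.masters: edge2ai-1.dim.local:7051
```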


Run Flink SQL

Cross Catalog Query to Stocks Kafka Topic

select * from registry.default_database.stocks;

Cross Catalog Query to Stocks Kudu/Impala Table

select * from kudu.default_database.impala::default.stocks;

Default Catalog

use catalog default_catalog;

CREATE TABLE stockEvents (
  symbol STRING, uuid STRING, ts BIGINT, dt BIGINT, datetime STRING,
  open STRING, close STRING, high STRING, volume STRING, low STRING,
  event_time AS CAST(from_unixtime(floor(ts/1000)) AS TIMESTAMP(3)),
  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND )
WITH ( 'connector.type' = 'kafka', 'connector.version' = 'universal',
'connector.topic' = 'stocks',
'connector.startup-mode' = 'earliest-offset',
'connector.properties.bootstrap.servers' = 'edge2ai-1.dim.local:9092',
'format.type' = 'registry',
'format.registry.properties.schema.registry.url' = 'http://edge2ai-1.dim.local:7788/api/v1' );

show tables;

Flink SQL> describe stockEvents; 

root
 |-- symbol: STRING
 |-- uuid: STRING
 |-- ts: BIGINT
 |-- dt: BIGINT
 |-- datetime: STRING
 |-- open: STRING
 |-- close: STRING
 |-- high: STRING
 |-- volume: STRING
 |-- low: STRING
 |-- event_time: TIMESTAMP(3) AS CAST(FROM_UNIXTIME(FLOOR(ts / 1000)) AS TIMESTAMP(3))
 |-- WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND

We added a watermark and an event time derived from our timestamp.

Simple Select All Query

select * from default_catalog.default_database.stockEvents;

We can do some interesting queries against this table we created.

Tumbling Window

SELECT symbol, 
TUMBLE_START(event_time, INTERVAL '1' MINUTE) as tumbleStart, 
TUMBLE_END(event_time, INTERVAL '1' MINUTE) as tumbleEnd, 
AVG(CAST(high as DOUBLE)) as avgHigh 
FROM stockEvents 
WHERE symbol is not null 
GROUP BY TUMBLE(event_time, INTERVAL '1' MINUTE), symbol;

Top 3

SELECT * FROM (
SELECT *,
ROW_NUMBER() OVER ( PARTITION BY window_start ORDER BY num_stocks desc ) AS rownum
FROM (
SELECT symbol,
TUMBLE_START(event_time, INTERVAL '10' MINUTE) AS window_start,
COUNT(*) AS num_stocks
FROM stockEvents
GROUP BY symbol,
TUMBLE(event_time, INTERVAL '10' MINUTE) ) )
WHERE rownum <= 3;

Stock Alerts

INSERT INTO stockalerts 
/*+ OPTIONS('sink.partitioner'='round-robin') */ 
SELECT CAST(symbol as STRING) symbol, 
CAST(uuid as STRING) uuid, ts, dt, open, close, high, volume, low, 
datetime, 'new-high' message, 'nh' alertcode, CAST(CURRENT_TIMESTAMP AS BIGINT) alerttime FROM stocks st 
WHERE symbol is not null 
AND symbol <> 'null' 
AND trim(symbol) <> '' 
AND CAST(close as DOUBLE) > 11;
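The INSERT above needs a stockalerts sink table available first. If it is not already exposed through the registry catalog, one way to register it is a sketch like this, mirroring the stockEvents definition; the property keys and endpoints are assumptions carried over from the earlier setup:

```sql
CREATE TABLE stockalerts (
  symbol STRING, uuid STRING, ts BIGINT, dt BIGINT,
  open STRING, close STRING, high STRING, volume STRING, low STRING,
  datetime STRING, message STRING, alertcode STRING, alerttime BIGINT )
WITH ( 'connector.type' = 'kafka', 'connector.version' = 'universal',
'connector.topic' = 'stockalerts',
-- Assumed to match the brokers and registry used for stockEvents
'connector.properties.bootstrap.servers' = 'edge2ai-1.dim.local:9092',
'format.type' = 'registry',
'format.registry.properties.schema.registry.url' = 'http://edge2ai-1.dim.local:7788/api/v1' );
```

The column order matches the SELECT list in the INSERT, so the alert fields land where Flink expects them.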

Monitoring Flink Jobs

Using the CSA Flink Global Dashboard, I can see all my Flink jobs running, including SQL Client jobs, disconnected Flink SQL inserts and deployed Flink applications.

We can also see the data populated in the stockalerts topic. We can run Flink SQL, Spark 3, NiFi or other applications against this data to handle alerts. That may be the next application; I may send those alerts to iPhone messages, Slack messages, a database table and a WebSockets app.

Data Lineage and Governance

We all know that NiFi has deep data lineage that can be pushed or pulled via REST, Reporting Tasks or the CLI for use in audits, metrics and tracking. If I want to see all the governance data for my entire streaming pipeline, I will use Apache Atlas, which is pre-wired as part of SDX in my Cloudera Data Platform.

