My Year in Review (2020)



Last year, I thought a few things would be coming in 2020.   

What's Coming in 2020

  • Cloud Enterprise Data Platforms
  • Hybrid Cloud
  • Streaming with Flink, Kafka, NiFi
  • AI at the Edge with Microcontrollers and Small Devices
  • Voice Data In Queries
  • Event Handler as a Service (Automatic Kafka Message Reading)
  • More Powerful Parameter Based Modular Streaming 
  • Cloud First For Big Data
  • Log Handling Moves to MiNiFi
  • Full AI At The Edge with Deployable Models
  • More Powerful Edge TPU/GPU/VPU
  • Kafka is everywhere
  • Open Source UI Driven Event Engines
  • FLaNK Stack gains popularity
  • FLINK Everywhere

Some of this was deferred due to the global pandemic: I had a big miss with Voice Data, and some advances in streaming were delayed.

Here's one last bit of news for 2020: SRM (Streams Replication Manager) was added to the Kafka Data Hub in the Public Cloud:

I also just did a best of 2020 video you can check out:



Articles from 2020

Talks Around the World (Virtual)


June 11, 2020 - INRHYTHM Lightning Talk




August 13, 2020 - Real-Time Analytics



August 28, 2020 - Apache Beam Summit





September 10, 2020 - Future of Data Princeton Meetup - Flink Flank Flink Fest



September 29 - October 1, 2020 - ApacheCon








September 29, 2020 - Apache Streaming Meetup



October 9, 2020 - DevOps Stage




October 22, 2020 - Flink Forward



October 26, 2020 - Open Source Summit Europe


October 27, 2020 - AI Dev World




October 30, 2020 - Nethope Conference with Cloudera Foundation

November 10, 2020 - Variis Talk


November 25, 2020 - Big Data Conference


December 14, 2020 - Apache MXNet Day

https://www.eventbrite.com/e/apache-mxnet-day-dec-14th-900-to-500-pm-pst-tickets-127767842055#

Source Code


Slides From Events and Talks 2020


This Year's Devices

  • NVIDIA Jetson Xavier NX
  • NVIDIA Jetson Nano 2GB
  • Breakout Garden for Raspberry Pi (I2C + SPI)
  • SGP30 Air Quality Sensor Breakout
  • MLX90640 Thermal Camera Breakout – Wide angle (110°)
  • Raspberry Pi 4 - 8G RAM!

What's Coming in 2021

  • AI Driven Streaming
  • Cloudera Data Platform on GCP
  • OpenShift Powered by CDP
  • More Hybrid Cloud
  • SQL Stream Builder for Flink SQL
  • More powerful Edge AI
  • More TPUs, GPUs and specialized chips
  • More Streaming
  • More COVID datasets
  • More Robots
  • More Automation
  • More Serverless
  • Voice Data In Queries
  • K8s Everywhere... 
  • K8s Backlash

References:





Simple Change Data Capture (CDC) with SQL Selects via Apache NiFi (FLaNK)



Sometimes you need real CDC: you have access to transaction change logs and you use a tool like Qlik Replicate or GoldenGate to pump records out to Kafka, and then Flink SQL or NiFi can read and process them.

Other times you need something easier for just basic changes and inserts on a few tables whose new data you want to receive as events.   Apache NiFi can do this easily for you with QueryDatabaseTableRecord; you don't need to know anything but the database connection information, the table name and which field may change.   NiFi will query, watch state and give you only the new records.   Nothing is hardcoded; parameterize those values and you have a generic Any RDBMS to Any Other Store data pipeline.   We are reading as records, which means each FlowFile in NiFi can hold thousands of records for which we know all the fields, types and schema-related information.   The schema can be one that NiFi infers or one we pull from a Schema Registry like Cloudera's amazing open source Schema Registry.

Let's see what data is in our PostgreSQL table.

How to 

  1. QueryDatabaseTableRecord (we will output JSON records, but could have done Parquet, XML, CSV or AVRO)
  2. UpdateAttribute - optional - set a table and schema name; can be done with parameters as well.
  3. MergeRecord - optional - let's batch these up.
  4. PutORC - let's send these records to HDFS (which could be on bare metal disks, GCS, S3, Azure or ADLS).   This will build us an external Hive table.




PutORC


As you can see, we are looking at the "prices" table and tracking maximum values on the updated_on date and the item_id sequential key so we only pick up new or changed rows.   We then output JSON records.
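To make the incremental behavior concrete, here is a rough sketch of the kind of SQL QueryDatabaseTableRecord ends up running once it has stored the previous maximum values in processor state. The literal values are made up, and the exact statement NiFi generates may differ:

-- Illustrative only: an incremental pull against the prices table using the
-- configured maximum-value columns (updated_on, item_id).
SELECT item_id, price, created_on, updated_on
FROM prices
WHERE updated_on > '2020-12-20 00:00:00'   -- last stored maximum for updated_on
  AND item_id > 1000                       -- last stored maximum for item_id
ORDER BY updated_on, item_id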




We could then:

Add-Ons Examples

  1. PutKudu
  2. PutHDFS (send as JSON, CSV, Parquet) and build an Impala or Hive table on top as external
  3. PutHive3Streaming (Hive 3 ACID Tables)
  4. PutS3
  5. PutAzureDataLakeStorage
  6. PutHBaseRecord
  7. PublishKafkaRecord_2_* - send a copy to Kafka for Flink SQL, Spark Streaming, Spring, etc...
  8. PutBigQueryStreaming (Google)
  9. PutCassandraRecord
  10. PutDatabaseRecord - let's send to another JDBC Datastore
  11. PutDruidRecord - Druid is a cool datastore, check it out on CDP Public Cloud
  12. PutElasticSearchRecord
  13. PutMongoRecord
  14. PutSolrRecord
  15. PutRecord (to many RecordSinkServices like Databases, Kafka, Prometheus, Scripted and Site-to-Site)
  16. PutParquet (store to HDFS as Parquet files)
You can do any number or all of these, or multiple copies of each to other clouds or clusters.    You can also add enrichment, transformation, alerts, queries or routing.

These records can be also manipulated ETL/ELT style with Record processing in stream with options such as:

  1. QueryRecord (use Calcite ANSI SQL to query and transform records and can also change output type; see the example after this list)
  2. JoltTransformRecord (use JOLT against any record not just JSON)
  3. LookupRecord (to match against Lookup services like caches, Kudu, REST services, ML models, HBase and more)
  4. PartitionRecord (to break up into like groups)
  5. SplitRecord (to break up record groups into records)
  6. UpdateRecord (update values in fields, often paired with LookupRecord)
  7. ValidateRecord (check against a schema and check for extra fields)
  8. GeoEnrichIPRecord
  9.  ConvertRecord (change between types like JSON to CSV)  
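As a quick example of the first option above, a QueryRecord query is plain Calcite SQL run against the incoming records. FLOWFILE is the built-in table name QueryRecord exposes, and the price filter is just an illustration against the prices records from earlier:

-- route or transform only the records we care about
SELECT item_id, price, updated_on
FROM FLOWFILE
WHERE price > 100.0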

When you use PutORC, it will give you the details on building your external table.   You can do a PutHiveQL to auto-build this table, but most companies want this done by a DBA.

CREATE EXTERNAL TABLE IF NOT EXISTS `pricesorc` (`item_id` BIGINT, `price` DOUBLE, `created_on` BIGINT, `updated_on` BIGINT)
STORED AS ORC
LOCATION '/user/tspann/prices'

Part 2

REST to Database

Let's reverse this now.   Sometimes you want to take data, say from a REST service, and store it in a JDBC datastore.

  1. InvokeHTTP (read from a REST endpoint)
  2. PutDatabaseRecord (put JSON to our JDBC store).
That's all it takes to store data to a database.   We could add some of the ETL/ELT enrichments mentioned above, or others that manipulate content.
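Under the covers, PutDatabaseRecord maps each incoming record to a parameterized statement against the target table. The table and column names below are hypothetical, purely to show the shape of what gets executed:

-- one prepared INSERT per record, with values bound from the JSON fields
INSERT INTO rest_results (id, name, price, created_on)
VALUES (?, ?, ?, ?)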




REST Output



Database Connection Pool

Get the REST Data

PutDatabaseRecord



From ApacheCon 2020, John Kuchmek gave a great talk on Incrementally Streaming RDBMS Data.



Resources



That's it.  Happy Holidays!

Ingesting Websocket Data for Live Stock Streams with Cloudera Flow Management Powered by Apache NiFi


The stocks I follow have a lot of trades and changes throughout the day.   I would like to capture all of this data and make it available to my colleagues.   I will push it to Kafka and make it available via a topic, and I may also push it to Slack, Discord, a webpage, a dashboard or a Cloudera Visual App dashboard.   We'll see what people request.

We will read websockets from wss://ws.finnhub.io?token=YOURTOKEN.   You will need to sign up for a finnhub.io account to get this data.   The API is well documented and very easy to use with Apache NiFi.

As updates happen, we receive websocket calls and send them to Kafka for use in Flink SQL, Kafka Connect, Spark Streaming, Kafka Streams, Python, Java Spring Boot apps, .NET apps and NiFi.

Definition of Fields

  • s - Symbol
  • p - Last price
  • t - UNIX milliseconds timestamp
  • v - Volume
  • c - List of trade conditions. A comprehensive list of trade condition codes can be found here.
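Once these trade events are in the trades Kafka topic, a consumer such as Flink SQL can put a table over them. This is only a sketch: the field names assume the readable renames described below, and the connector and Schema Registry settings mirror the demo environment used in the Smart Stocks post below, so adjust them for your own setup:

CREATE TABLE trades (
  symbol STRING,
  price DOUBLE,
  ts BIGINT,
  volume DOUBLE
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'trades',
  'connector.startup-mode' = 'earliest-offset',
  'connector.properties.bootstrap.servers' = 'edge2ai-1.dim.local:9092',
  'format.type' = 'registry',
  'format.registry.properties.schema.registry.url' = 'http://edge2ai-1.dim.local:7788/api/v1'
);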


Incoming Websocket Text Message Processing



We parse out the fields we want, then rename them to something readable.   Then we build a new JSON record that matches our trades schema and push it to Kafka.


As a first step, we need to set up a controller service to connect to finnhub's websocket API.


We can see data in flight via NiFi Provenance.




The detailed steps and settings for converting raw websocket text messages into the final messages sent to Kafka:













Raw Data From Websockets Text Message

Formatted JSON Data Before Converting and Sending to Kafka Topic (trades)


We can view the final clean data in Kafka via Cloudera Streams Messaging Manager (SMM).


Schema

https://github.com/tspannhw/ApacheConAtHome2020/blob/main/schemas/trades.avsc


Happy Holidays from Tim and the Streaming Felines!





Reference

Smart Stocks with FLaNK (NiFi, Kafka, Flink SQL)





I would like to track IBM and Cloudera stocks frequently during the day using Apache NiFi to read the REST API.   After that I have some streaming analytics to perform with Apache Flink SQL, and I also want permanent fast storage in Apache Kudu, queried with Apache Impala.

Let's build that application cloud natively, in seconds, on AWS or Azure.


To script loading the schemas, tables and alerts, see scripts/setup.sh:

  • Kafka Topic
  • Kafka Schema
  • Kudu Table
  • Flink Prep
  • Flink SQL Client Run
  • Flink SQL Client Configuration
Once our automated admin has built our cloud environment and populated it with the goodness of our app, we can begin our continuous SQL.


If you know your data, build a schema and share it to the registry.







One unique thing we added was a default value in our Avro schema, and we made the dt field a logicalType of timestamp-millis.  This is helpful for Flink SQL timestamp-related queries.

{ "name" : "dt", "type" : ["long"], "default": 1, "logicalType": "timestamp-millis"}

You can see the entire schema here:

https://raw.githubusercontent.com/tspannhw/SmartStocks/main/stocks.avsc

We will also want a topic for Stock Alerts that we will create later with Flink SQL, so let's define a schema for that as well.

 https://raw.githubusercontent.com/tspannhw/SmartStocks/main/stockalerts.avsc

For our data today, we will use AVRO data with AVRO schemas inside Kafka topics, for whoever will consume it.


How to Build a Smart Stock DataFlow in X Easy Steps








  1. Retrieve data from source (example:   InvokeHTTP against SSL REST Feed - say TwelveData) with a schedule.
  2. Set a Schema Name (UpdateAttribute)
  3. ForkRecord:  We use this to split out records from the header (/values) using RecordPath syntax.
  4. QueryRecord:  Convert type and manipulate data with SQL.   We aren't doing anything in this one, but this is an option to change fields, add fields, etc...
  5. UpdateRecord:  In this first one I am setting some fields in the record from attributes and adding a current timestamp.   I also reformat my timestamp for conversion.
  6. UpdateRecord:   I am making dt a numeric UNIX timestamp.
  7. UpdateRecord:  I am making datetime my formatted String date time.
  8. (LookupRecord):  I don't have this step yet as I don't have an internal record for this company in my Real-Time Data Mart.  https://docs.cloudera.com/runtime/7.0.3/kudu-overview/topics/kudu-architecture-cdp.html.  I will probably add this step to augment or check my data.
  9. (ValidateRecord):   For less reliable data sources, I may want to validate my data against our schema, otherwise we will get warnings or errors.
  10. PublishKafkaRecord_2_0:   Convert from JSON to AVRO, send to our Kafka topic with headers, including a reference to the correct schema stocks and its version 1.0.












Now that we are streaming our data to Kafka topics, we can utilize it in Flink SQL Continuous SQL applications, NiFi applications, Spark 3 applications and more.   So in this case CFM NiFi is our Producer and we will have CFM NiFi and CSA Flink SQL as Kafka Consumers.

We can see what our data looks like in the new cleaned up format with all the fields we need.



Viewing, Monitoring, Checking and Alerting On Our Streaming Data in Kafka

Cloudera Streams Messaging Manager solves all of these difficult problems from one easy-to-use, pre-integrated UI.   It is pre-wired into my Kafka Data Hubs and secured with SDX.




I can see my AVRO data with associated stocks schema is in the topic, ready to be consumed.  I can then monitor who is consuming, how much and if there is a lag or latency.

How to Store Our Streaming Data to Our Real-Time DataMart in the Cloud


Consume the stocks AVRO data with the stocks schema, then write to our Real-Time Data Mart in Cloudera Data Platform, powered by Apache Impala and Apache Kudu.   If something fails or cannot connect, let's retry three times.


We use a parameter for our 3+ Kafka brokers with port.   We could also have parameters for topic names and the consumer name.   We read from the stocks topic, which uses the stocks schema that is referenced in the Kafka header and automatically read by NiFi.   When we sent the message to Kafka, NiFi passed on our schema name via the schema.name attribute.   As we can see it was schema-attached Avro, so we use that Reader and convert to simple JSON with that schema.



Writing to our Cloud Native Real-Time Data Mart could not be simpler: we reference the stocks table we have created and have permissions to, and we use the JSON reader.   I like UPSERT since it handles both INSERT and UPDATE.

First we need to create our Kudu table in either Apache Hue from CDP or from the command line scripted.   Example:  impala-shell -i edge2ai-1.dim.local -d default -f  /opt/demo/sql/kudu.sql 
CREATE TABLE stocks
(
  uuid STRING,
  `datetime` STRING,
  `symbol` STRING, 
  `open` STRING, 
  `close` STRING,
  `high` STRING,
  `volume` STRING,
  `ts` TIMESTAMP,
  `dt` TIMESTAMP,
  `low` STRING,
PRIMARY KEY (uuid,`datetime`) ) 
PARTITION BY HASH PARTITIONS 4 
STORED AS KUDU TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');
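
To make the UPSERT behavior concrete, here is a sketch against this table: if the primary key (uuid, `datetime`) is new the row is inserted, otherwise it is updated in place. The values below are made up:

-- Impala UPSERT against the Kudu-backed stocks table (illustrative values)
UPSERT INTO stocks (uuid, `datetime`, `symbol`, `open`, `close`, `high`, `volume`, `ts`, `dt`, `low`)
VALUES ('a1b2c3', '2020-12-22 09:30:00', 'CLDR', '13.10', '13.40', '13.55', '250000', now(), now(), '12.95');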




Using Apache Hue integrated in CDP, I can examine my Real-Time Data Mart table and then query my table.


My data is now ready for reports, dashboards, applications, notebooks, web applications, mobile apps and machine learning.


I can now spin up a Cloudera Visual Application on this table in a few seconds.




Now we can build our streaming analytics application in Flink.

How to Build a Smart Stock Streaming Analytics in X Easy Steps

I can connect to Flink SQL from the command line Flink SQL Client to start exploring my Kafka and Kudu data, create temporary tables and launch some applications (insert statements). The environment lets me see all the different catalogs available including registry (Cloudera Cloud Schema Registry), hive (Cloud Native Database table) and kudu (Cloudera Real-Time Cloud Data Mart) tables.
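A minimal sketch of that exploration from the SQL Client prompt, assuming the catalog names configured in sql-env.yaml match those above:

-- list the configured catalogs, switch to the Schema Registry catalog and list its tables
show catalogs;
use catalog registry;
show tables;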











Run Flink SQL Client

It's a two-step process: first set up a YARN session.   You may need to add your Kerberos credentials.

flink-yarn-session -tm 2048 -s 2 -d

Then launch the command line SQL Client.

flink-sql-client embedded -e sql-env.yaml

 


Run Flink SQL

Cross Catalog Query to Stocks Kafka Topic

select * from registry.default_database.stocks;

Cross Catalog Query to Stocks Kudu/Impala Table

select * from kudu.default_database.impala::default.stocks;

Default Catalog

use catalog default_catalog;

CREATE TABLE stockEvents ( symbol STRING, uuid STRING, ts BIGINT, dt BIGINT, datetime STRING, open STRING, close STRING, high STRING, volume STRING, low STRING, 
event_time AS CAST(from_unixtime(floor(ts/1000)) AS TIMESTAMP(3)), 
WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND ) 
WITH ( 'connector.type' = 'kafka', 'connector.version' = 'universal', 
'connector.topic' = 'stocks', 
'connector.startup-mode' = 'earliest-offset', 
'connector.properties.bootstrap.servers' = 'edge2ai-1.dim.local:9092', 
'format.type' = 'registry', 
'format.registry.properties.schema.registry.url' = 'http://edge2ai-1.dim.local:7788/api/v1' );

show tables;

Flink SQL> describe stockEvents; 

root
 |-- symbol: STRING
 |-- uuid: STRING
 |-- ts: BIGINT
 |-- dt: BIGINT
 |-- datetime: STRING
 |-- open: STRING
 |-- close: STRING
 |-- high: STRING
 |-- volume: STRING
 |-- low: STRING
 |-- event_time: TIMESTAMP(3) AS CAST(FROM_UNIXTIME(FLOOR(ts / 1000)) AS TIMESTAMP(3))
 |-- WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND

We added a watermark and event time pulled from our timestamp.

Simple Select All Query

select * from default_catalog.default_database.stockEvents;

We can do some interesting queries against this table we created.

Tumbling Window

SELECT symbol, 
TUMBLE_START(event_time, INTERVAL '1' MINUTE) as tumbleStart, 
TUMBLE_END(event_time, INTERVAL '1' MINUTE) as tumbleEnd, 
AVG(CAST(high as DOUBLE)) as avgHigh 
FROM stockEvents 
WHERE symbol is not null 
GROUP BY TUMBLE(event_time, INTERVAL '1' MINUTE), symbol;

Top 3

SELECT * 
FROM 
( SELECT * , ROW_NUMBER() OVER 
( PARTITION BY window_start ORDER BY num_stocks desc ) AS rownum 
FROM ( 
SELECT TUMBLE_START(event_time, INTERVAL '10' MINUTE) AS window_start, 
symbol, 
COUNT(*) AS num_stocks 
FROM stockEvents 
GROUP BY symbol, 
TUMBLE(event_time, INTERVAL '10' MINUTE) ) ) 
WHERE rownum <=3;

Stock Alerts

INSERT INTO stockalerts 
/*+ OPTIONS('sink.partitioner'='round-robin') */ 
SELECT CAST(symbol as STRING) symbol, 
CAST(uuid as STRING) uuid, ts, dt, open, close, high, volume, low, 
datetime, 'new-high' message, 'nh' alertcode, CAST(CURRENT_TIMESTAMP AS BIGINT) alerttime FROM stocks st 
WHERE symbol is not null 
AND symbol <> 'null' 
AND trim(symbol) <> '' 
AND CAST(close as DOUBLE) > 11;


Monitoring Flink Jobs

Using the CSA Flink Global Dashboard, I can see all my Flink jobs running, including SQL Client jobs, disconnected Flink SQL inserts and deployed Flink applications.











We can also see the data populated in the stockalerts topic.   We can run Flink SQL, Spark 3, NiFi or other applications against this data to handle alerts.   That may be the next application; I may send those alerts to iPhone messages, Slack messages, a database table and a websockets app.
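
For example, a follow-on Flink SQL job could pull just the new-high alerts back out, assuming a stockalerts table is defined over that topic the same way stockEvents was:

SELECT symbol, alertcode, message, alerttime
FROM stockalerts
WHERE alertcode = 'nh';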



Data Lineage and Governance

We all know that NiFi has deep data lineage that can be pushed or pulled via REST, Reporting Tasks or the CLI for use in audits, metrics and tracking.   If I want to see all the governance data for my entire streaming pipeline, I will use Apache Atlas, which is pre-wired as part of SDX in my Cloudera Data Platform.



References