
FLANK Stack: NiFi Processor for Kafka Consumption on Demand - REST Proxy Example

Writing a Custom Kafka REST Proxy in 4 Hours


Writing a custom processor that lets NiFi act as a REST proxy to Kafka is very easy, so I made one for NiFi 1.10.   It's a simple Kafka smart client that accepts POSTs, GETs or whatever HTTP request and returns a message from a Kafka topic.   The topic can be set via variables, the HTTP request or however you choose.   To get one of the query parameters, you do so like this:   ${http.query.param.topic}.    I am using the plain old KafkaConsumer class.
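As a rough sketch of the idea (a hypothetical illustration of the plain KafkaConsumer approach, not the actual project code; it assumes a consumer configured as described further down):

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaRestProxySketch {

    // Return the next available message from the requested topic, or null if nothing
    // arrives within the timeout. The HTTP side (HandleHttpRequest/HandleHttpResponse)
    // is handled by NiFi around this call.
    public static String consumeOne(KafkaConsumer<String, String> consumer, String topic) {
        consumer.subscribe(Collections.singletonList(topic));
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
        for (ConsumerRecord<String, String> record : records) {
            return record.value();
        }
        return null;
    }
}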



If you need data, just curl it or use your HTTP/REST client library of choice in Java, Python, Go, Scala, Ruby, C#, VB.NET or whatever.


curl http://localhost:9089?topic=bme680
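
Here's the same call from Java, as a minimal sketch using the JDK 11 HttpClient (the host, port and topic are just the values from the curl example above):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KafkaRestProxyClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Ask the NiFi-based REST proxy for the next message on the bme680 topic.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:9089?topic=bme680")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}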

Download a pre-built NAR and install.   Note:   this is a pre-release alpha that I quickly built and tested on a few clusters.   This is not an official project or product.   This is a POC for myself to see how hard it could be.   It's not!   Roll your own or join me in building out an open source project for one.   What requirements do you have?   This worked for me: send a curl, get a message.



Build a topic for Kafka with SMM in seconds


Here's an entire Kafka REST Proxy in a few steps.

HandleHttpRequest (we could have hundreds of options here)
RouteOnAttribute


Provenance For An Example Kafka REST Call

To use the processor, we need to set some variables: Kafka Broker, Topic, Offset Reset, # of Records to grab at a time, Client Id, Group Id (important for keeping your offset), auto commit, and the deserializers for your key and value types - String is usually good, maybe Byte.
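
Those settings map onto standard KafkaConsumer properties. A minimal sketch with placeholder values (the broker address, group id, client id and record count below are mine, not the processor's actual defaults):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaRestProxyConfig {

    public static KafkaConsumer<String, String> buildConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafkabroker:9092");  // Kafka Broker (placeholder)
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "nifi-rest-proxy");            // Group Id - keeps your offset
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "nifi-rest-proxy-1");         // Client Id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");          // Offset Reset
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");             // auto commit
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "1");                  // # of records to grab at a time
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return new KafkaConsumer<>(props);
    }
}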


Kafka to HBase is Easy.

Kafka to Kudu is Easy.




Kafka Proxy Processor For Message Consumption

Source: https://github.com/tspannhw/kafkarest-processor

Pre-Built NAR

https://github.com/tspannhw/kafkarest-processor/releases/download/0.1/nifi-kafkarest-nar-1.0.nar

Other Kafka Articles



It's so easy, it didn't even wake the cat.




Year and Decade End Report: 201*

A Year in Big Data 2019

This has been an amazing year for Big Data, Streaming, Tech, The Cloud, AI, IoT and everything in between.   I got to witness the merger of the two Big Data giants into one unstoppable cloud data machine from the inside.   The sum of the new Cloudera is far greater than just Hortonworks + Cloudera.   It has been a great year working with amazing engineers, leaders, clients, partners, prospects, community members, data scientists, analysts, marketing mavens and everyone I have gotten to see this year.   I've been busy traveling the globe spreading the good word and solving some tough problems.

In 2019, Edge2AI became something we could teach to newbies and implement in a single day.   The combination of MiNiFi + NiFi + Kafka + Kudu + Cloud is unstoppable.   Once we added Flink later this year, the FLaNK stack became amazing.   I see amazing stuff for this in the 20's.   I got to use new projects like Kudu (awesome), Impala, Cloudera Manager and new tools from the Data in Motion team.   Streams Messaging Manager became the best way to manage, monitor, create, alert on and use Kafka across clusters anywhere.   This is now my favorite way to demo anything.   So much transparency, awesome.   Having the power of Apache Flink makes just about any problem solvable, even those that scale to thousands of nodes.   Running just one node of Flink has been awesome.   I am a Squirrel Dev now!

Strata, DataWorks Summit and NoSQL Day were awesome, but working with charities and non-profits solving real-world problems was amazing.   Helping at NetHope is the highlight of my professional year.   I am so thankful to the Cloudera Foundation for having me help.   I am really impressed with the Cloudera Foundation, NetHope and everyone involved.   I am hoping to speak at a few different conferences in 2020, but we'll see where Edge2AI takes me.

There's a lot to wrap up for 2019, so I have attempted to capture most of it after the break.


Migrating Apache Flume Flows to Apache NiFi: Any Relational Database To/From Anywhere




This is a simple use case: using NiFi as a gateway between relational databases and other sources and sinks.   We can do a lot more than that in NiFi.   We can SELECT, UPDATE, INSERT, DELETE and run any DML.   All with no code.   We can also access metadata from an RDBMS and build dynamic ELT systems from that.

It is extremely easy to do this in NiFi.


Instead of using Flume, Let's Use Apache NiFi to Move Any Tabular Data To and From Databases




From A Relational Database (via JDBC Driver) to Anywhere.   In our case, we will pull from an RDBMS and post to Kudu.


Step 1:  QueryDatabaseTableRecord (Create Connection Pool, Pick DB Type, Table Name, Record Writer)
Step 2:  PutKudu (Set Kudu Masters, Table Name, Record Reader)
Done!

Query Database


Connect to Kudu




Let's Write JSON Records That Get Converted to Kudu Records or RDBMS/MySQL/JDBC Records 



Schema For The Data




Read All The Records From Our JDBC Database





Let's Create an Apache Kudu table to Put Database Records To





Let's Examine the MySQL Table We Want to Read/Write To and From





Let's Check the MariaDB Table



MySQL Table Information






From Anywhere (Say a Device) to A Relational Database (via JDBC Driver).   In our case, we will insert into an RDBMS from Kafka.


Step 1:  Acquire or modify data, say with ConsumeKafkaRecord_2
Step 2:  PutDatabaseRecord (Set Record Reader, INSERT or UPDATE, Connection Pool, Table Name)
Done!


Put Database Records in Any JDBC/RDBMS



Setup Your Connection Pool to SELECT, UPDATE, INSERT or DELETE





SQL DDL

Create MariaDB/MySQL Table


CREATE TABLE iot (
  uuid VARCHAR(255) NOT NULL PRIMARY KEY,
  ipaddress VARCHAR(255), top1pct BIGINT, top1 VARCHAR(255),
  cputemp VARCHAR(255), gputemp VARCHAR(255), gputempf VARCHAR(255), cputempf VARCHAR(255),
  runtime VARCHAR(255), host VARCHAR(255), filename VARCHAR(255),
  imageinput VARCHAR(255), hostname VARCHAR(255), macaddress VARCHAR(255),
  `end` VARCHAR(255), te VARCHAR(255), systemtime VARCHAR(255),
  cpu BIGINT, diskusage VARCHAR(255), memory BIGINT, id VARCHAR(255)
);

Create Kudu Table


CREATE TABLE iot (
  uuid STRING,
  ipaddress STRING, top1pct BIGINT, top1 STRING,
  cputemp STRING, gputemp STRING, gputempf STRING, cputempf STRING,
  runtime STRING, host STRING, filename STRING,
  imageinput STRING, hostname STRING, macaddress STRING,
  `end` STRING, te STRING, systemtime STRING,
  cpu BIGINT, diskusage STRING, memory BIGINT, id STRING,
  PRIMARY KEY (uuid)
)
PARTITION BY HASH PARTITIONS 16
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');

References

Using GrovePi with Raspberry Pi and MiNiFi Agents for Data Ingest to Parquet, Kudu, ORC, Kafka, Hive and Impala

Using GrovePi with Raspberry Pi and MiNiFi Agents for Data Ingest


Source Code:  https://github.com/tspannhw/minifi-grove-sensors

Acquiring sensor data from Grove sensors is easy using a GrovePi Hat and some compatible sensors.


Just before my talk at the Future of Data Meetup @ Bell Works in Holmdel, NJ, I thought I should ingest some data from a Grove sensor interface.

It's so easy a sleeping cat could do it.




So what does this device look like?  



I have a temperature and humidity sensor on there.




The ultrasonic distance sensor is on there too; that's for the next article.




Let's do this with minimal RAM.




That's a 64GB hard drive underneath in the white case with the RPI.





I need more data and BACON.



We design our MiNiFi Agent Flow in CEM/EFM.   It runs the sensors and grabs the JSON data stream.


Apache NiFi 1.9.2 / CFM 1.0 Received HTTPS S2S Events From MiNiFi Agent




A simple flow to query and convert our JSON data, then store it to Kudu and HDFS (ORC) as well as push it to Kafka with a schema.




Let's read that Kafka message and store it to Parquet; we will push to MQTT and JMS in the next article.   This is our universal proxy/gateway.



We could infer a schema and not save it, but saving the schema to the Schema Registry makes SMM, Kafka, NiFi and other tools schema aware, and makes it easy to automagically query and convert between CSV/JSON/XML/Avro/Parquet and more.

Let's store the data in Parquet files on HDFS with an Impala table.   In Apache NiFi 1.10 there is a ParquetWriter we can use for that.



Before we push to Kafka, let's create a topic for it with Cloudera SMM



Let's build an Impala table for that Kudu data.



We can query our tables with ease as data is rapidly added.





Let's Examine the Parquet Files that NiFi Generated





Let's query that Parquet data with Impala in Hue



 Let's monitor that data in Kafka with Cloudera SMM






That was easy: from device to enterprise cloud data store(s) with enterprise messaging, security, governance, lineage, data catalog, SDX, monitoring and more.   How easily can you ingest IoT data, query it mid-stream and store it in multiple data stores?   It took longer to write the article than to do the project and code.   All graphical, Single Sign On, multiple schemas/versions/data types/engines, multiple OSs, edge, cloud and laptop.   Easy.

Table DDL


CREATE EXTERNAL TABLE IF NOT EXISTS grovesensors2 
(humidity STRING, uuid STRING, systemtime STRING, runtime STRING, cpu DOUBLE, id STRING, te STRING, host STRING, `end` STRING, 
macaddress STRING, temperature STRING, diskusage STRING, memory DOUBLE, ipaddress STRING, host_name STRING) 
STORED AS ORC
LOCATION '/tmp/grovesensors'

CREATE TABLE grovesensors ( uuid STRING,  `end` STRING,humidity STRING, systemtime STRING, runtime STRING, cpu DOUBLE, id STRING, te STRING, 
host STRING,
macaddress STRING, temperature STRING, diskusage STRING, memory DOUBLE, ipaddress STRING, host_name STRING,
PRIMARY KEY (uuid, `end`)
)
PARTITION BY HASH PARTITIONS 16
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '1')

hdfs dfs -mkdir -p /tmp/grovesensors
hdfs dfs -mkdir -p /tmp/groveparquet

CREATE EXTERNAL TABLE grove_parquet
(
  diskusage STRING,
  memory DOUBLE,
  host_name STRING,
  systemtime STRING,
  macaddress STRING,
  temperature STRING,
  humidity STRING,
  cpu DOUBLE,
  uuid STRING,
  ipaddress STRING,
  host STRING,
  `end` STRING,
  te STRING,
  runtime STRING,
  id STRING
)
STORED AS PARQUET
LOCATION '/tmp/groveparquet/'

Parquet Format



message org.apache.nifi.grove {
  optional binary diskusage (STRING);
  optional double memory;
  optional binary host_name (STRING);
  optional binary systemtime (STRING);
  optional binary macaddress (STRING);
  optional binary temperature (STRING);
  optional binary humidity (STRING);
  optional double cpu;
  optional binary uuid (STRING);
  optional binary ipaddress (STRING);
  optional binary host (STRING);
  optional binary end (STRING);
  optional binary te (STRING);
  optional binary runtime (STRING);
  optional binary id (STRING);
}

References