
SQL Reporting Task for Cloudera Flow Management / HDF / Apache NiFi


Would you like reporting tasks that gather metrics and send them from NiFi to your database or Kafka, based on SQL queries against NiFi provenance, bulletins, metrics, processor status, or other KPIs?

Now you can.  If you are using HDF 3.5.0, this reporting task NAR is preinstalled and ready to go.

Let's add some reporting tasks that use SQL: QueryNiFiReportingTask.

The first query that interested me was against provenance for one processor that consumes from a certain topic; I decided to run it every 10 seconds.  My query and some results are below.

So let's go to Controller Settings / Reporting Tasks and then add QueryNiFiReportingTask:

We add one per item we want to monitor.  Then the reporting task needs a place to send the records (a sink): DatabaseRecordSink (JDBC), KafkaRecordSink, PrometheusRecordSink, ScriptedRecordSink, or SiteToSiteReportingRecordSink.  I am going to use Kafka, but Prometheus, Database, and Site-to-Site are all good options.  If you use Site-to-Site, you can send the records to another NiFi cluster for that NiFinception processing route.

We have to write an Apache Calcite-compliant SQL query, set our sink, and decide if we want to include zero-record results (false is a good idea).

One option is to query the BULLETINS table, which are NiFi Cluster bulletins (warnings/errors).
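For example, a query like the following could surface only warnings and errors, most recent first (the column names here are illustrative; verify them against the table schema documented for your NiFi version):

```sql
-- Pull recent WARN/ERROR bulletins from the cluster
SELECT bulletinTimestamp, bulletinLevel, bulletinSourceName, bulletinMessage
FROM BULLETINS
WHERE bulletinLevel IN ('WARN', 'ERROR')
ORDER BY bulletinTimestamp DESC
LIMIT 25
```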

Another option is the CONNECTION_STATUS table.
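A sketch of a CONNECTION_STATUS query that flags backed-up connections might look like this (column names and the 1000-flowfile threshold are my assumptions; adjust for your flow):

```sql
-- Find connections with deep queues, largest backlog first
SELECT sourceName, destinationName, queuedCount, queuedBytes
FROM CONNECTION_STATUS
WHERE queuedCount > 1000
ORDER BY queuedBytes DESC
```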

How about NiFi JVM Metrics?  That has some good stuff in there.
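A minimal JVM_METRICS query could grab heap and thread health in one shot (the jvm_ column names below follow the pattern in the NiFi docs, but double-check them in your version):

```sql
-- Snapshot of heap, threads, and file descriptors
SELECT jvm_heap_used, jvm_heap_usage, jvm_thread_count,
       jvm_file_descriptor_usage, jvm_uptime
FROM JVM_METRICS
```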

Let's configure some Kafka record sinks.  I am using Kafka 2.3, so I'll use the Kafka 2 sink.  I set some brokers (default port 9092), create a new topic, and choose the record writer.  I chose JSON, but it could be CSV, Avro, Parquet, XML, or something else.  Stick with JSON or Avro for Kafka.

Data then starts getting sent; by default the schedule is every 5 minutes.  This can be adjusted: for provenance I set mine to every 10 seconds.


Let's look at the data as it passes through Kafka.  In the follow-up to this article, we'll land it in Impala/Kudu/Hive/Druid/Phoenix/HBase or HDFS/S3 and query it.  For now, we can examine the data within Kafka via Cloudera Streams Messaging Manager (SMM).

We can examine any of the topics in Kafka with SMM.   We can then start consuming these records and build live metric handling applications or send to live data marts and dashboards powered by Cloudera Data Platform (CDP).

We have a full set of SQL available for these reporting tasks, including column selection, aggregates like MAX and AVG, ordering, grouping, WHERE clauses, and row limits.
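Putting a few of those together, a hypothetical aggregate over PROCESSOR_STATUS might rank processor types by average processing time (column names are from the documented schema; treat this as a sketch):

```sql
-- Slowest processor types by average processing time
SELECT processorType,
       MAX(bytesWritten) AS max_bytes_written,
       AVG(processingNanos) AS avg_processing_nanos
FROM PROCESSOR_STATUS
GROUP BY processorType
ORDER BY avg_processing_nanos DESC
LIMIT 10
```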

Provenance Query

SELECT eventId, durationMillis, lineageStart, timestampMillis,
updatedAttributes, entitySize, details
FROM PROVENANCE
WHERE componentName = 'Consume Kafka iot messages'

Provenance Results

[{"eventId":2724,"durationMillis":69,"lineageStart":1586294707989,"timestampMillis":1586294708002,"updatedAttributes":"{path=./, schema.protocol.version=1, filename=be499074-c595-46f5-a03a-482607fb9c8c, schema.identifier=1, kafka.partition=7, kafka.offset=36, kafka.timestamp=1586294707933, kafka.topic=iot, schema.version=1, mime.type=application/json, uuid=be499074-c595-46f5-a03a-482607fb9c8c}","entitySize":246,"details":null},{"eventId":2736,"durationMillis":20,"lineageStart":1586294708905,"timestampMillis":1586294708916,"updatedAttributes":"{path=./, schema.protocol.version=1, filename=74c9e28d-82a4-4cea-8331-92b7a4bee1b3, schema.identifier=1, kafka.partition=3, kafka.offset=36, kafka.timestamp=1586294708898, kafka.topic=iot, schema.version=1, mime.type=application/json, uuid=74c9e28d-82a4-4cea-8331-92b7a4bee1b3}","entitySize":247,"details":null},..]

I have classes on using this technology and some webinars coming to you on the East Coast of the US and virtually at my meetup.

Cloudera HDF 3.5.0 Is Now Available For Download

HDF 3.5.0 includes the following components:
  • Apache Ambari 2.7.5
  • Apache Kafka 2.3.1
  • Apache NiFi 1.11.1
  • NiFi Registry 0.5.0
  • Apache Ranger 1.2.0
  • Apache Storm 1.2.1
  • Apache ZooKeeper 3.4.6
  • Apache MiNiFi Java Agent 0.6.0
  • Apache MiNiFi C++ 0.6.0
  • Hortonworks Schema Registry 0.8.1
  • Hortonworks Streaming Analytics Manager 0.6.0
  • Apache Knox 1.0.0
  • SmartSense 1.5.0

SQL reporting task. The QueryNiFiReportingTask allows users to execute SQL queries against tables containing information on Connection Status, Processor Status, Bulletins, Process Group Status, JVM Metrics, Provenance and Connection Status Predictions. In combination with Site to Site, it is particularly useful to define fine-grained monitoring capabilities on top of the running workflows.
