
Deleting Schemas From Cloudera Schema Registry


Deleting schemas from Cloudera Schema Registry is easy when you need to do so. I recommend downloading them and keeping a backup first.

Let's look at our schema

Well let's get rid of that junk.

Here is the documentation for CDF Data Hub in CDP Public Cloud.


curl -X DELETE "http://MYSERVERHASACOOLNAME.DEV:7788/api/v1/schemaregistry/schemas/junk" -H "accept: application/json"

Where junk is the name of my schema.

You could call this REST API from NiFi, a DevOps tool, or with a simple curl like the one listed above.
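Since a backup is recommended before deleting, here is a minimal sketch that downloads all versions of the schema first and then deletes it. The host name matches the hypothetical one above; substitute your own Schema Registry endpoint and schema name.

```shell
# Hypothetical endpoint and schema name; substitute your own.
SR_URL="http://MYSERVERHASACOOLNAME.DEV:7788/api/v1/schemaregistry"
SCHEMA_NAME="junk"

# 1. Back up all versions of the schema to a local JSON file.
curl -s "${SR_URL}/schemas/${SCHEMA_NAME}/versions" \
     -H "accept: application/json" > "${SCHEMA_NAME}-backup.json"

# 2. Delete the schema.
curl -s -X DELETE "${SR_URL}/schemas/${SCHEMA_NAME}" \
     -H "accept: application/json"
```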

Knox and other security may apply.

Using Cloudera Data Platform with Flow Management and Streams on Azure


Today I am going to be walking you through using Cloudera Data Platform (CDP) with Flow Management and Streams on Azure Cloud.  To see a streaming demo video, please join my webinar (or see it on demand) at Streaming Data Pipelines with CDF in Azure.  I'll share some additional how-to videos on using Apache NiFi and Apache Kafka in Azure very soon.   

Apache NiFi on Azure CDP Data Hub
Sensors to ADLS/HDFS and Kafka

In the above process group, we use QueryRecord to segment JSON records and pick only those where the temperature in Fahrenheit is over 80 degrees. We then extract a few attributes from each record and send them to a Slack channel.
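The QueryRecord filter is expressed as a dynamic property on the processor. A sketch of what that SQL might look like, assuming the record field is named temperature_f (your field name may differ):

```sql
-- Dynamic property (e.g. named "over80") on QueryRecord;
-- temperature_f is a hypothetical field name
SELECT * FROM FLOWFILE
WHERE temperature_f > 80
```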

To become a Kafka producer, you set a Record Reader for the incoming type (JSON in my case) and a Record Writer for the type to send to the sensors topic. Here we kept it as JSON, but we could convert to Avro; I usually do that if I am going to read it with Cloudera Kafka Connect.

Our security is automagic and requires little from you in NiFi. I put in my username and password from CDP. The SSL context is set up for me when I create my Data Hub.

When writing to our Real-Time Data Mart (Apache Kudu), I enter the Kudu servers I copied from the Kudu Data Mart hardware page, the table name and my login info. I recommend UPSERT and using your JSON Record Reader.
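A sketch of the PutKudu settings described above; the master host names and table name are hypothetical placeholders, so copy your actual masters from the Kudu Data Mart hardware page:

```
Kudu Masters:        kudu-master1.example.com:7051,kudu-master2.example.com:7051
Table Name:          impala::default.sensors
Kudu Operation Type: UPSERT
Record Reader:       JsonTreeReader
```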

For real use cases, you will need to spin up:

Public Cloud Data Hubs:
  • Streams Messaging Heavy Duty for AWS
  • Streams Messaging Heavy Duty for Azure
  • Flow Management Heavy Duty for AWS
  • Flow Management Heavy Duty for Azure
  • Apache Kafka 2.4.1
  • Cloudera Schema Registry 0.8.1
  • Cloudera Streams Messaging Manager 2.1.0
  • Apache NiFi 1.11.4
  • Apache NiFi Registry 0.5.0
Demo Source Code:

Let's configure our Data Hubs in CDP in an Azure environment. It is a few clicks and some naming, and then it builds.

Under the Azure Portal

In Azure, we can examine the files we uploaded to the Azure object store.

Under the Data Lake SDX

NiFi and Kafka are autoconfigured to work with Apache Atlas under our environment's Data Lake SDX. We can browse through the lineage for all the Kafka topics we use.

We can also see the flow for NiFi, HDFS and Kudu.


We can examine all of our Kafka infrastructure from Kafka Brokers, Topics, Consumers, Producers, Latency and Messages.  We can also create and update topics.

Cloudera Manager

We still have access to all of our traditional items like Cloudera Manager to manage configuration of servers.

Under Real-Time Data Mart

We can view tables, create tables and query our tables. Apache Hue is a great tool for accessing data in my Real-Time Data Mart in a Data Hub.

We can also look at table details in the Impala UI.

©2020 Timothy Spann

No More Spaghetti Flows

Spaghetti Flows

You may have heard of spaghetti code. Apache NiFi has its equivalent: I have seen some flows (and have built some myself in the past) that I call Spaghetti Flows.

Let's avoid them. When you are first building a flow, it often meanders and accumulates lots of extra steps, extra UpdateAttributes and random routes. This applies whether you are running on-premises, in CDP or in other stateful NiFi clusters (or single nodes). The following video from Mark Payne is a must-watch before you write any NiFi flows.

Apache NiFi Anti-Patterns with Mark Payne 

Do Not:

  • Do not put 1,000 flows on one workspace.

  • If your flow has hundreds of steps, this is a flow smell. Investigate why.

  • Do not use ExecuteProcess, ExecuteScript or lots of Groovy scripts as a default; look for existing processors first.

  • Do not use random custom processors you find that have no documentation or are unknown.

  • Do not forget to upgrade; if you are running anything before Apache NiFi 1.10, upgrade now!

  • Do not run on the default 512 MB of RAM.

  • Do not run one node and think you have a highly available cluster.

  • Do not split a file with millions of records into individual records in one shot without checking available space/memory and back pressure.

  • Use Split processors only as an absolute last resort. Many processors are designed to work on FlowFiles that contain many records or many lines of text. Keeping FlowFiles together instead of splitting them apart can often yield performance that is improved by 1-2 orders of magnitude.


  • Reduce, Reuse, Recycle.    Use Parameters to reuse common modules.

  • Put flows, reusable chunks (write to Slack, Database, Kafka) into separate Process Groups.

  • Write custom processors if you need new or specialized features

  • Use Cloudera supported NiFi Processors

  • Use RecordProcessors everywhere

  • Read the Docs!

  • Use the NiFi Registry for version control.

  • Use NiFi CLI and DevOps for Migrations.

  • Run a CDP NiFi Datahub or CFM managed 3 or more node cluster.

  • Walk through your flow and make sure you understand every step and it’s easy to read and follow.   Is every processor used?   Are there dead ends?

  • Do run Zookeeper on different nodes from Apache NiFi.

  • For Cloud Hosted Apache NiFi - go with the "high cpu" instances, such as 8 cores, 7 GB ram.

  • Avoid 'templatizing' the same flow and deploying it many, many times with different parameters in the same instance. Using routing based on content and attributes, so that one flow handles multiple nearly identical cases, is better than deploying the same flow many times with tweaked parameters in the same cluster.

  • Use the correct driver for your database. There are usually a couple of different JDBC drivers available.

  • Make sure you match your Hive version to the NiFi processor for it. There are processors out there for Hive 1 and Hive 3! HiveStreaming needs Hive 3 with ACID and ORC.

Let's revisit some Best Practices:

Get your Apache NiFi for Dummies.   My own NiFi 101.

Here are a few things you should have read and tried before building your first Apache NiFi flow:

Also, when in doubt, use Records! Use Record processors and pre-defined schemas; this will be easier to develop, cleaner and more performant. Easier, Faster, Better!

There are record processors for Logs (Grok), JSON, AVRO, XML, CSV, Parquet and more.

Look for a processor that has “Record” in the name like PutDatabaseRecord or QueryRecord.

Use the best DevOps processes, testing and tools.

There are some newer features in 1.8, 1.9, 1.10 and 1.11 that you need to use.

Advanced Articles:

Spaghetti is for eating, not for real-time data streams.   Let's keep it that way.

If you are not sure what to do check out the Cloudera Community, NiFi Slack or the NiFi docs.   Also I may have a helpful article here. Join me and my NiFi friends at virtual meetups for more in-depth NiFi, Flink, Kafka and more. We keep it interactive so you can feel free to ask questions.

Note:   In this picture I am in Italy doing spaghetti research.

Streaming Data with Cloudera Data Flow (CDF) Into Public Cloud (CDP)


At Cloudera Now NYC, I showed a demo on streaming data from MQTT Sensors and Twitter that was running in AWS.   Today I am going to walk you through some of the details and give you the tools to build your own streaming demos in CDP Public Cloud.   If you missed that event, you can watch a recording here.

Let's get streaming!

Let's log in. I use Okta for single sign-on (SSO), which makes this so easy. Cloudera Flow Management (CFM) Apache NiFi is officially available in the CDP Public Cloud, so get started here. We will be following the guide. We are running CDF Data Hub on CDP 7.1.0.

There are a lot of data engineering and streaming tasks I can accomplish with a few clicks. I can bring up a virtual data warehouse and use tools like Apache Hue and Data Analytics Studio to examine databases and tables and run queries.

We go to Data Hub Clusters and can see the latest Apache NiFi there. You can see we have Data Engineering, Kafka and NiFi clusters already built and ready to go. It only takes a click, a few drop-down settings and a name to build and deploy in minutes. This saves me and my team so much time. Thanks, Cloud Team!

Kafka and NiFi Data Hub Clusters

Provision a New Data Hub - Op Db

Provision a New Data Hub - NiFi

Once built, the Kafka Data Hub is our launching place for Cloudera Manager, Schema Registry and SMM.

Provision a New Data Hub - Real-Time Data Mart

Data Engineering on AWS Details

Display Environments For Your Clouds

From the Data Hub cluster that we built for CFM - Apache NiFi or for Apache Kafka, I can access Cloudera Manager to do monitoring, management and other tasks that Cloudera administrators are used to, like searching logs.

Let's jump into the Apache NiFi UI from CDP Data Hub.

Once I've logged into Flow Management, I can as an administrator see some of the server monitoring, metrics and administrative features and areas.

Our module for Twitter ingest on CDP Data Hub.

We can download our flow immediately and quickly send our code to version control.

We consume MQTT messages sent from my IoT gateway that is pushing messages from multiple devices via MQTT.

Using parameters that can be set via DevOps or via Apache NiFi, we set up a reusable component that can read from any MQTT broker. Username, password, broker URI and topic are parameters that we set and can change for any use case.
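The parameter context for that reusable MQTT module might look roughly like this; every value here is a hypothetical placeholder for your own broker details:

```
mqtt_broker_uri = tcp://mqtt-broker.example.com:1883
mqtt_topic      = sensors/#
mqtt_username   = sensoruser
mqtt_password   = (sensitive parameter)
```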

Ingesting from Twitter is just as easy as reading from MQTT.

We can also parameterize our Twitter ingest for easy reuse. For this Twitter ingest, we have some sensitive values that are protected, as well as some query terms to limit our data to airline-related tweets.

Editing parameters from the NiFi UI is super easy.

All the data passing through the nodes of my cluster.

Apache NiFi has a ton of tabs for visualizing any of the metrics of interest.

We set a JSON reader to infer the schema of any JSON data.

To write to Kafka, we have some parameters for brokers and a reader/writer for records. We use the prebuilt "Default NiFi SSL Context Service" for SSL security. We also need to specify SASL_SSL, PLAIN, and our CDP username and password.
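Put together, the Kafka producer security settings described above look roughly like this in a record-oriented Kafka processor such as PublishKafkaRecord; the username and password are placeholders for your own CDP workload credentials:

```
Security Protocol:   SASL_SSL
SASL Mechanism:      PLAIN
Username:            <your CDP workload username>
Password:            <your CDP workload password>
SSL Context Service: Default NiFi SSL Context Service
```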

On a local edge server, we are publishing sensor data to MQTT.

PutHDFS Configuration (Store GZIPPED JSON Files)

Put To Hive Streaming Table

PutORC Files to CNOW3, autoconverted JSON to ORC

When we push data through PutORC, it automatically builds the DDL for an external table; just grab it from data provenance.

For storing to Apache Hive 3 tables, we have to set some parameters for Hive Configuration and the metastore from our data store.

In Apache NiFi, Ranger controls policies for permissions to NiFi. CDP creates one for NiFi Administrators, of which I am a member.

Version Control is preconfigured for CDP Data Hub NiFi users with the same single sign on.   Apache NiFi Registry will have all our modules and their versions.

Before we push NiFi changes to version control, we get a list of the changes we made.

We can see data as it travels through Apache NiFi in its built-in data provenance (lineage).

Let's check out our new data in Amazon S3.

We want to look at our data in Kafka, so we can use Cloudera Streams Messaging Manager (SMM) to view, edit, monitor and manage everything Kafka.

We can build alerts for any piece of the Kafka infrastructure (broker, topics, etc...)

I want to look at the lineage, provenance and metadata for my flow from data birth to storage. Atlas is easy to use and integrated with CDP. Thanks to the automagic configuration done in Cloudera Enterprise Data Cloud, NiFi, Kafka, HDFS, S3, Hive, HBase and more provide data that comes together in one easy-to-follow diagram powered by graphs.

The connection to Atlas is prebuilt for you in Apache NiFi; you can take a look and see.

Using Apache Hue, I can search our tables and produce simple charts.

We push our merged ORC files to the /tmp/cnow3 directory in S3, managed through HDFS with full security, for an external Hive table.

It becomes trivial to push data to S3, whether it's compressed JSON files or internal ORC files used in Hive 3 tables.

As part of our output we push sensor readings to Slack for a sampling of current device status.

We can quickly access Cloudera SMM from CDP Data Hub with a single click thanks to Single Sign On.   Once in SMM, we can build alerts based on conditions within clusters, brokers, topics, consumers, producers, replication and more.

We can look at topics and see alerts in one UI.

We can view our alert messages from our history display.

After alerts are triggered, we can see a history of them in the UI for a single pane of glass view of the system.

The Brokers screen shows me totals for all the brokers and individual broker data.

I can browse inside a topic like this one for our sensors data. I can view the key, offset, timestamp and data. I can view text, byte, JSON and Avro formatted data. There is also a connection to the schema it used from the Cloudera Schema Registry.

Below is an example email sent via Cloudera SMM for an alert condition on Kafka.

Before we can query the ORC files that we have stored in HDFS, we'll need to create an external Hive table.
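A sketch of what such a DDL might look like, using the /tmp/cnow3 location mentioned earlier; the column names and types here are hypothetical, so prefer the exact DDL that PutORC writes to data provenance for your schema:

```sql
-- Hypothetical columns; grab the generated DDL from PutORC's provenance instead
CREATE EXTERNAL TABLE IF NOT EXISTS cnow3 (
  sensor_id     STRING,
  temperature_f DOUBLE,
  event_ts      BIGINT
)
STORED AS ORC
LOCATION '/tmp/cnow3';
```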

We can use Apache Hue or Data Analytics Studio to query our new table.

If you need to connect to a machine, you can SSH into an instance.  

If you need more information, join us in the Cloud, in the Community or up close in virtual Meetups.

Additional Resources