Apache NiFi Load Balancing via Load Balanced Connections

Modern Apache NiFi Load Balancing

In today's Apache NiFi, there is a new and improved means of load balancing data between the nodes in your cluster. With the introduction of NiFi 1.8.0, load balancing can be configured on any connection between processors. You now have an easy-to-set option for automatically balancing data across your nodes. The legacy days of using Remote Process
Groups to distribute load between Apache NiFi nodes are over. For maximum flexibility,
performance and ease, please make sure you upgrade your existing flows to
use the built-in connection load balancing.

If you are running a newer Apache NiFi or Cloudera Flow Management (CFM), you have a
better way of distributing processing between processors and servers. This applies to
Apache NiFi 1.8.0 and higher, including the newest version, 1.9.2.

Note:  Remote Process Groups are no longer necessary for load balancing!
Use actual load balanced connections instead!

Remote Process Groups should only be used for distributing to other clusters.



Apache NiFi Load Balancing 

Since 2018, it's been an awesome feature:  
https://blogs.apache.org/nifi/entry/load-balancing-across-the-cluster


We have a few load balancing strategies to choose from. These include
"Round Robin," where, under failure conditions, data is rebalanced to another
node after giving the original node a chance to reconnect and continue
processing. This rebalancing can move thousands of flow files per second or
more, depending on flow file size.



Data Distribution Strategies

The other options are "Partition by Attribute" and "Single Node," which will
queue up data until that single node or partitioned node returns. You
cannot pick which node in the cluster does that processing; to stay dynamic
and elastic for portability, it just needs to be one node. Partitioning allows
"like data" to be sent to the same node in the cluster, which may be necessary
for certain use cases. Using a custom attribute name for this routing can be
powerful, for example for merges in table-loading use cases. We can also
choose not to load balance at all.

Elastic Scaling for Apache NiFi

An important new feature added to NiFi allows nodes to be decommissioned
and disconnected from the cluster with all of their data offloaded. This
matters for Kubernetes and for dynamic scaling. Elastic scaling is important
for workloads that vary during the day or year, like hourly loads or weekly
jobs. Scale up to meet SLAs and deadlines, but scale down when possible to
save cloud spend! Now NiFi not only solves data problems but saves you cash
money!

Apache NiFi Node Affinity

Remote Process Groups do not support node affinity.    Node affinity is
supported in our Partition by Attribute strategy and has many uses.

Remote Process Groups

Remote Process Groups were the old way to handle the big former use case:
a source processor like ListSFTP runs on a single node, and its output then
needs to be spread across the cluster. We now have a better solution: the
connection coming out of that first processor can simply be set to "Round Robin".

Important Use Case

This load balancing feature of Apache NiFi shows the power of capturing a large or
unstructured dataset at the edge or in another datacenter, splitting and transferring it,
then using attribute affinity to a node to reconstitute the data in a particular order.
For example, sometimes you have a large bulk export from a system such as a
relational database dump in one multi-terabyte file. We need one NiFi node to load
this file, split it up into chunks, and transfer the chunks to the other nodes to process.
Sometimes record ordering requires that we use an attribute to keep related chunks (say, the same table) together on one node.

We also see this with a large zip file containing many files of many types. Often there will
be hundreds of files of multiple types, and we may want to route to the same
node based on the filename root. That way one NiFi node processes all files of the same type or
table. This is now trivial to implement and easy for any NiFi user to examine to see what is
going on in this ETL process.

    Migrating Apache Flume Flows to Apache NiFi: JMS To/From Anywhere



    This is a simple use case: being a gateway between JMS and other sources and sinks. We can do a lot more than that in NiFi. We can be a JMS consumer or producer, all with no code. We can work with topics and queues and any message types you have. We can turn tabular messages (JSON, CSV, XML, Avro, Parquet, grokable text) into records and process them at speed with queries, updates, merging and fast, schema-aware record processing. NiFi knows your fields and types, can validate them for you, and can query that data in real time with Apache Calcite SQL as it is sent to and from JMS topics and queues. We can store your schemas in the Cloudera Schema Registry and allow REST API access to them. Schemas are then accessible from Spark, Flink, Kafka, NiFi and more.
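
    As a rough sketch of that in-stream Calcite SQL (illustrative only; the price field below is a hypothetical attribute of the incoming records, not one defined in this post), a QueryRecord processor can filter JMS records like this:

    -- QueryRecord refers to the incoming FlowFile as the table FLOWFILE
    SELECT *
    FROM FLOWFILE
    WHERE CAST(price AS DOUBLE) > 100.0

    Records that match are routed out of the corresponding QueryRecord relationship and can be produced right back to a JMS topic or queue.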

    It is extremely easy to do this in NiFi.

    In our example we are using Apache ActiveMQ 5.15 as the JMS broker. We grab data from a few different REST sources and push it to and from the broker.


    Simple NiFi Flow For Pushing JMS Data to KUDU


    We can monitor our JMS Activity in Apache ActiveMQ's Web Console




    With Apache NiFi We Ingest All the REST Feeds




    These feeds include Coinbase




    NYC Demographics and Live Subway GTFS Data



    Transit Land Feeds and Operators


    World Trading Data



    Quandl REST Data


    It is easy to Consume JMS messages from Topics or Queues


    Consuming messages is a snap; we just need to set our Connection Factory Service, Destination, and Topic/Queue.




     JMS Connection Factory settings: just a Java class, a JAR path and the broker URI. Yes, we support SSL!


    For JMS Queues, pick QUEUE and your QUEUE Name


    Example JMS MetaData Produced including Delivery Mode, Expiration and Message ID




     Consume From a QUEUE


    Consume From A TOPIC



    Let's Push Any and All REST Feeds to JMS Topics and Queues










    Migrating Apache Flume Flows to Apache NiFi: Any Relational Database To/From Anywhere



    This is a simple use case: being a gateway between relational databases and other sources and sinks. We can do a lot more than that in NiFi. We can SELECT, UPDATE, INSERT, DELETE and run any DML, all with no code. We can also read metadata from an RDBMS and build dynamic ELT systems from it.
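
    As an illustrative sketch (not from the original flow), the kind of table metadata such a dynamic ELT system can read lives in MariaDB/MySQL's information_schema; the schema name mydb below is a hypothetical placeholder:

    SELECT table_name, column_name, data_type, is_nullable
    FROM information_schema.columns
    WHERE table_schema = 'mydb'
    ORDER BY table_name, ordinal_position;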

    It is extremely easy to do this in NiFi.


    Instead of using Flume, Let's Use Apache NiFi to Move Any Tabular Data To and From Databases




    From A Relational Database (via JDBC Driver) to Anywhere.   In our case, we will pull from an RDBMS and post to Kudu.


    Step 1:  QueryDatabaseTableRecord (Create Connection Pool, Pick DB Type, Table Name, Record Writer)
    Step 2:  PutKudu (Set Kudu Masters, Table Name, Record Reader)
    Done!  (See the sketch of the generated query below.)
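
    As a hedged illustration of what QueryDatabaseTableRecord issues under the hood against the iot table defined later in this post: the first run is a full fetch, and with a Maximum-value Column configured it fetches only new rows on later runs (the ts column and timestamp are hypothetical):

    -- first run: full fetch of the source table
    SELECT * FROM iot;

    -- later runs with a Maximum-value Column set (hypothetical ts column)
    SELECT * FROM iot WHERE ts > '2019-10-01 00:00:00';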

    Query Database


    Connect to Kudu




    Let's Write JSON Records That Get Converted to Kudu Records or RDBMS/MySQL/JDBC Records 



    Schema For The Data




    Read All The Records From Our JDBC Database





    Let's Create an Apache Kudu table to Put Database Records To





    Let's Examine the MySQL Table We Want to Read/Write To and From





    Let's Check the MariaDB Table



    MySQL Table Information






    From Anywhere (Say a Device) to A Relational Database (via JDBC Driver).   In our case, we will insert into an RDBMS from Kafka.


    Step 1:  Acquire or modify data, say with ConsumeKafkaRecord_2_0
    Step 2:  PutDatabaseRecord (Set Record Reader, INSERT or UPDATE, Connection Pool, Table Name)
    Done!  (See the example INSERT below.)
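
    For a rough sense of what PutDatabaseRecord builds from each record against the iot table defined in the DDL below (a sketch with made-up values; the real statement is generated from your record schema):

    INSERT INTO iot (uuid, ipaddress, cputemp, cpu, memory, systemtime)
    VALUES ('uuid-1234', '192.168.1.76', '45.2', 12, 1024, '2019-10-01 12:00:00');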


    Put Database Records in Any JDBC/RDBMS



    Setup Your Connection Pool to SELECT, UPDATE, INSERT or DELETE





    SQL DDL

    Create MariaDB/MySQL Table


    CREATE TABLE iot (
      uuid VARCHAR(255) NOT NULL PRIMARY KEY,
      ipaddress VARCHAR(255), top1pct BIGINT, top1 VARCHAR(255),
      cputemp VARCHAR(255), gputemp VARCHAR(255), gputempf VARCHAR(255), cputempf VARCHAR(255),
      runtime VARCHAR(255), host VARCHAR(255), filename VARCHAR(255), imageinput VARCHAR(255),
      hostname VARCHAR(255), macaddress VARCHAR(255), `end` VARCHAR(255), te VARCHAR(255),
      systemtime VARCHAR(255), cpu BIGINT, diskusage VARCHAR(255), memory BIGINT,
      id VARCHAR(255)
    );

    Create Kudu Table


    CREATE TABLE iot (
      uuid STRING,
      ipaddress STRING, top1pct BIGINT, top1 STRING,
      cputemp STRING, gputemp STRING, gputempf STRING, cputempf STRING,
      runtime STRING, host STRING, filename STRING, imageinput STRING,
      hostname STRING, macaddress STRING, `end` STRING, te STRING,
      systemtime STRING, cpu BIGINT, diskusage STRING, memory BIGINT,
      id STRING,
      PRIMARY KEY (uuid)
    )
    PARTITION BY HASH PARTITIONS 16
    STORED AS KUDU
    TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');

    Using GrovePi with Raspberry Pi and MiNiFi Agents for Data Ingest to Parquet, Kudu, ORC, Kafka, Hive and Impala


    Source Code:  https://github.com/tspannhw/minifi-grove-sensors

    Acquiring sensor data from Grove sensors is easy using a GrovePi Hat and some compatible sensors.


    Just before my talk at the Future of Data Meetup @ Bell Works in Holmdel, NJ, I thought I should ingest some data from a grove sensor interface.

    It's so easy a sleeping cat could do it.




    So what does this device look like?  



    I have a temperature and humidity sensor on there.




    The distance sonic sensor is on there too; that's for the next article.




    Let's do this with minimal RAM.




    That's a 64GB hard drive underneath in the white case with the RPI.





    I need more data and BACON.



    We design our MiNiFi Agent Flow in CEM/EFM: grab the JSON data stream and run the sensors.


    Apache NiFi 1.9.2 / CFM 1.0 Received HTTPS S2S Events From MiNiFi Agent




    A simple flow to query and convert our JSON data, then store it to Kudu and HDFS (ORC) as well as push it to Kafka with a schema.
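
    As a hedged sketch of the "query" step (illustrative, not a screenshot from the flow), a QueryRecord processor could run Calcite SQL like the following against the incoming sensor records; the 60.0 threshold is an arbitrary example value:

    SELECT uuid, systemtime, temperature, humidity, cpu, memory
    FROM FLOWFILE
    WHERE CAST(temperature AS DOUBLE) > 60.0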




    Let's read that Kafka message and store it to Parquet; we will push to MQTT and JMS in the next article. This is our universal proxy/gateway.



    We could infer a schema and not save it. But saving the schema to the Schema Registry makes SMM, Kafka, NiFi and other tools schema aware, and makes it easy to automagically query and convert between CSV/JSON/XML/Avro/Parquet and more.

    Let's store the data in Parquet files on HDFS with an Impala table. In Apache NiFi 1.10 there is a ParquetWriter.



    Before we push to Kafka, let's create a topic for it with Cloudera SMM.



    Let's build an Impala table for that Kudu data.



    We can query our tables with ease as data is rapidly added.
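
    For example, a quick Impala check against the Kudu-backed grovesensors table defined in the DDL below (an illustrative query, not from the original post):

    SELECT uuid, systemtime, temperature, humidity, cpu, memory
    FROM grovesensors
    ORDER BY systemtime DESC
    LIMIT 10;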





    Let's Examine the Parquet Files that NiFi Generated





     Let's query that Parquet data with Impala in Hue
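
    A hedged example of the kind of aggregate you might run in Hue over the grove_parquet table defined below:

    SELECT host_name,
           AVG(cpu) AS avg_cpu,
           AVG(memory) AS avg_memory,
           COUNT(*) AS readings
    FROM grove_parquet
    GROUP BY host_name;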



     Let's monitor that data in Kafka with Cloudera SMM






    That was easy: from device to enterprise cloud data store(s) with enterprise messaging, security, governance, lineage, data catalog, SDX, monitoring and more. How easily can you ingest IoT data, query it mid-stream and store it in multiple data stores? It took longer to write the article than to do the project and code. All graphical, single sign-on, multiple schemas/versions/data types/engines, multiple OSs, edge, cloud and laptop. Easy.

    Table DDL


    CREATE EXTERNAL TABLE IF NOT EXISTS grovesensors2 
    (humidity STRING, uuid STRING, systemtime STRING, runtime STRING, cpu DOUBLE, id STRING, te STRING, host STRING, `end` STRING, 
    macaddress STRING, temperature STRING, diskusage STRING, memory DOUBLE, ipaddress STRING, host_name STRING) 
    STORED AS ORC
    LOCATION '/tmp/grovesensors';

    CREATE TABLE grovesensors ( uuid STRING,  `end` STRING,humidity STRING, systemtime STRING, runtime STRING, cpu DOUBLE, id STRING, te STRING, 
    host STRING,
    macaddress STRING, temperature STRING, diskusage STRING, memory DOUBLE, ipaddress STRING, host_name STRING,
    PRIMARY KEY (uuid, `end`)
    )
    PARTITION BY HASH PARTITIONS 16
    STORED AS KUDU
    TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');

    hdfs dfs -mkdir -p /tmp/grovesensors
    hdfs dfs -mkdir -p /tmp/groveparquet

    CREATE  EXTERNAL TABLE grove_parquet 
     (
     diskusage STRING, 
      memory DOUBLE,  host_name STRING,
      systemtime STRING,
      macaddress STRING,
      temperature STRING,
      humidity STRING,
      cpu DOUBLE,
      uuid STRING,  ipaddress STRING,
      host STRING,
      `end` STRING,  te STRING,
      runtime STRING,
      id STRING
    )
    STORED AS PARQUET
    LOCATION '/tmp/groveparquet/';

    Parquet Format



    message org.apache.nifi.grove {
      optional binary diskusage (STRING);
      optional double memory;
      optional binary host_name (STRING);
      optional binary systemtime (STRING);
      optional binary macaddress (STRING);
      optional binary temperature (STRING);
      optional binary humidity (STRING);
      optional double cpu;
      optional binary uuid (STRING);
      optional binary ipaddress (STRING);
      optional binary host (STRING);
      optional binary end (STRING);
      optional binary te (STRING);
      optional binary runtime (STRING);
      optional binary id (STRING);
    }
