
Sizing Your Apache NiFi Cluster For Production Workloads

Cloudera Flow Management provides a supported enterprise edition of Apache NiFi, managed by Cloudera Manager. The official documentation provides a great guide for sizing your cluster.

https://docs.cloudera.com/cfm/2.0.1/nifi-sizing/topics/cfm-nifi-sizing.html

If your use case fits, the NiFi Stateless Engine may be a better option and can perform better since it does not write to disk.

Check your heap usage and utilization; you may need to increase it. 24-32 gigabytes of RAM is a nice sweet spot for most instances.



Check how your nodes, threads and queues are doing. If queues are not processing quickly or thread counts are high, you may need more cores, RAM or nodes.



When you are managing your cluster in Cloudera Manager, make sure you increase the default JVM memory for Apache NiFi.  512MB is not going to cut it for anything but single user development.
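If you are setting this by hand, the heap lives in NiFi's bootstrap.conf (Cloudera Manager exposes the same heap settings); the 16 GB values below are only an example starting point, not a one-size-fits-all recommendation:

# bootstrap.conf -- example heap settings only, size to your workload
java.arg.2=-Xms16g
java.arg.3=-Xmx16g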



Do this correctly and you can process a billion events! See https://blog.cloudera.com/benchmarking-nifi-performance-and-scalability/ and note the hardware and performance sections of that article.


General tips:

Make sure you use SSD for Provenance and other repositories.  Faster disk, happier user. https://docs.cloudera.com/cfm/2.0.1/nifi-sizing/topics/cfm-sizing-disk-configuration.html
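If you are laying out disks yourself, these are the nifi.properties entries that control where each repository lives; the /ssd* mount points below are placeholders for your own fast volumes:

# nifi.properties -- example paths only
nifi.flowfile.repository.directory=/ssd1/flowfile_repository
nifi.content.repository.directory.default=/ssd2/content_repository
nifi.provenance.repository.directory.default=/ssd3/provenance_repository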

Monitor your flows to see how many resources you need:  https://www.datainmotion.dev/2020/07/report-on-this-apache-nifi-1114-monitor.html.


Use records. If your data is semi-structured, GrokReader can help: https://www.nifi.rocks/record-path-cheat-sheet/. If it's CSV, JSON, XML, Parquet or logs, use record readers and writers. They are much faster, easier and cleaner.



Minimize use of CPU- or memory-intensive processors (or make a note of them during sizing):   https://docs.cloudera.com/cfm/2.0.1/nifi-sizing/topics/cfm-sizing-resource-intensive-processors.html

There are a few decisions to make about repositories; talk to your Cloudera friends.    https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.5.1/nifi-configuration-best-practices/content/configuration-best-practices.html



Using Cloudera Data Platform with Flow Management and Streams on Azure

Today I am going to be walking you through using Cloudera Data Platform (CDP) with Flow Management and Streams on Azure Cloud.  To see a streaming demo video, please join my webinar (or see it on demand) at Streaming Data Pipelines with CDF in Azure.  I'll share some additional how-to videos on using Apache NiFi and Apache Kafka in Azure very soon.   



Apache NiFi on Azure CDP Data Hub
Sensors to ADLS/HDFS and Kafka




In the above process group we use QueryRecord to segment the JSON records and only pick the ones where the temperature in Fahrenheit is over 80 degrees; we then pick out a few attributes from each record and send them to a Slack channel.
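As a rough sketch, the QueryRecord SQL for that hot-temperature route could look like the query below; the field names (station_id, temp_f, observation_time) are placeholders, so match them to whatever your sensor JSON actually contains:

SELECT station_id, temp_f, observation_time
FROM FLOWFILE
WHERE temp_f > 80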

To become a Kafka producer, you set a Record Reader for the incoming type (JSON in my case) and a Record Writer for the type to send to the sensors topic. In this case we kept it as JSON, but we could convert it to Avro. I usually do that if I am going to be reading it with Cloudera Kafka Connect.



Our security is automagic and requires little for you to do in NiFi. I put in my username and password from CDP. The SSL context is set up for me when I create my Data Hub.


When writing to our Real-Time Data Mart (Apache Kudu), I enter the Kudu servers that I copied from the Kudu Data Mart hardware page, my table name and my login info. I recommend UPSERT as the operation and a JSON record reader.


For real use cases, you will need to spin up:

Public Cloud Data Hubs:
  • Streams Messaging Heavy Duty for AWS
  • Streams Messaging Heavy Duty for Azure
  • Flow Management Heavy Duty for AWS
  • Flow Management Heavy Duty for Azure
Software:
  • Apache Kafka 2.4.1
  • Cloudera Schema Registry 0.8.1
  • Cloudera Streams Messaging Manager 2.1.0
  • Apache NiFi 1.11.4
  • Apache NiFi Registry 0.5.0
Demo Source Code:


Let's configure our Data Hubs in CDP in an Azure environment. It takes a few clicks and some naming, and then it builds.












Under the Azure Portal


In Azure, we can examine the files we uploaded to the Azure object store.





Under the Data Lake SDX


NiFi and Kafka are autoconfigured to work with Apache Atlas under our environment's Data Lake SDX. We can browse through the lineage for all the Kafka topics we use.






We can also see the flow for NiFi, HDFS and Kudu.

SMM

We can examine all of our Kafka infrastructure from Kafka Brokers, Topics, Consumers, Producers, Latency and Messages.  We can also create and update topics.




Cloudera Manager

We still have access to all of our traditional items like Cloudera Manager to manage configuration of servers.



Under Real-Time Data Mart

We can view tables, create tables and query our tables. Apache Hue is a great tool for accessing data in my Real-Time Data Mart in a Data Hub.



We can also look at table details in the Impala UI.


References
©2020 Timothy Spann



The Rise of the Mega Edge (FLaNK)

At one point edge devices were cheap, low-energy and low-powered. They might have had old WiFi and a single-core CPU running pretty slow. Now memory, GPUs, custom processors and substantial compute power have come to the edge.

Sitting on my desk is the NVIDIA Jetson Xavier NX, a massively powerful machine that can easily be used for edge computing while sporting 8 GB of fast RAM, a GPU with 384 NVIDIA CUDA® cores and 48 Tensor cores, and a 6-core 64-bit ARM CPU. This edge device would make a great workstation and is now something that can be affordably deployed in trucks, plants, sensors and other edge and IoT applications.


Next to that titan of a device is the inexpensive hobby device, the Raspberry Pi 4, which now sports 8 GB of LPDDR4 RAM and a 4-core 64-bit ARM CPU and is speedy! It can also be augmented with a Google Coral TPU or an Intel Movidius Neural Compute Stick 2.


These boxes come with fast networking, Bluetooth and modern hardware in small edge devices that can now be deployed en masse, enabling edge computing, fast data capture, smart processing and integration with servers and cloud services. By adding Apache NiFi's subproject MiNiFi (C++ and Java agents), we can easily integrate these powerful devices into a streaming data pipeline. We can now build very powerful flows from edge to cloud with Apache NiFi, Apache Flink, Apache Kafka (FLaNK) and Apache NiFi - MiNiFi. I can run AI, deep learning and machine learning, including Apache MXNet, DJL, H2O, TensorFlow, Apache OpenNLP and more, at any and all parts of my data pipeline. I can push models to my edge device, which now has a powerful GPU/TPU and adequate CPU, networking and RAM to do more than simple classification. The NVIDIA Jetson Xavier NX will run multiple real-time inference streams at 60 fps on multiple cameras.

I can run live SQL against these events at every segment of the data pipeline and combine it with machine learning, alert checks and flow programming. It's now easy to build and deploy applications from edge to cloud.

I'll be posting some simple examples in my next article.

By next year, 12 or 16 GB of RAM may be common for an edge device, perhaps with 2 CPUs of 8 cores each, multiple GPUs and large, fast SSD storage. My edge swarm may provide much of my computing power, as my flows running elastically on public and private clouds scale up and down based on demand in real time.


No More Spaghetti Flows

Spaghetti Flows




You may have heard of spaghetti code: https://en.wikipedia.org/wiki/Spaghetti_code. For Apache NiFi, I have seen some (and have built some of them in the past) that I call Spaghetti Flows.


Let's avoid them. When you are first building a flow, it often meanders and has lots of extra steps, extra UpdateAttributes and random routes. This applies if you are running on-premises, in CDP or in other stateful NiFi clusters (or single nodes). The following video from Mark Payne is a must-watch before you write any NiFi flows.


Apache NiFi Anti-Patterns with Mark Payne


https://www.youtube.com/watch?v=RjWstt7nRVY

https://www.youtube.com/watch?v=v1CoQk730qs

https://www.youtube.com/watch?v=JbUjYr6Kd3I

https://github.com/tspannhw/EverythingApacheNiFi 



Do Not:

  • Do not put 1,000 flows on one workspace.

  • If your flow has hundreds of steps, this is a Flow Smell.   Investigate why.

  • Do not use ExecuteProcess, ExecuteScript or a lot of Groovy scripts as a default; look for existing processors first.

  • Do not use random custom processors you find that have no documentation or come from unknown sources.

  • Do not forget to upgrade; if you are running anything before Apache NiFi 1.10, upgrade now!

  • Do not run on the default 512 MB of RAM.

  • Do not run one node and think you have a highly available cluster.

  • Do not split a file with millions of records to individual records in one shot without checking available space/memory and back pressure.

  • Use Split processors only as an absolute last resort. Many processors are designed to work on FlowFiles that contain many records or many lines of text. Keeping the FlowFiles together instead of splitting them apart can often yield performance that is improved by 1-2 orders of magnitude.


Do:

  • Reduce, Reuse, Recycle.    Use Parameters to reuse common modules.

  • Put flows, reusable chunks (write to Slack, Database, Kafka) into separate Process Groups.

  • Write custom processors if you need new or specialized features.

  • Use Cloudera-supported NiFi processors.

  • Use record processors everywhere.

  • Read the Docs!

  • Use the NiFi Registry for version control.

  • Use NiFi CLI and DevOps for Migrations.

  • Run a CDP NiFi Data Hub or a CFM-managed cluster of 3 or more nodes.

  • Walk through your flow and make sure you understand every step and that it is easy to read and follow. Is every processor used? Are there dead ends?

  • Do run ZooKeeper on different nodes from Apache NiFi.

  • For cloud-hosted Apache NiFi, go with "high CPU" instances, such as 8 cores and 7 GB of RAM.

  • Avoid deploying the same flow 'templatized' many, many times with different parameters in the same instance. Routing based on content and attributes so that one flow handles multiple nearly identical cases is better than deploying the same flow many times with tweaked parameters in the same cluster.

  • Use the correct driver for your database. There are usually a couple of different JDBC drivers available.

  • Make sure you match your Hive version to the NiFi processor for it. There are processors out there for Hive 1 and Hive 3! HiveStreaming needs Hive 3 with ACID and ORC; see the sketch just after this list. https://community.cloudera.com/t5/Support-Questions/how-to-use-puthivestreaming/td-p/108430
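As a minimal sketch (the table and column names are made up for illustration), a Hive 3 table for streaming ingest is transactional and stored as ORC:

-- illustrative only: a transactional ORC table for Hive 3
CREATE TABLE sensor_events (sensor_id STRING, temp_f DOUBLE, event_time TIMESTAMP)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');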


Let's revisit some Best Practices:


https://medium.com/@abdelkrim.hadjidj/best-practices-for-using-apache-nifi-in-real-world-projects-3-takeaways-1fe6912101db


Get your Apache NiFi for Dummies.   My own NiFi 101.


Here are a few things you should have read and tried before building your first Apache NiFi flow:

Also, when in doubt, use records! Use record processors and pre-defined schemas; this will be easier to develop, cleaner and more performant. Easier, faster, better!
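For example, a pre-defined schema is just a small Avro document you can register in Schema Registry or an AvroSchemaRegistry controller service; the record and field names here are only an illustration:

{
  "type": "record",
  "name": "weather_observation",
  "fields": [
    {"name": "station_id", "type": "string"},
    {"name": "temp_f", "type": "double"},
    {"name": "observation_time", "type": "string"}
  ]
}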


There are record processors for Logs (Grok), JSON, AVRO, XML, CSV, Parquet and more.


Look for a processor that has “Record” in the name like PutDatabaseRecord or QueryRecord.


Use the best DevOps processes, testing and tools.

There are some newer features in 1.8, 1.9, 1.10 and 1.11 that you need to use.

Advanced Articles:

Spaghetti is for eating, not for real-time data streams.   Let's keep it that way.


If you are not sure what to do, check out the Cloudera Community, NiFi Slack or the NiFi docs. I may also have a helpful article here. Join me and my NiFi friends at virtual meetups for more in-depth NiFi, Flink, Kafka and more. We keep it interactive, so feel free to ask questions.


Note:   In this picture I am in Italy doing spaghetti research.


Cloudera Flow Management 101: Let's Build a Simple REST Ingest to Cloud Datawarehouse With LowCode. Powered by Apache NiFi




Use NiFi to call a REST API, then transform, route and store the data


Pick any REST API of your choice, but I have walked through this one to grab a number of weather station reports. Weather or not we have good weather, we can query it anyway.

[Image: workshop overview]


We are going to build a GenerateFlowFile to feed our REST calls.


[Image: GenerateFlowFile configuration]
[
{"url":"http://weather.gov/xml/current_obs/CWAV.xml"},
{"url":"http://weather.gov/xml/current_obs/KTTN.xml"},
{"url":"http://weather.gov/xml/current_obs/KEWR.xml"},
{"url":"http://weather.gov/xml/current_obs/KEWR.xml"},
{"url":"http://weather.gov/xml/current_obs/CWDK.xml"},
{"url":"http://weather.gov/xml/current_obs/CWDZ.xml"},
{"url":"http://weather.gov/xml/current_obs/CWFJ.xml"},
{"url":"http://weather.gov/xml/current_obs/PAEC.xml"},
{"url":"http://weather.gov/xml/current_obs/PAYA.xml"},
{"url":"http://weather.gov/xml/current_obs/PARY.xml"},
{"url":"http://weather.gov/xml/current_obs/K1R7.xml"},
{"url":"http://weather.gov/xml/current_obs/KFST.xml"},
{"url":"http://weather.gov/xml/current_obs/KSSF.xml"},
{"url":"http://weather.gov/xml/current_obs/KTFP.xml"},
{"url":"http://weather.gov/xml/current_obs/CYXY.xml"},
{"url":"http://weather.gov/xml/current_obs/KJFK.xml"},
{"url":"http://weather.gov/xml/current_obs/KISP.xml"},
{"url":"http://weather.gov/xml/current_obs/KLGA.xml"},
{"url":"http://weather.gov/xml/current_obs/KNYC.xml"},
{"url":"http://weather.gov/xml/current_obs/KJRB.xml"}
]

So we are using ${url} which will be one of these. Feel free to pick your favorite airports or locations near you. https://w1.weather.gov/xml/current_obs/index.xml

If you wish to choose your own data adventure, you can pick one of these other feeds. You will have to build your own table if you wish to store the data. They return CSV, JSON or XML; since we have record processors, we don't care. Just know which one you pick.

Then we will use SplitJSON to split the JSON records into single rows.

[Image: SplitJson configuration]

Then use EvaluateJSONPath to extract the URL.

[Image: EvaluateJsonPath configuration]
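For reference, these two processors only need a couple of properties. The values below are what I would expect for the URL list above, but double-check them against your own flow:

SplitJson - JsonPath Expression: $.*
EvaluateJsonPath - Destination: flowfile-attribute
EvaluateJsonPath - url (dynamic property): $.url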

Now we are going to call those REST URLs with InvokeHTTP.

You will need to create a StandardSSLContextService controller service.

[Images: enabling SSL, StandardSSLContextService, SSL context configuration]


These are the defaults for the JDK JVM on Mac or some CentOS 7 installs. You may have set a real password; if so, you are awesome. If you don't know it, that's rough. You can build a new truststore with your own SSL certificates.

For more cloud ingest fun, https://docs.cloudera.com/cdf-datahub/7.1.0/howto-data-ingest.html.

SSL Defaults (In CDP Datahub, one is built for you automagically, thanks Michael).

Truststore filename: /usr/lib/jvm/java-openjdk/jre/lib/security/cacerts 

Truststore password: changeit 

Truststore type: JKS 

TLS Protocol: TLS
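If you want to confirm the truststore path and password before wiring them into the controller service, a quick check with the JDK's keytool (assuming the default cacerts path above) looks like this:

keytool -list -keystore /usr/lib/jvm/java-openjdk/jre/lib/security/cacerts -storepass changeit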


StandardSSLContextService for Your GET ${url}

[Image: InvokeHTTP configuration]



We can tweak these defaults.
[Image: InvokeHTTP default properties]

Then we are going to run a query to convert these and route based on our queries.

An example query on the current NOAA weather observations looks for temperatures in Fahrenheit below 60 degrees. You can make a query with any of the fields in the WHERE clause. Give it a try!

[Image: QueryRecord configuration]


You will need to set the Record Writer and Record Reader:

Record Reader: XML 

Record Writer: JSON


[Image: JSON RecordSetWriter configuration]

SELECT * FROM FLOWFILE
WHERE temp_f <= 60

SELECT * FROM FLOWFILE

Now we are splitting into three concurrent paths. This shows the power of Apache NiFi. We will write to Kudu, HDFS and Kafka.

For the results of our cold path (temp_f <= 60), we will write to a Kudu table.

[Image: PutKudu configuration]


Kudu Masters: edge2ai-1.dim.local:7051
Table Name: impala::default.weatherkudu
Record Reader: Infer Json Tree Reader
Kudu Operation Type: UPSERT

Before you run this, go to Hue and build the table.


[Images: choosing Impala in Hue and creating the weatherkudu table]
CREATE TABLE weatherkudu
(`location` STRING,`observation_time` STRING, `credit` STRING, `credit_url` STRING, `image` STRING, `suggested_pickup` STRING, `suggested_pickup_period` BIGINT,
`station_id` STRING, `latitude` DOUBLE, `longitude` DOUBLE,  `observation_time_rfc822` STRING, `weather` STRING, `temperature_string` STRING,
`temp_f` DOUBLE, `temp_c` DOUBLE, `relative_humidity` BIGINT, `wind_string` STRING, `wind_dir` STRING, `wind_degrees` BIGINT, `wind_mph` DOUBLE, `wind_gust_mph` DOUBLE, `wind_kt` BIGINT,
`wind_gust_kt` BIGINT, `pressure_string` STRING, `pressure_mb` DOUBLE, `pressure_in` DOUBLE, `dewpoint_string` STRING, `dewpoint_f` DOUBLE, `dewpoint_c` DOUBLE, `windchill_string` STRING,
`windchill_f` BIGINT, `windchill_c` BIGINT, `visibility_mi` DOUBLE, `icon_url_base` STRING, `two_day_history_url` STRING, `icon_url_name` STRING, `ob_url` STRING, `disclaimer_url` STRING,
`copyright_url` STRING, `privacy_policy_url` STRING,
PRIMARY KEY (`location`, `observation_time`)
)
PARTITION BY HASH PARTITIONS 4
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');

Let it run, then query it. The Kudu table is queried via Impala; try it in Hue.

[Image: querying the weatherkudu table in Hue]


The second fork is to Kafka; this will be for the 'all' path.


[Image: PublishKafka configuration]


Kafka Brokers: edge2ai-1.dim.local:9092
Topic: weather
Reader & Writer: reuse the JSON ones
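To sanity-check this fork, you can tail the topic from a broker host; this assumes the stock Kafka console consumer that ships with the cluster:

kafka-console-consumer --bootstrap-server edge2ai-1.dim.local:9092 --topic weather --from-beginning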

The third and final fork is to HDFS (which could be on top of S3 or Blob Storage) as Apache ORC files. This will also autogenerate the DDL for an external Hive table as an attribute; check your provenance after running.

[Image: MergeRecord configuration]


JSON in and out for the record reader/writer; you can adjust the time and size of your batches or use the defaults.

[Images: PutORC configuration]


Hadoop Config: /etc/hadoop/conf/hdfs-site.xml,/etc/hadoop/conf/core-site.xml
Record Reader: Infer Json
Directory: /tmp/weather
Table Name: weather

Before we run, build the /tmp/weather directory in HDFS and give it 777 permissions. We can do this with Apache Hue.
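If you prefer the command line to Hue, the same setup works with the standard HDFS CLI (run it as a user with rights on /tmp):

hdfs dfs -mkdir -p /tmp/weather
hdfs dfs -chmod 777 /tmp/weather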


[Images: creating the HDFS directory and changing its permissions in Hue]

Once we run we can get the table DDL and location:

[Image: PutORC provenance event with the weather table DDL]


Go to Hue to create your table.


[Image: creating the Hive table in Hue]
CREATE EXTERNAL TABLE IF NOT EXISTS `weather`
(`credit` STRING, `credit_url` STRING, `image` STRUCT<`url`:STRING, `title`:STRING, `link`:STRING>, `suggested_pickup` STRING, `suggested_pickup_period` BIGINT,
`location` STRING, `station_id` STRING, `latitude` DOUBLE, `longitude` DOUBLE, `observation_time` STRING, `observation_time_rfc822` STRING, `weather` STRING, `temperature_string` STRING,
`temp_f` DOUBLE, `temp_c` DOUBLE, `relative_humidity` BIGINT, `wind_string` STRING, `wind_dir` STRING, `wind_degrees` BIGINT, `wind_mph` DOUBLE, `wind_gust_mph` DOUBLE, `wind_kt` BIGINT,
`wind_gust_kt` BIGINT, `pressure_string` STRING, `pressure_mb` DOUBLE, `pressure_in` DOUBLE, `dewpoint_string` STRING, `dewpoint_f` DOUBLE, `dewpoint_c` DOUBLE, `windchill_string` STRING,
`windchill_f` BIGINT, `windchill_c` BIGINT, `visibility_mi` DOUBLE, `icon_url_base` STRING, `two_day_history_url` STRING, `icon_url_name` STRING, `ob_url` STRING, `disclaimer_url` STRING,
`copyright_url` STRING, `privacy_policy_url` STRING)
STORED AS ORC
LOCATION '/tmp/weather';
[Image: listing the weather files in HDFS]

You can now use Apache Hue to query your tables and do some weather analytics. When we are upserting into Kudu we are ensuring no duplicate reports for a weather station and observation time.

select `location`, weather, temp_f, wind_string, dewpoint_string, latitude, longitude, observation_time
from weatherkudu
order by observation_time desc, station_id asc;

select *
from weather;
[Image: the full lab flow]


In Atlas, we can see the flow.

[Image: the Kafka topic lineage in Atlas]