Sizing Your Apache NiFi Cluster For Production Workloads
Using Cloudera Data Platform with Flow Management and Streams on Azure
Apache NiFi on Azure CDP Data Hub
- Streams Messaging Heavy Duty for AWS
- Streams Messaging Heavy Duty for Azure
- Flow Management Heavy Duty for AWS
- Flow Management Heavy Duty for Azure
- Apache Kafka 2.4.1
- Cloudera Schema Registry 0.8.1
- Cloudera Streams Messaging Manager 2.1.0
- Apache NiFi 1.11.4
- Apache NiFi Registry 0.5.0
NiFi and Kafka are autoconfigured to work with Apache Atlas as part of our environment's Data Lake SDX. We can browse the lineage of all the Kafka topics we use.
- https://www.cloudera.com/about/enterprise-data-cloud.html
https://docs.cloudera.com/cdf-datahub/7.2.0/release-notes/topics/cdf-datahub-whats-new.html
https://dzone.com/articles/lets-build-a-simple-ingest-to-cloud-data-warehouse
https://www.datainmotion.dev/2020/06/no-more-spaghetti-flows.html
The Rise of the Mega Edge (FLaNK)
No More Spaghetti Flows
Spaghetti Flows
You may have heard of spaghetti code: https://en.wikipedia.org/wiki/Spaghetti_code. Apache NiFi has its own version. I have seen some (and have built some of them myself in the past); I call them Spaghetti Flows.
Let's avoid them. When you are first building a flow, it often meanders and picks up lots of extra steps, extra UpdateAttribute processors and random routes. This applies whether you are running on premises, in CDP or in other stateful NiFi clusters (or single nodes). The following video from Mark Payne is a must-watch before you write any NiFi flows.
Apache NiFi Anti-Patterns with Mark Payne
https://www.youtube.com/watch?v=RjWstt7nRVY
https://www.youtube.com/watch?v=v1CoQk730qs
https://www.youtube.com/watch?v=JbUjYr6Kd3I
https://github.com/tspannhw/EverythingApacheNiFi
Do Not:
Do not put 1,000 flows on one workspace.
If your flow has hundreds of steps, this is a flow smell. Investigate why.
Do not use ExecuteProcess, ExecuteScript or a lot of Groovy scripts as a default; look for existing processors first.
Do not use random custom processors you find that have no documentation or are of unknown origin.
Do not forget to upgrade. If you are running anything before Apache NiFi 1.10, upgrade now!
Do not run on the default 512 MB of RAM.
Do not run one node and think you have a highly available cluster.
Do not split a file with millions of records into individual records in one shot without checking available space/memory and back pressure.
Use Split processors only as an absolute last resort. Many processors are designed to work on FlowFiles that contain many records or many lines of text; keeping the FlowFiles together instead of splitting them apart can often improve performance by one to two orders of magnitude.
Do:
Reduce, Reuse, Recycle. Use Parameters to reuse common modules.
Put flows, reusable chunks (write to Slack, Database, Kafka) into separate Process Groups.
Write custom processors if you need new or specialized features.
Use Cloudera-supported NiFi processors.
Use Record processors everywhere.
Read the docs!
Use the NiFi Registry for version control.
Use the NiFi CLI and DevOps processes for migrations.
Run a CDP NiFi Datahub or CFM-managed cluster of 3 or more nodes.
Walk through your flow and make sure you understand every step and that it's easy to read and follow. Is every processor used? Are there dead ends?
Do run ZooKeeper on different nodes from Apache NiFi.
For cloud-hosted Apache NiFi, go with the "high CPU" instances, such as 8 cores and 7 GB of RAM.
Instead of "templatizing" the same flow and deploying it many, many times with different parameters in the same instance, use routing based on content and attributes so that one flow handles multiple nearly identical cases. That beats deploying the same flow repeatedly with tweaked parameters in the same cluster.
Use the correct driver for your database. There are usually a couple of different JDBC drivers to choose from.
Make sure you match your Hive version to the NiFi processor for it. There are processors out there for both Hive 1 and Hive 3! HiveStreaming needs Hive 3 with ACID and ORC (see the sketch below). https://community.cloudera.com/t5/Support-Questions/how-to-use-puthivestreaming/td-p/108430
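As a minimal, hypothetical sketch of what HiveStreaming expects (the table and column names here are my own, not from any flow in this post), the Hive 3 target must be a transactional ORC table, along these lines:

CREATE TABLE sensor_events_acid
(`sensor_id` STRING, `event_time` TIMESTAMP, `reading` DOUBLE)
STORED AS ORC
TBLPROPERTIES ('transactional' = 'true');

A plain external ORC table (like the weather table later in this post) works for batch-style loads but is not a valid HiveStreaming target, since streaming ingest needs the ACID table above.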
Let's revisit some Best Practices:
Get your Apache NiFi for Dummies. My own NiFi 101.
Here are a few things you should have read and tried before building your first Apache NiFi flow:
https://nifi.apache.org/docs/nifi-docs/html/nifi-in-depth.html
https://www.freecodecamp.org/news/nifi-surf-on-your-dataflow-4f3343c50aa2/
https://www.nifi.rocks/documents/nifi-expression-language-cheat-sheet.pdf
Also, when in doubt, use records! Use Record processors with pre-defined schemas; this will be easier to develop, cleaner and more performant. Easier, faster, better!!!
There are record processors for Logs (Grok), JSON, AVRO, XML, CSV, Parquet and more.
Look for a processor that has “Record” in the name like PutDatabaseRecord or QueryRecord.
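To make the record idea concrete, here is a hedged QueryRecord sketch (the field names are hypothetical, not from this post's flows). Each SQL statement is added as a dynamic property on QueryRecord, the property name becomes an outbound relationship, and the query runs across all of the records in the FlowFile without any splitting:

SELECT station_id, AVG(temp_f) AS avg_temp_f, COUNT(*) AS readings
FROM FLOWFILE
GROUP BY station_id

Pair it with an XML, CSV or JSON reader and whatever writer you need, and the FlowFile stays intact while the records inside it get filtered, aggregated and converted.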
Use the best DevOps processes, testing and tools.
https://www.datainmotion.dev/2019/11/nifi-toolkit-cli-for-nifi-110.html
https://dzone.com/articles/devops-for-apache-nifi-17-and-more
Here are some newer features in 1.8, 1.9, 1.10 and 1.11 that you need to use.
https://blogs.apache.org/nifi/entry/load-balancing-across-the-cluster
https://www.datainmotion.dev/2019/10/apache-nifi-load-balancing-via-load.html
Advanced Articles:
Spaghetti is for eating, not for real-time data streams. Let's keep it that way.
If you are not sure what to do, check out the Cloudera Community, the NiFi Slack or the NiFi docs. I may also have a helpful article here on this blog. Join me and my NiFi friends at virtual meetups for more in-depth NiFi, Flink, Kafka and more. We keep it interactive, so feel free to ask questions.
Cloudera Edge Management 1.1.0 Release
Cloudera Edge Management 1.1.1 Release
See: https://docs.cloudera.com/cem/1.1.0/index.html
https://docs.cloudera.com/cem/1.1.0/using-cem/topics/cem-deployment-monitor.html
We can now integrate with Grafana and use Prometheus Metrics:
https://docs.cloudera.com/cem/1.1.0/using-cem/topics/cem-monitoring-grafana-dashboard.html
Cloudera Flow Management 101: Let's Build a Simple REST Ingest to a Cloud Data Warehouse with Low Code, Powered by Apache NiFi
Use NiFi to call a REST API, then transform, route and store the data.
Pick any REST API of your choice, but I have walked through this one to grab a number of weather station reports. Weather or not we have good weather, we can query it anyway.
We are going to build a GenerateFlowFile to feed our REST calls.
[
{"url":"http://weather.gov/xml/current_obs/CWAV.xml"},
{"url":"http://weather.gov/xml/current_obs/KTTN.xml"},
{"url":"http://weather.gov/xml/current_obs/KEWR.xml"},
{"url":"http://weather.gov/xml/current_obs/KEWR.xml"},
{"url":"http://weather.gov/xml/current_obs/CWDK.xml"},
{"url":"http://weather.gov/xml/current_obs/CWDZ.xml"},
{"url":"http://weather.gov/xml/current_obs/CWFJ.xml"},
{"url":"http://weather.gov/xml/current_obs/PAEC.xml"},
{"url":"http://weather.gov/xml/current_obs/PAYA.xml"},
{"url":"http://weather.gov/xml/current_obs/PARY.xml"},
{"url":"http://weather.gov/xml/current_obs/K1R7.xml"},
{"url":"http://weather.gov/xml/current_obs/KFST.xml"},
{"url":"http://weather.gov/xml/current_obs/KSSF.xml"},
{"url":"http://weather.gov/xml/current_obs/KTFP.xml"},
{"url":"http://weather.gov/xml/current_obs/CYXY.xml"},
{"url":"http://weather.gov/xml/current_obs/KJFK.xml"},
{"url":"http://weather.gov/xml/current_obs/KISP.xml"},
{"url":"http://weather.gov/xml/current_obs/KLGA.xml"},
{"url":"http://weather.gov/xml/current_obs/KNYC.xml"},
{"url":"http://weather.gov/xml/current_obs/KJRB.xml"}
]
So we are using ${url}, which will be one of these. Feel free to pick your favorite airports or locations near you: https://w1.weather.gov/xml/current_obs/index.xml
If you wish to choose your own data adventure, you can pick one of these others; you will just have to build your own table if you wish to store it. They return CSV, JSON or XML, and since we have record processors we don't care. Just know which one you pick.
Then we will use SplitJson to split the JSON array into individual records.
Then use EvaluateJsonPath to extract the URL into the url attribute.
Now we are going to call those REST URLs with InvokeHTTP.
You will need to create a StandardSSLContextService controller service.
The settings below are the defaults for the JDK's cacerts truststore on a Mac or some CentOS 7 installs. You may have a real truststore password; if so, you are awesome. If you don't know it, that's rough; you can always build a truststore of your own.
For more cloud ingest fun, https://docs.cloudera.com/cdf-datahub/7.1.0/howto-data-ingest.html.
SSL Defaults (In CDP Datahub, one is built for you automagically, thanks Michael).
Truststore filename: /usr/lib/jvm/java-openjdk/jre/lib/security/cacerts
Truststore password: changeit
Truststore type: JKS
TLS Protocol: TLS
StandardSSLContextService for Your GET ${url}
Then we are going to use QueryRecord to convert these records and route them based on our queries.
Here is an example query on the current NOAA weather observations that looks for temperatures in Fahrenheit at or below 60 degrees. You can query on any of the fields in the WHERE clause. Give it a try!
You will need to set the Record Writer and Record Reader:
Record Reader: XML
Record Writer: JSON
Cold path query (temp_f <= 60):
SELECT * FROM FLOWFILE
WHERE temp_f <= 60

'All' path query (every record):
SELECT * FROM FLOWFILE
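If you want to experiment further, here is another hedged example (not part of the original flow) that could feed a hypothetical 'windy' relationship on the same QueryRecord:

SELECT station_id, wind_string, wind_mph, observation_time
FROM FLOWFILE
WHERE wind_mph > 20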
Now we are splitting into three concurrent paths. This shows the power of Apache NiFi. We will write to Kudu, HDFS and Kafka.
For the results of our cold path (temp_f <= 60), we will write to a Kudu table.
Kudu Masters: edge2ai-1.dim.local:7051
Table Name: impala::default.weatherkudu
Record Reader: Infer Json Tree Reader
Kudu Operation Type: UPSERT
Before you run this, go to Hue and build the table.
CREATE TABLE weatherkudu
(`location` STRING,`observation_time` STRING, `credit` STRING, `credit_url` STRING, `image` STRING, `suggested_pickup` STRING, `suggested_pickup_period` BIGINT,
`station_id` STRING, `latitude` DOUBLE, `longitude` DOUBLE, `observation_time_rfc822` STRING, `weather` STRING, `temperature_string` STRING,
`temp_f` DOUBLE, `temp_c` DOUBLE, `relative_humidity` BIGINT, `wind_string` STRING, `wind_dir` STRING, `wind_degrees` BIGINT, `wind_mph` DOUBLE, `wind_gust_mph` DOUBLE, `wind_kt` BIGINT,
`wind_gust_kt` BIGINT, `pressure_string` STRING, `pressure_mb` DOUBLE, `pressure_in` DOUBLE, `dewpoint_string` STRING, `dewpoint_f` DOUBLE, `dewpoint_c` DOUBLE, `windchill_string` STRING,
`windchill_f` BIGINT, `windchill_c` BIGINT, `visibility_mi` DOUBLE, `icon_url_base` STRING, `two_day_history_url` STRING, `icon_url_name` STRING, `ob_url` STRING, `disclaimer_url` STRING,
`copyright_url` STRING, `privacy_policy_url` STRING,
PRIMARY KEY (`location`, `observation_time`)
)
PARTITION BY HASH PARTITIONS 4
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '1');
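Once a few batches have landed, a quick sanity check in Hue (an illustrative query of my own, not from the original walkthrough) shows how many reports each station has upserted:

select station_id, count(*) as reports
from weatherkudu
group by station_id
order by reports desc;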
The second fork is to Kafka; this will be for the 'all' path.
Kafka Brokers: edge2ai-1.dim.local:9092
Topic: weather
Record Reader & Writer: reuse the JSON ones
The third and final fork is to HDFS (which could sit on top of S3 or Blob Storage) as Apache ORC files. This will also autogenerate the DDL for an external Hive table as an attribute; check your provenance after running.
Use JSON in and out for the record reader/writer; you can adjust the time and size of your batches or use the defaults.
Hadoop Config: /etc/hadoop/conf/hdfs-site.xml,/etc/hadoop/conf/core-site.xml
Record Reader: Infer Json
Directory: /tmp/weather
Table Name: weather
Before we run, build the /tmp/weather directory in HDFS and give it 777 permissions. We can do this with Apache Hue.
Once we run we can get the table DDL and location:
Go to Hue to create your table.
CREATE EXTERNAL TABLE IF NOT EXISTS `weather`
(`credit` STRING, `credit_url` STRING, `image` STRUCT<`url`:STRING, `title`:STRING, `link`:STRING>, `suggested_pickup` STRING, `suggested_pickup_period` BIGINT,
`location` STRING, `station_id` STRING, `latitude` DOUBLE, `longitude` DOUBLE, `observation_time` STRING, `observation_time_rfc822` STRING, `weather` STRING, `temperature_string` STRING,
`temp_f` DOUBLE, `temp_c` DOUBLE, `relative_humidity` BIGINT, `wind_string` STRING, `wind_dir` STRING, `wind_degrees` BIGINT, `wind_mph` DOUBLE, `wind_gust_mph` DOUBLE, `wind_kt` BIGINT,
`wind_gust_kt` BIGINT, `pressure_string` STRING, `pressure_mb` DOUBLE, `pressure_in` DOUBLE, `dewpoint_string` STRING, `dewpoint_f` DOUBLE, `dewpoint_c` DOUBLE, `windchill_string` STRING,
`windchill_f` BIGINT, `windchill_c` BIGINT, `visibility_mi` DOUBLE, `icon_url_base` STRING, `two_day_history_url` STRING, `icon_url_name` STRING, `ob_url` STRING, `disclaimer_url` STRING,
`copyright_url` STRING, `privacy_policy_url` STRING)
STORED AS ORC
LOCATION '/tmp/weather';
You can now use Apache Hue to query your tables and do some weather analytics. When we are upserting into Kudu we are ensuring no duplicate reports for a weather station and observation time.
select `location`, weather, temp_f, wind_string, dewpoint_string, latitude, longitude, observation_time
from weatherkudu
order by observation_time desc, station_id asc
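To confirm the Kudu upserts really are deduplicating (again, an illustrative check rather than part of the original post), look for primary key collisions; this should return no rows:

select `location`, observation_time, count(*) as copies
from weatherkudu
group by `location`, observation_time
having count(*) > 1;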
select *
from weather
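For a simple bit of weather analytics on the ORC-backed Hive table (a suggested example, not from the original post), something like this works in Hue:

select station_id, min(temp_f) as min_temp_f, max(temp_f) as max_temp_f, avg(wind_mph) as avg_wind_mph
from weather
group by station_id
order by max_temp_f desc;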
In Atlas, we can see the flow.