DataFlow Processors Cheat Sheet

Processors Available in Cloudera DataFlow Designer

I put together a quick reference list of some of the processors available in the April 2023 version of Flow Designer.

Available ReadyFlows

Using a ReadyFlow to build your data flow allows you to get started with CDF quickly and easily. A ReadyFlow is a flow definition template optimized to work with a specific CDP source and destination. So instead of spending your time on building the data flow in NiFi, you can focus on deploying your flow and defining the right KPIs for easy monitoring.

The ReadyFlow Gallery is where you can find out-of-the-box flow definitions. To use a ReadyFlow, add it to the Catalog and then use it to create a Flow Deployment.

This ReadyFlow consumes JSON, CSV or Avro data from a source ADLS container and transforms the data into Avro files before writing it to another ADLS container.

You can use the Airtable to S3/ADLS ReadyFlow to consume objects from an Airtable table, filter them, and write them as JSON, CSV or Avro files to a destination in Amazon S3 or Azure Data Lake Storage (ADLS).

You can use the Azure Event Hub to ADLS ReadyFlow to ingest JSON, CSV or Avro files from an Azure Event Hub namespace, optionally parsing the schema using CDP Schema Registry or direct schema input. The flow then filters records based on a user-provided SQL query and writes them to a target Azure Data Lake Storage (ADLS) location in the specified output data format.

You can use the Box to S3/ADLS ReadyFlow to move data from a source Box location to a destination Amazon S3 bucket or Azure Data Lake Storage (ADLS) location.

This ReadyFlow consumes JSON, CSV or Avro data from a source Kafka topic in Confluent Cloud and parses the schema by looking up the schema name in the Confluent Schema Registry. You can filter events by specifying a SQL query; the filtered events are then written to the destination S3 or ADLS location.

This ReadyFlow consumes JSON, CSV or Avro data from a source Kafka topic in Confluent Cloud and filters records based on a user-provided SQL query before writing them to a Snowflake table.

You can use the Dropbox to S3/ADLS ReadyFlow to ingest data from Dropbox and write it to a destination in Amazon S3 or Azure Data Lake Storage (ADLS).

You can use the Google Drive to S3/ADLS ReadyFlow to ingest data from a Google Drive location to a destination in Amazon S3 or Azure Data Lake Storage (ADLS).

This ReadyFlow consumes change data from the Wikipedia API and converts JSON events to Avro, filtering and merging them before writing a file to local disk.

This ReadyFlow retrieves objects from a Private HubSpot App, converting them to the specified output data format and writing them to the target S3 or ADLS destination.

This ReadyFlow moves data between database tables, filtering records with a SQL query.

This ReadyFlow consumes data from a source database table and filters events based on a user-provided SQL query before writing them to a destination Amazon S3 or Azure Data Lake Storage (ADLS) location in the specified output data format.

This ReadyFlow consumes JSON, CSV, or Avro data from a source Kafka topic and parses the schema by looking up the schema name in the CDP Schema Registry. You can filter events by specifying a SQL query in the Filter Rule parameter.
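To give a feel for what a Filter Rule does, here is a minimal Python sketch that applies a user-provided SQL query to a batch of records. The records, column layout, and query are made up for the example; in the actual flow the rule is evaluated by NiFi (QueryRecord-style, against a table named FLOWFILE), so an in-memory SQLite table merely stands in for that behavior here.

```python
import sqlite3

def filter_records(records, filter_rule):
    """Apply a user-provided SQL filter rule to a batch of records.

    Illustrative only: an in-memory SQLite table stands in for the
    QueryRecord-style evaluation CDF performs inside the flow.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE FLOWFILE (id INTEGER, city TEXT, temp REAL)")
    conn.executemany(
        "INSERT INTO FLOWFILE VALUES (?, ?, ?)",
        [(r["id"], r["city"], r["temp"]) for r in records],
    )
    cur = conn.execute(filter_rule)
    cols = [d[0] for d in cur.description]  # column names from the query result
    return [dict(zip(cols, row)) for row in cur]

# Hypothetical events and filter rule: keep readings above 30 degrees.
events = [
    {"id": 1, "city": "Austin", "temp": 34.5},
    {"id": 2, "city": "Oslo", "temp": 12.0},
]
hot = filter_records(events, "SELECT * FROM FLOWFILE WHERE temp > 30")
```

The key idea is that the filter is just SQL over the record schema, so you can change what gets through the flow by editing a parameter rather than rebuilding the flow itself.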

This ReadyFlow consumes JSON, CSV or Avro data from a source Kafka topic and merges the events into Avro files before writing the data to ADLS. The flow writes out a file every time its size has either reached 100 MB or five minutes have passed.

This ReadyFlow consumes JSON, CSV or Avro data from a source Kafka topic, parses the schema by looking up the schema name in the CDP Schema Registry and ingests it into an HBase table in COD.

This ReadyFlow consumes JSON, CSV, or Avro data from a source Kafka topic, parses the schema by looking up the schema name in the CDP Schema Registry, and ingests data into an Iceberg table in Hive.

This ReadyFlow consumes JSON, CSV, or Avro data from a source Kafka topic and parses the schema by looking up the schema name in the CDP Schema Registry.
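The schema-lookup step that most of these Kafka ReadyFlows share can be sketched as follows. This is an illustrative stand-in, not the CDP Schema Registry client: the registry is a plain dict mapping a schema name to an Avro-style schema, and the "parse" is a simple field check against it.

```python
import json

# Hypothetical stand-in for a schema registry: schema name -> Avro schema.
SCHEMA_REGISTRY = {
    "sensor-reading": {
        "type": "record",
        "name": "SensorReading",
        "fields": [{"name": "id", "type": "int"},
                   {"name": "temp", "type": "double"}],
    }
}

def parse_with_schema(schema_name, raw_json):
    """Look up a schema by name and check a JSON record against its fields."""
    schema = SCHEMA_REGISTRY[schema_name]
    record = json.loads(raw_json)
    missing = {f["name"] for f in schema["fields"]} - record.keys()
    if missing:
        raise ValueError(f"record missing fields: {sorted(missing)}")
    return record

rec = parse_with_schema("sensor-reading", '{"id": 7, "temp": 21.5}')
```

The point is that the flow never hard-codes the record layout; it fetches the schema by name at runtime, which is why the same flow definition can serve many topics.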

This ReadyFlow consumes JSON, CSV or Avro data from a source Kafka topic, parses the schema by looking up the schema name in the CDP Schema Registry and ingests it into a Kudu table.

This ReadyFlow consumes JSON, CSV or Avro data from a source Kafka topic and merges the events into Avro files before writing the data to S3. The flow writes out a file every time its size has either reached 100 MB or five minutes have passed.
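The size-or-age trigger described above (flush when the bin reaches 100 MB or five minutes have passed, whichever comes first) can be sketched in Python. The class is illustrative, not CDF code; the 100 MB / 5 minute defaults echo the ReadyFlow description, and in the real flow this is handled by NiFi's merge bin settings.

```python
import time

class MergeBin:
    """Accumulate records and flush when either threshold is hit.

    Illustrative sketch of a size-or-age merge trigger; the defaults
    mirror the 100 MB / five minute behavior described above.
    """
    def __init__(self, max_bytes=100 * 1024 * 1024, max_age_s=300):
        self.max_bytes = max_bytes
        self.max_age_s = max_age_s
        self.records = []
        self.size = 0
        self.opened_at = None  # set when the first record arrives

    def add(self, record: bytes):
        if self.opened_at is None:
            self.opened_at = time.monotonic()
        self.records.append(record)
        self.size += len(record)

    def should_flush(self) -> bool:
        if self.opened_at is None:
            return False
        return (self.size >= self.max_bytes
                or time.monotonic() - self.opened_at >= self.max_age_s)

    def flush(self):
        out, self.records, self.size, self.opened_at = self.records, [], 0, None
        return out

bin_ = MergeBin(max_bytes=10)   # tiny size threshold for the demo
bin_.add(b"abcdef")             # 6 bytes: under both thresholds
bin_.add(b"ghijkl")             # 12 bytes total: size threshold crossed
flushed = bin_.flush() if bin_.should_flush() else []
```

Batching like this is what keeps object stores such as S3 and ADLS from filling up with thousands of tiny per-event files.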

This ReadyFlow listens to a JSON, CSV or Avro data stream on a specified port and parses the schema by looking up the schema name in the CDP Schema Registry. You can filter events by specifying a SQL query. The filtered events are then converted to the specified output data format and written to the destination CDP Kafka topic.

This ReadyFlow listens to a Syslog data stream on a specified port. You can filter events by specifying a SQL query. The filtered events are then converted to the specified output data format and written to the target S3 or ADLS destination.

This ReadyFlow listens to a JSON, CSV or Avro data stream and parses the data based on a specified Avro-formatted schema. You can filter events by specifying a SQL query. The filtered events are then converted to the specified output data format and written to the target S3 or ADLS destination.

This ReadyFlow consumes JSON, CSV or Avro data from a source MQTT topic. You can filter events by specifying a SQL query. The filtered events are then converted to the specified output data format and written to the destination Kafka topic.

This ReadyFlow moves data between a non-CDP-managed source ADLS location and a CDP-managed destination ADLS location.

This ReadyFlow moves data between a non-CDP-managed source S3 location and a CDP-managed destination S3 location.

This ReadyFlow consumes JSON, CSV or Avro data from a source S3 bucket and transforms the data into Avro files before writing it to another S3 bucket.

This ReadyFlow consumes JSON, CSV or Avro data from a source S3 bucket and transforms the data into Avro files before writing it to another S3 bucket. The ReadyFlow is configured with notifications about new files that arrive in the source S3 bucket.

You can use the Salesforce filter to S3/ADLS ReadyFlow to consume objects from a Salesforce database table, filter them, and write the data as JSON, CSV or Avro files to a destination in Amazon S3 or Azure Data Lake Storage (ADLS).

This ReadyFlow consumes objects from a Custom Shopify App, converts them to the specified output data format, and writes them to a CDP-managed destination S3 or ADLS location.