Populating Apache Phoenix HBase Tables and Apache Hive Tables from RDBMS in real-time with streaming from Apache NiFi
INGESTING RDBMS DATA
I previously posted an article on ingesting and converting data (https://community.hortonworks.com/articles/64069/converting-a-large-json-file-into-csv.html). Once you have a SQL database loaded, you will eventually need to store your data in your one unified data lake. This is quite simple with NiFi. If you have a specialized tool that reads your RDBMS logs and sends them to Kafka or JMS, those streams would be easy to ingest as well. For those wishing to stay open source, NiFi works great. If you don't have a good monotonically increasing key to use, you can add an artificial one that increases on every insert. Almost every database, from MariaDB to Oracle, supports this.
ALTER TABLE `useraccount` ADD COLUMN `id` INT AUTO_INCREMENT UNIQUE FIRST;
For mine, I just added an auto-increment id column to act as my trigger.
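The ingest pattern that auto-increment column enables can be sketched in plain Python: track the highest id seen so far and pull only rows above it on each run, which is the same high-water-mark behavior NiFi's QueryDatabaseTable processor provides via its "Maximum-value Columns" property. This sketch uses an in-memory SQLite table with illustrative names; it is a demonstration of the pattern, not the actual NiFi mechanism.

```python
import sqlite3

# Illustrative table mirroring the `useraccount` example, with the
# auto-increment id column acting as the incremental-ingest key.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE useraccount (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
)
conn.executemany(
    "INSERT INTO useraccount (name) VALUES (?)", [("alice",), ("bob",)]
)

def fetch_new_rows(conn, last_seen_id):
    """Return rows inserted since the last run plus the new high-water mark."""
    rows = conn.execute(
        "SELECT id, name FROM useraccount WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()
    new_max = rows[-1][0] if rows else last_seen_id
    return rows, new_max

# First run picks up everything; later runs see only the delta.
rows, max_id = fetch_new_rows(conn, 0)
conn.execute("INSERT INTO useraccount (name) VALUES ('carol')")
new_rows, max_id = fetch_new_rows(conn, max_id)
```

After the second call, only the newly inserted row is returned, so each ingest cycle moves just the new data into the data lake.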
For Apache NiFi, you will need connections to all your sources and sinks, so I need a DBCPConnectionPool for Apache Phoenix and for MySQL, as well as a HiveConnectionPool for Hive.
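As a rough sketch, the Phoenix DBCPConnectionPool controller service takes settings along these lines; the ZooKeeper host, znode path, and jar location are placeholders that depend on your cluster:

```
Database Connection URL     : jdbc:phoenix:your-zk-host:2181:/hbase-unsecure
Database Driver Class Name  : org.apache.phoenix.jdbc.PhoenixDriver
Database Driver Location(s) : /usr/hdp/current/phoenix-client/phoenix-client.jar
Database User               : (as required by your cluster security)
```

The MySQL pool is configured the same way with a `jdbc:mysql://...` URL and the MySQL JDBC driver, and the HiveConnectionPool takes the corresponding Hive JDBC URL.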
RDBMS (I am using MySQL)
HDF 2.0 (NiFi 1.0.0+)
HDP 2.4+ (I am using HDP 2.5) with HBase and Phoenix enabled and running, and HDFS, YARN, and Hive running.
Optional: Apache Zeppelin for quick data analysis and validation
To build a SQL database, I needed a source of interesting and plentiful data.
So I used the excellent free API: https://api.randomuser.me/. It's easy to get this URL to return up to 5,000 formatted JSON results via the extra parameters: ?results=5000&format=pretty.
This API returns JSON in a nested format that requires some basic transformation (easily done in NiFi).
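The basic transformation amounts to flattening the nested JSON into flat rows, the kind of reshaping NiFi handles with processors such as EvaluateJsonPath or JoltTransformJSON. The sketch below shows the idea in Python against an abbreviated, illustrative sample of the randomuser.me payload (the real response carries many more fields):

```python
import csv
import io
import json

# Abbreviated, illustrative randomuser.me-style payload: results is a
# list of records with nested objects like "name".
sample = json.loads("""
{"results": [
  {"gender": "female",
   "name": {"title": "ms", "first": "jane", "last": "doe"},
   "email": "jane.doe@example.com"},
  {"gender": "male",
   "name": {"title": "mr", "first": "john", "last": "roe"},
   "email": "john.roe@example.com"}
]}
""")

def flatten(record):
    """Pull the nested name fields up into a single flat row."""
    return {
        "first": record["name"]["first"],
        "last": record["name"]["last"],
        "gender": record["gender"],
        "email": record["email"],
    }

rows = [flatten(r) for r in sample["results"]]

# Emit CSV, ready to load into a SQL table or convert onward.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["first", "last", "gender", "email"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()
```

In the NiFi flow itself this flattening happens inside the processors, so no external script is needed; the sketch just makes the shape of the transformation concrete.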