
Populating Apache Phoenix HBase Tables and Apache Hive Tables from RDBMS in real-time with streaming from Apache NiFi


INGESTING RDBMS DATA
I previously posted an article on ingesting and converting data (https://community.hortonworks.com/articles/64069/converting-a-large-json-file-into-csv.html). Once you have a SQL database loaded, you will eventually need to store your data in your one unified data lake, and this is quite simple with NiFi. If you have a specialized tool that reads your RDBMS logs and sends the changes to Kafka or JMS, that would be easy to ingest as well; for those wishing to stay open source, NiFi works great. If you don't have a good increasing key to use, you can add an artificial one that increments on every insert. Almost every database supports this, from MariaDB to Oracle.
  ALTER TABLE `useraccount` ADD COLUMN `id` INT AUTO_INCREMENT UNIQUE FIRST;
For mine, I just added an auto-increment id column to act as my incrementing key.
For Apache NiFi, you will need connections to all your sources and sinks. So I need a DB Connection Pool for Apache Phoenix and MySQL (DBCPConnectionPool) as well as Hive (HiveConnectionPool).
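As a sketch, the three controller services typically look something like this (hostnames, ports, database names, and credentials below are placeholders, so adjust them for your cluster):
  DBCPConnectionPool (MySQL):
    Database Connection URL: jdbc:mysql://mysql-host:3306/yourdb
    Database Driver Class Name: com.mysql.jdbc.Driver
    (plus the path to the MySQL JDBC driver JAR and your credentials)
  DBCPConnectionPool (Phoenix):
    Database Connection URL: jdbc:phoenix:zookeeper-host:2181:/hbase-unsecure
    Database Driver Class Name: org.apache.phoenix.jdbc.PhoenixDriver
    (plus the path to the Phoenix client JAR)
  HiveConnectionPool:
    Database Connection URL: jdbc:hive2://hive-host:10000/default
    Hive Configuration Resources: /etc/hive/conf/hive-site.xml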
Tools Required:
  • RDBMS (I am using MySQL)
  • HDF 2.0 (NiFi 1.0.0+)
  • HDP 2.4+ (I am using HDP 2.5) with HBase and Phoenix enabled, and HDFS, YARN and Hive running.
  • Optional: Apache Zeppelin for quick data analysis and validation
To build a SQL database, I needed a source of interesting and plentiful data.
So I used the excellent free API: https://api.randomuser.me/. It's easy to get this URL to return 5,000 formatted JSON results via the extra parameters: ?results=5000&format=pretty.
This API returns JSON in this format, which requires some basic transformation (easily done in NiFi):
  1. {"results":[
  2. {"gender":"male",
  3. "name":{"title":"monsieur","first":"lohan","last":"marchand"},
  4. "location":{"street":"6684 rue jean-baldassini","city":"auboranges","state":"schwyz","postcode":9591},
  5. "email":"lohan.marchand@example.com",
  6. "login":{"username":"biggoose202","password":"esther","salt":"QIU1HBsr","md5":"9e60da6d4490cd6d102e8010ac98f283","sha1":"3de3ea419da1afe5c83518f8b46f157895266d17","sha256":"c6750c1a5bd18cac01c63d9e58a57d75520861733666ddb7ea6e767a7460479b"},
  7. "dob":"1965-01-28 03:56:58",
  8. "registered":"2014-07-26 11:06:46",
  9. "phone":"(849)-890-5523",
  10. "cell":"(395)-127-9369",
  11. "id":{"name":"AVS","value":"756.OUVK.GFAB.51"},
  12. "picture":{"large":"https://randomuser.me/api/portraits/men/69.jpg","medium":"https://randomuser.me/api/portraits/med/men/69.jpg","thumbnail":"https://randomuser.me/api/portraits/thumb/men/69.jpg"},"nat":"CH"}]
Then I created a MySQL table to populate with JSON data.
  drop table useraccount;
  create table useraccount(
  gender varchar(200),
  title varchar(200),
  first varchar(200),
  last varchar(200),
  street varchar(200),
  city varchar(200),
  state varchar(200),
  postcode varchar(200),
  email varchar(200),
  username varchar(200),
  password varchar(200),
  salt varchar(200),
  md5 varchar(200),
  sha1 varchar(200),
  sha256 varchar(200),
  dob varchar(200),
  registered varchar(200),
  phone varchar(200),
  cell varchar(200),
  name varchar(200),
  value varchar(200),
  large varchar(200),
  medium varchar(200),
  thumbnail varchar(200),
  nat varchar(200));
I created a Phoenix table on top of HBase to hold the data; Phoenix requires a primary key, so the md5 column serves as one:
  create table useraccount(
  gender varchar,
  title varchar,
  firstname varchar,
  lastname varchar,
  street varchar,
  city varchar,
  state varchar,
  postcode varchar,
  email varchar,
  username varchar,
  password varchar,
  salt varchar,
  md5 varchar not null primary key,
  sha1 varchar,
  sha256 varchar,
  dob varchar,
  registered varchar,
  phone varchar,
  cell varchar,
  name varchar,
  value2 varchar,
  large varchar,
  medium varchar,
  thumbnail varchar,
  nat varchar);
Step 1: QueryDatabaseTable
This reads from the MySQL table. The processor just needs the MySQL connection pool, the table name (useraccount), and the maximum-value column (id).
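As a sketch, the key properties (per NiFi 1.0.0's QueryDatabaseTable) are:
  Database Connection Pooling Service: (your MySQL DBCPConnectionPool)
  Table Name: useraccount
  Maximum-value Columns: id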
We have two forks from this query.
Fork 1
Step 2: ConvertAvroToJSON
Set the JSON container options property to array.
You will get an array of JSON records, each looking like this:
  {
    "id" : 656949,
    "gender" : "female",
    "title" : "madame",
    "first" : "amandine",
    "last" : "sanchez",
    "street" : "8604 place paul-duquaire",
    "city" : "savigny",
    "state" : "genève",
    "postcode" : "5909",
    "email" : "amandine.sanchez@example.com",
    "username" : "ticklishmeercat183",
    "password" : "hillary",
    "salt" : "Sgq7HHP1",
    "md5" : "d82d6c3524f3a1118399113e6c43ed31",
    "sha1" : "23ce2b372f94d39fb949d95e81e82bece1e06a4a",
    "sha256" : "49d7e92a2815df1d5fd991ce9ebbbcdffee4e0e7fe398bc32f0331894cae1154",
    "dob" : "1983-05-22 15:16:49",
    "registered" : "2011-02-06 22:03:37",
    "phone" : "(518)-683-8709",
    "cell" : "(816)-306-5232",
    "name" : "AVS",
    "value" : "756.IYWK.GJBH.35",
    "large" : "https://randomuser.me/api/portraits/women/50.jpg",
    "medium" : "https://randomuser.me/api/portraits/med/women/50.jpg",
    "thumbnail" : "https://randomuser.me/api/portraits/thumb/women/50.jpg",
    "nat" : "CH"
  }
Step 3: SplitJson
Use the JsonPath expression $.* to split the array into individual JSON records.
Step 4: EvaluateJsonPath
You need to pull out each attribute you want and name it; for example, add a property named cell with the value $.cell.
See the guide to JSONPath (with a testing tool) here.
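Concretely, that means setting Destination to flowfile-attribute and adding one user-defined property per field needed downstream, for example:
  gender: $.gender
  first: $.first
  last: $.last
  md5: $.md5
  cell: $.cell
  (and so on for the other fields referenced in the upsert below)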
Step 5: ReplaceText
Here we format the SQL from the attributes we just parsed from JSON:
  upsert into useraccount (gender,title,firstname,lastname,street,city,state,postcode,email,
  username,password,salt,md5,sha1,sha256,dob,registered,phone,cell,name,value2,large,medium,thumbnail,nat)
  values ('${'gender'}','${'title'}','${'first'}','${'last'}','${'street'}','${'city'}','${'state'}','${'postcode'}',
  '${'email'}','${'username'}','${'password'}','${'salt'}','${'md5'}','${'sha1'}','${'sha256'}','${'dob'}',
  '${'registered'}','${'phone'}','${'cell'}','${'name'}','${'value'}','${'large'}','${'medium'}','${'thumbnail'}','${'nat'}' )
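I am assuming the same ReplaceText settings here as in Fork 2 below, so the generated upsert replaces the entire flow file content:
  Search Value: (?s:^.*$)
  Replacement Value: (the upsert statement above)
  Replacement Strategy: Always Replace
  Evaluation Mode: Entire text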
Step 6: PutSQL
With an example Batch Size of 100, we connect to our Phoenix DB Connection Pool.
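Once a few batches have run, a quick sanity check against Phoenix (for example from sqlline.py or a Zeppelin JDBC interpreter) is something like:
  select count(*) from useraccount;
  select firstname, lastname, city, nat from useraccount limit 10;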
Fork 2
Step 2: UpdateAttribute
We set orc.table to useraccount
Step 3: ConvertAvroToORC
We set our configuration file for Hive (/etc/hive/conf/hive-site.xml), a 64 MB stripe size, and, importantly, the Hive Table Name to ${orc.table}. This processor also adds a hive.ddl attribute to each flow file, which we use in Step 5 to build the Hive table.
Step 4: PutHDFS
Set your configuration (/etc/hadoop/conf/core-site.xml) and a directory you have write access to for storing the ORC files.
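For example (the directory below is the one shown in the HDFS listing further down; any path your NiFi user can write to will do):
  Hadoop Configuration Resources: /etc/hadoop/conf/core-site.xml
  Directory: /orcdata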
Step 5: ReplaceText
Search Value: (?s:^.*$)
Replacement Value: ${hive.ddl} LOCATION '${absolute.hdfs.path}'
Set Replacement Strategy to Always Replace and Evaluation Mode to Entire text.
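The hive.ddl attribute generated by ConvertAvroToORC is a CREATE EXTERNAL TABLE statement derived from the Avro schema, so after this ReplaceText the flow file content ends up as HiveQL roughly like this (column list abbreviated; exact names and types come from your schema, and the location is whatever absolute.hdfs.path resolves to):
  create external table if not exists useraccount
  (id int, gender string, title string, ..., nat string)
  stored as orc
  location '/orcdata'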
Step 6: PutHiveQL
You just need to connect it to your HiveConnectionPool.
You will see the resulting ORC files in your HDFS directory:
  [root@tspanndev12 demo]# hdfs dfs -ls /orcdata
  Found 2 items
  -rw-r--r-- 3 root hdfs 246806 2016-10-29 01:24 /orcdata/2795061363634412.orc
  -rw-r--r-- 3 root hdfs 246829 2016-10-29 17:25 /orcdata/2852682816977877.orc
After my first few batches of data are ingested, I check them in Apache Zeppelin. Looks good.
The data has also been loaded into Apache Hive.
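A quick validation query against the Hive table, for example in Zeppelin or beeline:
  select gender, count(*) as users from useraccount group by gender;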
