
Unboxing the Most Amazing Edge AI Device 

Fast, Intuitive, Powerful and Easy.
Part 1 of 3
NVIDIA Jetson Xavier NX


This is the first in a series of articles on using the Jetson Xavier NX Developer Kit for Edge AI applications, including running TensorFlow, PyTorch, MXNet, and other frameworks.  I will also show how to use this amazing device with Apache projects, including the FLaNK Stack of Apache Flink, Apache Kafka, Apache NiFi, Apache MXNet, and Apache NiFi - MiNiFi.

Fast, intuitive, powerful, easy: these are not words one would usually use to describe AI, Deep Learning, IoT, or edge devices.  They are now.  There is a new device that turns what was incredibly slow and difficult into something you can easily get your hands on and develop with.  Running multiple models simultaneously in containers at fast frame rates is not something I thought you could affordably do in robots and IoT devices.  Now you can, and this will drive some amazingly smart robots, drones, self-driving machines, and applications that are not yet even in prototypes.

Out of the box, this machine is sleek, lightweight, and ready to go.  And now with built-in fast WiFi, yet another great upgrade!  I added a 256GB SSD, which took seconds and a few quick Linux commands.  It runs Ubuntu 18.04 LTS, which supports all the deep learning and Python libraries you need and runs well.  It has a powerful fan already attached, and judging by how fast it spun while I was running benchmarks, it probably needs it.


It was super easy to get working: I just plugged in a USB mouse, a keyboard, and an HDMI monitor.

I ran the benchmarks and was massively impressed with the FPS this machine can process.  It has some serious power.  Basically, this device that you are going to locate at the edge, in a robot, drone, car, or other edge point, could be your desktop machine.
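
Before you benchmark, it's worth confirming the GPU is visible.  Here is a minimal sanity check, assuming you installed a CUDA-enabled PyTorch build (such as NVIDIA's Jetson wheels):

import torch

# Confirm the Volta GPU is visible to CUDA before running any benchmarks.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))        # the Xavier NX's integrated GPU
    x = torch.zeros(1024, 1024, device="cuda")  # small allocation as a smoke test
    print(x.sum().item())
else:
    print("CUDA not available - check your PyTorch build")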

I ran a few graphics demos and tests to validate everything once my keyboard, mouse, and HDMI monitor were connected.  The capabilities are awesome, and I can see why NVIDIA GPUs are prized for gaming.


The specifications for this edge device are very impressive.  The 8GB of RAM makes it feel like a powerful desktop, not a low-powered edge device.




I ran the benchmarks and they were smoking fast.  I can see using this as a workstation, as the FPS numbers below show.




In part 2, I am going to show how to run some edge AI workloads at tremendous speed and stream the results and images to your cloud or big data environment using open source Apache frameworks, including Apache Flink, Apache NiFi - MiNiFi, and Apache Kafka.

In part 3, we will push the processing capabilities, amp up the workloads, and test all the impressive features of this killer new edge device.

There are so many great tutorials and learning materials available for the NVIDIA Jetson Xavier NX.  I have found that everything I built for the Jetson Nano works here, only faster.  So this is great; I'll have a few interesting demos, walkthroughs, and a video in the follow-up articles.

I added a standard USB hub and a Logitech C270 USB web camera, which worked perfectly.  I will use them in the follow-up articles and in some edge applications.
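
As a quick test of the camera, here is a minimal sketch, assuming OpenCV (opencv-python) is installed and the C270 enumerates as video device 0:

import cv2

# Grab a single frame from the USB webcam and save it to disk.
cap = cv2.VideoCapture(0)            # the C270 typically shows up as /dev/video0
ret, frame = cap.read()
if ret:
    cv2.imwrite("c270_test.jpg", frame)
    print("Captured", frame.shape)   # (height, width, channels)
else:
    print("Could not read from the camera")
cap.release()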


I highly recommend that all AI, Deep Learning, IoT, IIoT, edge, and streaming developers obtain one or more of these developer kits.

This is a powerful machine in a small box.  From edge applications to robotics to smart devices, if it needs powerful processing at the edge, this is your device.  A fast CPU, a fast GPU, and all the interfaces you need; it should be part of any project.  Joining my NVIDIA Jetson Nano, it gives you some great, affordable options for Edge AI applications.  It is amazing to test drive the performance of this device.  I will also be showing it at my online meetups, so join me or watch the video on YouTube later.

===

Jetson Xavier NX Developer Kit features:
 
  • Power: 10W (max efficiency) | 15W (max performance)
  • GPU: NVIDIA Volta architecture with 384 NVIDIA CUDA® cores and 48 Tensor Cores
  • CPU: 6-core NVIDIA Carmel ARM®v8.2 64-bit CPU, 6MB L2 + 4MB L3
  • Accelerators: 2x NVDLA engines
  • Memory: 8GB 128-bit LPDDR4x @ 51.2GB/s
  • Video encode: 2x 4K @ 30 | 6x 1080p @ 60 | 14x 1080p @ 30 (H.265/H.264)
  • Video decode: 2x 4K @ 60!  Lower the resolution and the stream counts scale up.
  • Connectivity: Gigabit Ethernet, M.2 Key E (WiFi/BT included), M.2 Key M (NVMe)
  • Display: HDMI
  • USB: 4x USB 3.1, USB 2.0 Micro-B

The Jetson Xavier NX Developer Kit is now available for $399 US at NVIDIA.com and from channel partners worldwide.  I recommend acquiring one ASAP, before current supplies wane and you have to wait.




Discord Integration with Apache NiFi




For Slack, messaging is easy and built in, but sometimes, as I was reminded, you may want to send messages elsewhere.  Thanks to Brian Stitt for the starting information.

I created a free Discord server in a few seconds, added a webhook in the channel settings, and was ready to send messages.


You then copy the webhook URL and you are ready to call it.  It's a long HTTPS link; just paste it into the InvokeHTTP processor.

In our flow, we need to create an SSL Context for NiFi.  If you are using NiFi in a CDP DataHub, there's one already configured for you!

The output will not have a body, but there will be cool headers if you want to save them or use them for processing.  Otherwise, wrap this in a process group and you have an alert system of your own.


Before we call InvokeHTTP, we need to set the username of the bot.


Your message must be encoded as JSON like so:

{
"content":  "This is my message"
}

Let's call the hook with our data; every field is important.

HTTP Method: POST
Remote URL: https://discord.com ... our copied webhook link
SSL Context Service: Create or use a StandardSSLContextService
Date Header: False
Follow Redirects: True
Use Digest Authentication: False
Always Output Response: True
Add Response Headers to Request: False
Content-Type: application/json
Send Message Body: True
Use Chunked Encoding: False
Ignore response's content: False
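
If you want to sanity-check the webhook outside NiFi first, here is a minimal Python sketch; the URL is a placeholder for your copied webhook link, and the requests library is assumed:

import requests

# Placeholder: paste the webhook URL you copied from Discord.
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

payload = {
    "username": "Spidey Bot",          # optional display-name override for the bot
    "content": "This is my message",
}

resp = requests.post(WEBHOOK_URL, json=payload, timeout=10)
print(resp.status_code)                # Discord returns 204 No Content on success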




Here are the results:

We can see the output of the call to the Discord webhook HTTPS REST endpoint:




We can see our bot, Spidey Bot, has posted some exciting test messages from NiFi.



No More Spaghetti Flows





You may have heard of spaghetti code: https://en.wikipedia.org/wiki/Spaghetti_code.  For Apache NiFi, I have seen the equivalent (and have built some myself in the past); I call them Spaghetti Flows.


Let's avoid them.  When you first build a flow, it often meanders, with lots of extra steps, extra UpdateAttribute processors, and random routes.  This applies whether you are running on-premises, in CDP, or in other stateful NiFi clusters (or single nodes).  The following video from Mark Payne is a must-watch before you write any NiFi flows.


Apache NiFi Anti-Patterns with Mark Payne


https://www.youtube.com/watch?v=RjWstt7nRVY

https://www.youtube.com/watch?v=v1CoQk730qs

https://www.youtube.com/watch?v=JbUjYr6Kd3I

https://github.com/tspannhw/EverythingApacheNiFi 



Do Not:

  • Do not put 1,000 flows on one workspace.

  • If your flow has hundreds of steps, this is a flow smell.  Investigate why.

  • Do not use ExecuteProcess, ExecuteScript, or a lot of Groovy scripts as a default; look for existing processors first.

  • Do not use random custom processors you find that have no documentation or are unknown.

  • Do not forget to upgrade; if you are running anything before Apache NiFi 1.10, upgrade now!

  • Do not run on the default 512MB of RAM.

  • Do not run one node and think you have a highly available cluster.

  • Do not split a file with millions of records to individual records in one shot without checking available space/memory and back pressure.

  • Use Split processors only as an absolute last resort. Many processors are designed to work on FlowFiles that contain many records or many lines of text. Keeping the FlowFiles together instead of splitting them apart can often yield performance that is improved by 1-2 orders of magnitude.


Do:

  • Reduce, Reuse, Recycle.    Use Parameters to reuse common modules.

  • Put flows, reusable chunks (write to Slack, Database, Kafka) into separate Process Groups.

  • Write custom processors if you need new or specialized features.

  • Use Cloudera-supported NiFi processors.

  • Use Record processors everywhere.

  • Read the Docs!

  • Use the NiFi Registry for version control.

  • Use NiFi CLI and DevOps for Migrations.

  • Run a CDP NiFi Datahub or CFM managed 3 or more node cluster.

  • Walk through your flow and make sure you understand every step and that it's easy to read and follow.  Is every processor used?  Are there dead ends?

  • Do run ZooKeeper on different nodes from Apache NiFi.

  • For cloud-hosted Apache NiFi, go with "high CPU" instances, such as 8 cores and 7GB of RAM.

  • Avoid deploying the same flow 'templatized' many, many times with different parameters in the same instance.

  • Using routing based on content and attributes to let one flow handle multiple nearly identical feeds is better than deploying the same flow many times with tweaked parameters in the same cluster.

  • Use the correct driver for your database.  There are usually a couple of different JDBC drivers.

  • Make sure you match your Hive version to the NiFi processor for it.  There are processors out there for both Hive 1 and Hive 3!  HiveStreaming needs Hive 3 with ACID and ORC.  https://community.cloudera.com/t5/Support-Questions/how-to-use-puthivestreaming/td-p/108430


Let's revisit some Best Practices:


https://medium.com/@abdelkrim.hadjidj/best-practices-for-using-apache-nifi-in-real-world-projects-3-takeaways-1fe6912101db


Get your copy of Apache NiFi for Dummies, my own NiFi 101.


There are a few things you should have read and tried before building your first Apache NiFi flow.

Also, when in doubt, use Records!  Use Record processors and pre-defined schemas; this will be easier to develop, cleaner, and more performant.  Easier, faster, better!


There are Record processors for logs (Grok), JSON, Avro, XML, CSV, Parquet, and more.


Look for a processor that has “Record” in the name like PutDatabaseRecord or QueryRecord.


Use the best DevOps processes, testing and tools.

Some newer features in 1.8, 1.9, 1.10, and 1.11 are well worth adopting.


Spaghetti is for eating, not for real-time data streams.   Let's keep it that way.


If you are not sure what to do, check out the Cloudera Community, the NiFi Slack, or the NiFi docs.  I may also have a helpful article here.  Join me and my NiFi friends at virtual meetups for more in-depth NiFi, Flink, Kafka, and more.  We keep it interactive, so feel free to ask questions.


Note:   In this picture I am in Italy doing spaghetti research.


Commonly Used TCP/IP Ports in Streaming

Cloudera CDF and HDF Ports
NiFi and Friends
FLaNK Extended Stack


Note: 

All of these ports can be changed by administrators or in version updates.  Also, if you are running Apache Knox, as in Cloudera Data Platform Public Cloud, these ports may be changed or hidden.  This list is based on the version of CDF I am running and its defaults.  It does not include the standard Cloudera ports for Cloudera Manager, Hadoop, Atlas, Ranger, and other necessary and fun services.


Cloudera Flow Management (CFM Powered by Apache NiFi)
  • Cloudera NiFi HTTP:    8080 or 9090
  • Cloudera NiFi HTTPS:  8443 or 9443
  • Cloudera NiFi RIP Socket: 10443 or 50999
  • Cloudera NiFi Node Protocol: 11443
  • Cloudera NiFi Load Balancing:  6342
  • Cloudera NiFi Registry: 18080
  • Cloudera NiFi Registry SSL: 18433
  • Cloudera NiFi Certificate Authority:  10443

Cloudera Edge Flow Management (CEM Powered by Apache NiFi - MiNiFi)

  • Cloudera EFM HTTP:  10080
  • Cloudera EFM CoAP:  8989

Cloudera Stream Processing (CSP Powered by Apache Kafka)
  • Cloudera Kafka: 9092
  • Cloudera Kafka SSL:  9093
  • Cloudera Kafka Connect:  38083
  • Cloudera Kafka Connect SSL:  38085
  • Cloudera Kafka Jetty Metrics: 38084
  • Cloudera Kafka JMX: 9393
  • Cloudera Kafka MirrorMaker JMX: 9394
  • Cloudera Kafka HTTP Metric: 24042
  • Cloudera Schema Registry: 7788
  • Cloudera Schema Registry Admin: 7789
  • Cloudera Schema Registry SSL:  7790
  • Cloudera Schema Registry Admin SSL:  7791
  • Cloudera Schema Registry Database (Postgresql):  5432
  • Cloudera SRM:  6669
  • Cloudera RPC: 8081
  • Cloudera SRM Rest: 6670
  • Cloudera SRM Rest SSL:  6671
  • Cloudera SMM Rest / UI: 9991
  • Cloudera SMM Manager:  8585
  • Cloudera SMM Manager SSL:  8587
  • Cloudera SMM Manager Admin:  8586
  • Cloudera SMM Manager Admin SSL: 8588
  • Cloudera SMM Service Monitor:  9997
  • Cloudera SMM Kafka Connect:  38083
  • Cloudera SMM Database (Postgresql):  5432

Cloudera Streaming Analytics (CSA Powered by Apache Flink)
  • Cloudera Flink Dashboard:  8082
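
To quickly verify that one of these ports is reachable from another machine, here is a small Python check; the host names are placeholders for your own nodes:

import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("kafka-broker-1", 9092))   # Kafka plaintext
print(port_open("nifi-node-1", 8443))      # NiFi HTTPS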



References



Cloudera Edge Management 1.1.0 Release

Let's Query Kafka with Hive



I can hop into beeline and build an external Hive table to access my Cloudera CDF Kafka cluster, whether it is in the public cloud in CDP DataHub, on-premises in HDF or CDF, or in CDP-DC.

I just have to set my KafkaStorageHandler, my Kafka topic name, and my bootstrap servers (usually port 9092).  Then I can use that table for ETL/ELT, populating Hive tables from Kafka topics or Kafka topics from Hive tables.  This is a nice way to do quick and easy data engineering.

This is a good way to augment CDP Data Engineering with Spark, CDP DataHub with NiFi, CDP DataHub with Kafka and Kafka Streams, and the various Sqoop or Python utilities you may have in your environment.

For real-time continuous queries on Kafka with SQL, you can use Flink SQL.  https://www.datainmotion.dev/2020/05/flank-low-code-streaming-populating.html
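
As a hedged sketch of what that looks like, here is a PyFlink (Flink 1.11+ style) continuous query over a Kafka topic; the topic, broker, and field names are placeholders, not taken from this article:

from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming TableEnvironment for continuous SQL over Kafka.
t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

# Placeholder topic/broker/fields; requires the flink-sql-connector-kafka jar on the classpath.
t_env.execute_sql("""
  CREATE TABLE sensors (
    `uuid` STRING,
    `temperaturef` STRING
  ) WITH (
    'connector' = 'kafka',
    'topic' = '<TopicName>',
    'properties.bootstrap.servers' = '<ServerName>:9092',
    'format' = 'json',
    'scan.startup.mode' = 'earliest-offset'
  )
""")

# The result updates continuously as new records land on the topic.
t_env.execute_sql("SELECT `uuid`, `temperaturef` FROM sensors").print()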



Example Table Create

CREATE EXTERNAL TABLE <tableName>
  (`uuid` STRING, `systemtime` STRING, `temperaturef` STRING, `pressure` DOUBLE,
   `humidity` DOUBLE, `lux` DOUBLE, `proximity` INT, `oxidising` DOUBLE,
   `reducing` DOUBLE, `nh3` DOUBLE, `gasko` STRING, `current` INT, `voltage` INT,
   `power` INT, `total` INT, `fanstatus` STRING)
  STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
  TBLPROPERTIES
  ("kafka.topic" = "<TopicName>",
   "kafka.bootstrap.servers" = "<ServerName>:9092");

show tables;

describe extended kafka_table;

select *
from kafka_table;
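
If you would rather query the table from Python than from beeline, here is a minimal sketch assuming the PyHive package and a reachable HiveServer2; the host, user, and column choices are placeholders:

from pyhive import hive

# Placeholder connection details for your HiveServer2 instance.
conn = hive.Connection(host="hiveserver2-host", port=10000, username="user")

cursor = conn.cursor()
cursor.execute("SELECT `uuid`, `temperaturef`, `humidity` FROM kafka_table LIMIT 10")
for row in cursor.fetchall():
    print(row)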

I can browse my Kafka topics with Cloudera SMM to see what the data is and decide whether I want or need to load it.



For more information, take a look at Cloudera's documentation on integrating Hive and Kafka: