New Release for HDF 3.5.2 and Cloudera DataFlow for Data Hub 7.2.6 (Public Cloud)


There are a number of major updates across the various Cloudera Flow Management releases.

HDF 3.5.2


This is the final release of HDF.  HDF 3.5.2 includes the following components:
  • Apache Ambari 2.7.5
  • Apache Kafka 2.3.1
  • Apache NiFi 1.12.1
  • NiFi Registry 0.8.0
  • Apache Ranger 1.2.0
  • Apache Storm 1.2.1
  • Apache ZooKeeper 3.4.6
  • Apache MiNiFi Java Agent 0.6.0
  • Apache MiNiFi C++ 0.6.0
  • Hortonworks Schema Registry 0.8.1
  • Hortonworks Streaming Analytics Manager 0.6.0
  • Apache Knox 1.0.0
  • SmartSense 1.5.0

Major Updates

  • Apache NiFi updated to 1.12.1, plus fixes and improvements
  • Apache NiFi Registry updated to 0.8.0, plus fixes and improvements
  • System Level Monitoring History
  • ScriptedTransformRecord processor
  • ListenFTP 
  • Support for a record writer in the ListX processors
  • Support for Kafka 2.6
  • ADLS Gen2 processors
  • Flow File Concurrency at Process Group Level
  • Hazelcast implementation for the distributed map cache server
  • Support for version 2 and 3 of the schema encoding with the Schema Registry

Public Cloud Release

CFM 2.0.6 is now running in Cloudera Data Platform Public Cloud version 7.2.6. This version adds Technical Preview support for GCS.


Streams Replication Manager (SRM) is now available in Public Cloud.

SRM can now be provisioned in CDP Public Cloud with Data Hub. The default Streams Messaging cluster definitions have been updated to include SRM, and SRM can now be deployed in high availability mode.

Cluster Layout

https://docs.cloudera.com/cdf-datahub/7.2.6/planning-your-streams-messaging-deployment/topics/cdf-datahub-sm-cluster-layout.html



CDP Data Center Site-to-Site to CDP Public Cloud

https://docs.cloudera.com/cdf-datahub/7.2.6/site-to-site/topics/cdf-datahub-site-to-site.html


Google Cloud Tech Preview For NiFi Data Hub


Cloudera Data Platform - Using Apache NiFi REST API in the Public Cloud

You can grab the endpoints from the Data Hub Endpoints tab.


Example NiFi REST API Calls to a CDP CDF Public Cloud NiFi Data Hub


These types of REST calls will prompt you for your username and password.

Here are some examples (a scripted sketch follows the list):

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/access

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/controller/cluster

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/controller/registry-clients

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/system-diagnostics

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/site-to-site

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/resources

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/tenants/users

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/flow/status

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/flow/process-groups/root/controller-services

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/flow/process-groups/root/status

https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi.cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api/flow/process-groups/root
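
Below is a minimal Python sketch of the same calls using basic auth. The base URL is the workshop endpoint from the examples above, and the credentials are placeholders for your own CDP workload user and password:

# Hypothetical sketch: call a CDP Public Cloud NiFi Data Hub REST endpoint
# through the Knox proxy using HTTP basic auth (CDP workload credentials).
import requests

BASE = ("https://ace-ww-workshop-nifikafka-master0.ace-aw-w.ylcu-atmi"
        ".cloudera.site/ace-ww-workshop-nifikafka/cdp-proxy-api/nifi-app/nifi-api")

auth = ("my-workload-user", "my-workload-password")  # placeholders

# Overall flow status: active threads, queued flow files, bulletins.
resp = requests.get(f"{BASE}/flow/status", auth=auth)
resp.raise_for_status()
print(resp.json())

# System diagnostics: heap usage, repository storage, and load.
resp = requests.get(f"{BASE}/system-diagnostics", auth=auth)
resp.raise_for_status()
print(resp.json().get("systemDiagnostics", {}).get("aggregateSnapshot", {}))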



How to Connect With NiFi CLI
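
The NiFi Toolkit also ships a command-line interface (cli.sh) that can target the same Knox-proxied URL. A rough sketch, assuming the toolkit is downloaded locally; the host and cluster name are placeholders, and authentication details (truststore/keystore or session properties) depend on your environment:

# Show the user the CLI is connecting as.
./bin/cli.sh nifi current-user -u https://<datahub-host>/<cluster-name>/cdp-proxy-api/nifi-app/nifi-api

# Summarize the cluster and list top-level process groups.
./bin/cli.sh nifi cluster-summary -u https://<datahub-host>/<cluster-name>/cdp-proxy-api/nifi-app/nifi-api
./bin/cli.sh nifi pg-list -u https://<datahub-host>/<cluster-name>/cdp-proxy-api/nifi-app/nifi-api

# Connection settings can also be stored in a properties file and passed with -p.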






Basic Understanding of Cloudera Flow Management - Apache NiFi


Topics:

  • NiFi Cluster Architecture
  • Content Repository
  • EncryptedContentRepository and other options
  • Provenance Repository
  • FlowFile Repository
  • FlowFile, Attributes, Process Groups, Connections, Flow Controllers
  • Controller Services
  • Custom Properties
  • Common Attributes (uuid, filename, path, file size, ...)
  • Expression Language
  • Flow Routing
  • Testing and Test Data Generation
  • Relationships
  • Ports
  • Bulletins
  • flow.xml.gz
  • Input Port
  • Output Port
  • Empty Queues
  • Setting Warning Levels
  • Funnels
  • Copy on Write
  • RecordPath 
  • Using Record Processors (Readers/Writers)
  • NiFi Toolkit
  • NiFi CLI
  • NiFi REST API
  • NiFi Registry Integration
  • Handling Errors
  • Parameter Context / Parameters
  • Summary / Cluster / Bulletins
  • Reporting Tasks
  • Sizing a NiFi cluster based on number of records × record size over a period of time
  • Configuration Files (Changing RAM)
  • Understanding NiFi logs
  • How to add custom processors
  • JVM 
  • Back pressure
  • Prioritized Queues
  • Load Balancing
  • Load Balancing Strategies
  • Prioritization
  • Monitoring a Flow
  • Using Search
  • Using Documentation
  • Classloader
  • Site-to-Site Communication / Remote Process Groups
  • Extensions
  • Scheduling
  • Tailing Files
  • Reading sFTP/FTP Files
  • Wait and Notify
  • RetryFlowFile Pattern
  • NiFi Calcite SQL 
  • Using Jolt
  • Using JsonPath
  • Using Kerberos
  • Using SSL
  • Making REST Calls
  • Receiving REST Calls
  • Working with Websockets
  • Working with TCP/IP, UDP, Sockets
  • Working with Files, Logs, Syslog
  • Producing and Consuming Kafka
  • Working with HDFS
  • Reading/Writing Hive
  • Reading/Writing Impala/Kudu
  • Reading/Writing HBase
  • Integration with Ranger
  • Integration with Knox
  • Integration with Atlas
  • LookupRecord
  • Working with Caches
  • Restarting Flows
  • Pass by Reference
  • Working with XML
  • Working with JSON
  • Working with AVRO
  • Working with Schema Registry
  • Using Regular Expressions




Must read:


Building SSL For Hosting Mobile Web Sites on Apache NiFi



# Generate a self-signed certificate and private key for the admin user.
openssl req -x509 -newkey rsa:2048 -keyout admin-private-key.pem -out admin-cert.pem -days 365 -subj "/CN=Admin Q. User/C=US/L=Seattle" -nodes

# Bundle the key and certificate into a PKCS12 file for browser import.
openssl pkcs12 -inkey admin-private-key.pem -in admin-cert.pem -export -out admin-q-user.pfx -passout pass:"SuperSecret"

# Generate the NiFi server keystore with a self-signed key pair.
keytool -genkeypair -alias nifiserver -keyalg RSA -keypass SuperSecret -storepass SuperSecret -keystore server_keystore.jks -dname "CN=Test NiFi Server" -noprompt

# Import the admin certificate into the server truststore.
keytool -importcert -v -trustcacerts -alias admin -file admin-cert.pem -keystore server_truststore.jks -storepass SuperSecret -noprompt


# Then import admin-q-user.pfx into your browser's SSL / personal certificate store.
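
For reference, these stores map to the standard security entries in nifi.properties. The paths below are illustrative and assume the generated files were copied into NiFi's conf directory:

nifi.security.keystore=./conf/server_keystore.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=SuperSecret
nifi.security.keyPasswd=SuperSecret
nifi.security.truststore=./conf/server_truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=SuperSecret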





[FLaNK] Smart Weather Websocket Application - Kafka Consumer

Part 2 of 2



This is based on Koji Kawamura's excellent GIST: 

https://gist.github.com/ijokarumawak/60b9ab2038ef906731ebf4c0eee97176

As part of my Smart Weather Application, I wanted to display weather information in a webpage as it arrives, using WebSockets. Koji has an excellent NiFi flow that does this. I tweaked it and added a few things since I am not using Zeppelin; I am hosting my webpage with NiFi as well.

https://www.datainmotion.dev/2020/11/flank-smart-weather-applications-with.html

We simply supply a webpage that makes a WebSocket connection to NiFi, and NiFi keeps a cache in HBase to track what each client is doing. This cache is updated by consuming from Kafka, so we can feed events to the page as they happen.





Here is the JavaScript for the web page interface to WebSockets:

<script>
  // Send a typed message over the WebSocket as JSON.
  function sendMessage(type, payload) {
    websocket.send(makeMessage(type, payload));
  }

  function makeMessage(type, payload) {
    return JSON.stringify({
      'type': type,
      'payload': payload
    });
  }

  // NiFi ListenWebSocket endpoint hosted by the flow.
  var wsUri = "ws://edge2ai-1.dim.local:9091/test";

  var websocket = new WebSocket(wsUri);

  websocket.onopen = function(evt) {
    // Register interest by publishing the value of the input field
    // (note: .value, so we send the text rather than the DOM element).
    sendMessage('publish', {
      "message": document.getElementById("kafkamessage").value
    });
  };

  websocket.onerror = function(evt) { console.log('ERR', evt); };

  // Render each batch of weather data points as it arrives from NiFi.
  websocket.onmessage = function(evt) {
    var dataPoints = JSON.parse(evt.data);
    var output = document.getElementById("results");
    var dataBuffer = "<p>";
    for (var i = 0; i < dataPoints.length; i++) {
      dataBuffer += " <img src=\"" + dataPoints[i].icon_url_base +
        dataPoints[i].icon_url_name + "\"> &nbsp;" + dataPoints[i].location +
        dataPoints[i].station_id + "@" + dataPoints[i].latitude + ":" +
        dataPoints[i].longitude + "@" + dataPoints[i].observation_time +
        dataPoints[i].temperature_string + "," + dataPoints[i].relative_humidity + "," +
        dataPoints[i].wind_string + "<br>";
    }
    output.innerHTML = output.innerHTML + dataBuffer + "</p><br>";
  };
</script>




Video Walkthrough:   https://www.twitch.tv/videos/797412192?es_id=bbacb7cb39

Source Code:   https://github.com/tspannhw/SmartWeather/tree/main



Kafka Topic

weathernj Schema

The Schema Registry has a live Swagger interface to its REST API.
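
Since it is plain REST, you can script against the registry as well. A minimal sketch in Python, assuming the stock /api/v1/schemaregistry endpoints and a placeholder registry host and port:

import requests

# Placeholder: point this at your Schema Registry host and port.
REGISTRY = "http://edge2ai-1.dim.local:7788/api/v1/schemaregistry"

# List the names of all registered schemas.
schemas = requests.get(f"{REGISTRY}/schemas").json()
print([entity["schemaMetadata"]["name"] for entity in schemas.get("entities", [])])

# Fetch the latest registered version of the weathernj schema.
latest = requests.get(f"{REGISTRY}/schemas/weathernj/versions/latest").json()
print(latest.get("schemaText"))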




NiFi Flow Overview


Ingest Via REST All US Weather Data from Zipped XML



As Data Streams In, We Can Govern It





Ingested Data Is Validated Against Its Schema, Then Pushed to Kafka as Avro


We consume that Kafka data and store it in Kudu for analytics.



We host a web page for our WebSockets application in NiFi with four simple processors.



Listen for and Put WebSocket Messages Between the NiFi Server and the Web Application



Kafka Data is Cached for Websocket Applications


Set the Port for WebSockets via Jetty Web Server


Use HBase As Our Cache




We can monitor our Flink SQL application from the Global Flink Dashboard






We can query our weather data stored in Apache Kudu via Apache Impala through Apache Hue.




Kudu Visualizations of Our Weather Data in Cloudera Visual Applications

[FLaNK] Smart Weather Applications with Flink SQL


Sometimes you want to acquire, route, transform, live query, and analyze all the weather data in the United States as those reports happen. With FLaNK, it's a trivial process.





From Kafka to Kudu for Any Schema of Any Type of Data, No Code, Two Steps


The Schema Registry has fully Swagger-ized, runnable REST API documentation, making integration, DevOps, and migration possible in a simple script.


Here are your schemas: upload, edit, and compare.


Validating Data Against a Schema With Your Approved Level of Tolerance. If you want extra fields allowed, you've got it.


Feed that data to beautiful visual applications running in Cloudera Machine Learning.

If you like drill-down maps, you've got them.


Query your data fast with Apache Hue against Apache Kudu tables through Apache Impala.



Let's ingest all the US weather stations, even though they arrive as a zipped directory full of XML files.



Weather Ingest is Easy Automagically


View All Your Topic Data Enabled by Schema Registry Even in Avro Format




Reference:

https://www.datainmotion.dev/2020/07/ingesting-all-weather-data-with-apache.html


Source:

Build

https://github.com/tspannhw/ApacheConAtHome2020/blob/main/scripts/setup.sh

Query

https://github.com/tspannhw/ApacheConAtHome2020/blob/main/scripts/flink.sh


SQL

INSERT INTO weathernj
SELECT `location`, station_id, latitude, longitude, observation_time, weather,
       temperature_string, temp_f, temp_c, relative_humidity, wind_string, wind_dir,
       wind_degrees, wind_mph, wind_kt, pressure_in, dewpoint_string, dewpoint_f, dewpoint_c
FROM weather
WHERE `location` IS NOT NULL
  AND `location` <> 'null'
  AND TRIM(`location`) <> ''
  AND `location` LIKE '%NJ';
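
One way to try a statement like this interactively is the Flink SQL client (the flink.sh script linked below wraps a similar setup). A hedged sketch, assuming a standard Flink installation:

# Launch the interactive Flink SQL client, then paste the INSERT INTO
# statement above; the job runs continuously and shows up on the
# Flink Dashboard.
$FLINK_HOME/bin/sql-client.sh embedded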

Kafka Insert

https://github.com/tspannhw/ApacheConAtHome2020/blob/main/flinksql/weathernj.sql

Schemas

https://github.com/tspannhw/ApacheConAtHome2020/blob/main/schemas/weathernj.avsc

https://github.com/tspannhw/ApacheConAtHome2020/blob/main/schemas/weather.avsc

Example Slack Output

=========================================================
Icon: http://forecast.weather.gov/images/wtf/small/ovc.png
Location: Cincinnati/Northern Kentucky International Airport, KY
Station: KCVG
Temperature: 49.0 F (9.4 C)
Humidity: 83
Wind: East at 3.5 MPH (3 KT)
Conditions: Overcast
Dewpoint: 44.1 F (6.7 C)
Observed at: Tue, 27 Oct 2020 11:52:00 -0400
---- tracking info ----
UUID: 2cb6bd67-148c-497d-badf-dfffb4906b89
Kafka offset: 0
Kafka Timestamp: 1603818351260
=========================================================

[FLaNK] Streaming EdgeAI on the new NVIDIA Jetson Nano 2GB with MiNiFi Agents To FLaNK Applications

Plug Into Community AI Apps:  https://youtu.be/2T8CG7lDkcU

I am not patient enough to shoot an unboxing video; I was too excited to get this superb machine running. The NVIDIA Jetson Nano 2GB is now available for purchase for only $59!


The 2GB version of the NVIDIA Jetson Nano is great; you really don't miss anything that was removed. I have copied over my MiNiFi agent and code from other Jetson Nanos, a Xavier NX, and a TX1, and it all works fine. The speed is fine for most needs, especially for development and prototyping. I prefer the Xavier, but at this price you can't go wrong. I am definitely going to be getting Jetson Nanos instead of other devices for most IoT / Edge AI use cases. I have used my NVIDIA Jetson Nano 2GB for demos at a number of events, including ApacheCon, Beam Summit, Open Source Summit, and AI Dev World.


I installed fswebcam to capture still images and build up a directory of them to process.
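
A typical capture loop looks something like this; the resolution, interval, and output path are illustrative:

# Grab a 1280x720 JPEG from the default camera every 10 seconds,
# timestamping each filename so MiNiFi can pick the images up later.
fswebcam -r 1280x720 --jpeg 85 -D 2 --loop 10 --save /opt/demo/images/cam_%Y%m%d%H%M%S.jpg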


You must install and run: https://github.com/dusty-nv/jetson-inference. You get great libraries, tutorials, documentation, and examples. I usually build my apps starting from one of these examples and use one of the excellent NVIDIA pre-built models. This rapidly accelerates my development and deployment of Edge AI applications, whether they are for IoT or other purposes. This works with standard Raspberry Pi plug-in cameras and the excellent Logitech USB web cameras that I have used with all my other NVIDIA devices.

At this price point, there seems no reason that every developer in every company shouldn't have one. It's a great place to test out Edge AI applications and run classifications at a decent speed. This is a real machine despite the low price.

I was facilitating data journeys at the NetHope Global Summit today, and I thought these $59 devices could be great for non-profits to use for many data collection and analytics purposes in the field: https://www.nethopeglobalsummit.org/agenda-2020#sz-tab-44134. I am exploring some use cases to see if I can pre-build some easy applications that an NGO could just pick up and run with. Let's see what develops. A $59 GPU edge device enables some new applications at an affordable cost. $59 won't get me a lot of cloud, but it buys a powerful small data collection device that runs ML, DL, cameras, MiNiFi agents, Python, and Java. With 2 gigabytes of fast RAM and a GPU, one is limited only by imagination.


Example Application



Example Output Data:

{"uuid": "nano_uuid_cmq_20201026202757", "ipaddress": "192.168.1.169", "networktime": 47.7275505065918, "detectleft": 1.96746826171875, "detectconfidence": 52.86550521850586, "cputemp": "34.0", "gputemp": "30.0", "gputempf": "86", "cputempf": "93", "runtime": "169", "host": "nano5", "filename": "/opt/demo/images/out_iue_20201026202757.jpg", "host_name": "nano5", "macaddress": "00:e0:4c:49:d8:b7", "end": "1603744246.924455", "te": "169.4200084209442", "systemtime": "10/26/2020 16:30:46", "cpu": 9.9, "diskusage": "37100.4 MB", "memory": 91.5, "id": "20201026202757_64d69a82-88d8-45f8-be06-1b836cb6cc84"}


Below is some example output from running a Python script to classify images from a web camera (a low-end Logitech webcam, though you could use a Raspberry Pi camera). We would be best served by running this continuously, outputting log messages and images for MiNiFi agents to scoop up and send to a server for routing, transformation, and processing.


root@nano5:/opt/demo/minifi-jetson-nano# jetson_clocks 
root@nano5:/opt/demo/minifi-jetson-nano# python3 detect.py 
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- found v4l2 device: HD Webcam C615
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HD\ Webcam\ C615", v4l2.device.bus_info=(string)usb-70090000.xusb-3.2, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 30 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction)5/1;
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)1600, height=(int)896, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/2, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 10/1, 15/2, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [6] video/x-raw, format=(string)YUY2, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [7] video/x-raw, format=(string)YUY2, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [8] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [9] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [10] video/x-raw, format=(string)YUY2, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [11] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [12] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [13] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [14] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [15] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [16] image/jpeg, width=(int)1600, height=(int)896, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [17] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [18] image/jpeg, width=(int)960, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [19] image/jpeg, width=(int)1024, height=(int)576, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [20] image/jpeg, width=(int)800, height=(int)600, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [21] image/jpeg, width=(int)864, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [22] image/jpeg, width=(int)800, height=(int)448, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [23] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [24] image/jpeg, width=(int)640, height=(int)360, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [25] image/jpeg, width=(int)432, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [26] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [27] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [28] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] [29] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 24/1, 20/1, 15/1, 10/1, 15/2, 5/1 };
[gstreamer] gstCamera -- selected device profile:  codec=mjpeg format=unknown width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! image/jpeg, width=(int)1280, height=(int)720 ! jpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> jpegdec0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> jpegdec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> jpegdec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstCamera recieve caps:  video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera -- recieved first frame, codec=mjpeg format=i420 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (14745600 bytes each, 58982400 bytes total)
jetson.inference -- detectNet loading build-in network 'ssd-mobilenet-v2'

detectNet -- loading detection network model from:
          -- model        networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
          -- input_blob   'Input'
          -- output_blob  'NMS'
          -- output_count 'NMS_1'
          -- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 7.1.3
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - UFF  (extension '.uff')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT]    loading network plan from engine cache... /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT]    device GPU, loaded /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    Deserialize required 2384046 microseconds.
[TRT]    
[TRT]    CUDA engine context initialized on device GPU:
[TRT]       -- layers       117
[TRT]       -- maxBatchSize 1
[TRT]       -- workspace    0
[TRT]       -- deviceMemory 35449344
[TRT]       -- bindings     3
[TRT]       binding 0
                -- index   0
                -- name    'Input'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  3
                -- dim #0  3 (SPATIAL)
                -- dim #1  300 (SPATIAL)
                -- dim #2  300 (SPATIAL)
[TRT]       binding 1
                -- index   1
                -- name    'NMS'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  100 (SPATIAL)
                -- dim #2  7 (SPATIAL)
[TRT]       binding 2
                -- index   2
                -- name    'NMS_1'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  1 (SPATIAL)
                -- dim #2  1 (SPATIAL)
[TRT]    
[TRT]    binding to input 0 Input  binding index:  0
[TRT]    binding to input 0 Input  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT]    binding to output 0 NMS  binding index:  1
[TRT]    binding to output 0 NMS  dims (b=1 c=1 h=100 w=7) size=2800
[TRT]    binding to output 1 NMS_1  binding index:  2
[TRT]    binding to output 1 NMS_1  dims (b=1 c=1 h=1 w=1) size=4
[TRT]    
[TRT]    device GPU, /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT]    W = 7  H = 100  C = 1
[TRT]    detectNet -- maximum bounding boxes:  100
[TRT]    detectNet -- loaded 91 class info entries
[TRT]    detectNet -- number of object classes:  91
detected 0 objects in image

[TRT]    ------------------------------------------------
[TRT]    Timing Report /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.07802ms  CUDA   0.48875ms
[TRT]    Network       CPU  45.52254ms  CUDA  44.93750ms
[TRT]    Post-Process  CPU   0.03193ms  CUDA   0.03177ms
[TRT]    Total         CPU  45.63248ms  CUDA  45.45802ms
[TRT]    ------------------------------------------------

[TRT]    note -- when processing a single image, run 'sudo jetson_clocks' before
                to disable DVFS for more accurate profiling/timing measurements

[image] saved '/opt/demo/images/out_kfy_20201030195943.jpg'  (1280x720, 4 channels)

[TRT]    ------------------------------------------------
[TRT]    Timing Report /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.07802ms  CUDA   0.48875ms
[TRT]    Network       CPU  45.52254ms  CUDA  44.93750ms
[TRT]    Post-Process  CPU   0.03193ms  CUDA   0.03177ms
[TRT]    Total         CPU  45.63248ms  CUDA  45.45802ms
[TRT]    ------------------------------------------------

[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
[gstreamer] gstCamera -- pipeline stopped

We are using the enhanced example script, detect.py. To capture a web camera image and classify it: camera = jetson.utils.gstCamera(width, height, camera)

This is plenty fast and gives us the results and data we want.
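
The core pattern is only a few lines. A trimmed sketch based on the jetson-inference Python API of that era; the model name, camera device, and output path here are assumptions, not the exact script:

import jetson.inference
import jetson.utils

# Load the pre-built SSD-Mobilenet-v2 detector; TensorRT builds and
# caches the engine on first run, as seen in the log above.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Open the USB webcam at 1280x720, matching the gstreamer log above.
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")

# Capture a frame, detect objects, and save the annotated image.
img, width, height = camera.CaptureRGBA()
detections = net.Detect(img, width, height)
print("detected {:d} objects in image".format(len(detections)))
jetson.utils.saveImageRGBA("/opt/demo/images/out.jpg", img, width, height)

In a full agent setup, a JSON status record like the one shown earlier would be emitted after each detection for MiNiFi to forward.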





References: