
FLaNK Stack Weekly for 12 September 2023

12-September-2023

FLiPN-FLaNK Stack Weekly

Tim Spann @PaaSDev

https://www.threads.net/@tspannhw

https://medium.com/@tspann/subscribe

Get your new Apache NiFi for Dummies!

https://www.cloudera.com/campaign/apache-nifi-for-dummies.html

https://ossinsight.io/analyze/tspannhw

Always remember September 11

Josh Long Megatip

Download @graalvm Java 21 release, start a @SpringBoot 3.2+ and @GraalVM-ready project on http://start.spring.io, configure @Java 21, and set spring.threads.virtual.enabled=true, and then enjoy: Loom and @GraalVM native images!
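As a sketch, the property from the tip goes in the application.properties of a Spring Boot 3.2+ project (the rest of the project setup comes from start.spring.io as described above):

```properties
# Spring Boot 3.2+: run request handling on Loom virtual threads
spring.threads.virtual.enabled=true
```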

CODE + COMMUNITY

Please join my meetup groups in NJ/NYC/Philly/Virtual.

http://www.meetup.com/futureofdata-princeton/

https://www.meetup.com/futureofdata-newyork/

https://www.meetup.com/futureofdata-philadelphia/

This is Issue #102

https://github.com/tspannhw/FLiPStackWeekly

https://www.linkedin.com/pulse/schedule-2023-tim-spann-/

https://www.cloudera.com/solutions/dim-developer.html

My latest talk, covering NiFi, Kafka, Flink and LLMs, will be streamed on September 13th.

https://www.cloudera.com/about/events/cloudera-now-cdp.html

Flink got added to OSS Chat! https://osschat.io/chat?project=Flink

Releases

NiFi 1.23.2

MiNiFi C++ Agent 1.15 https://nifi.apache.org/minifi/download.html

Recent Talk

https://www.slideshare.net/bunkertor/aidevday-datainmotion-to-supercharge-ai

Articles

https://medium.com/@tspann/streaming-llm-with-apache-nifi-huggingface-ad2f0d367468

https://blog.cloudera.com/how-to-ensure-supply-chain-security-for-ai-applications/

https://www.cloudera.com/about/news-and-blogs/press-releases/09-07-23-cloudera-signs-strategic-collaboration-agreement-with-aws.html

https://www.infoq.com/news/2023/09/java-21-so-far/

https://blog.cloudera.com/how-financial-services-and-insurance-streamline-ai-initiatives-with-a-hybrid-data-platform/

https://medium.com/cloudera-inc/building-an-effective-nifi-flow-b5aa1b816380

https://medium.com/@nifi.notes/building-an-effective-nifi-flow-queryrecord-cca5ba51afd5

https://medium.com/cloudera-inc/building-an-effective-nifi-flow-partitionrecord-b342a8efc50c

https://medium.com/cloudera-inc/building-an-effective-nifi-flow-routetext-5068a3b4efb3

Videos

AICamp — AI Dev Day — NYC 2023 — August 23, 2023 — NiFi + LLM

Events

September 13, 2023: Cloudera Now https://www.cloudera.com/about/events/cloudera-now-cdp.html?internal_keyplay=ALL&internal_campaign=FY24-Q3_AMER_Cloudera_Now_WEB_H10&cid=701Hr0000025VuVIAU&internal_link=h10

September 14, 2023: SkillUpSeries: Enable a Streaming Change Data Capture (CDC) Solution. Virtual. https://attend.cloudera.com/skillupseriesseptember14

Sept 21, 2023: Sao Paulo, Brazil. Evolve https://br.cloudera.com/about/events/evolve/sao-paulo.html

October 7–10, 2023: Halifax, CA. Community over Code. https://communityovercode.org/

October 8, 2023: Streaming Track, Room 102 https://communityovercode.org/schedule/#Oct8 https://communityovercode.org/schedule-list/#SG007 https://communityovercode.org/schedule-list/#SG011

October 10, 2023: Internet of Things Track, Room 109 https://communityovercode.org/schedule/#Oct10 https://communityovercode.org/schedule-list/#IOT001

October 18, 2023: 2-Hours to Data Innovation: Data Flow https://www.cloudera.com/about/events/hands-on-lab-series-2-hours-to-data-innovation.html

November 1, 2023: Open Source Finance Forum. Virtual. https://events.linuxfoundation.org/open-source-finance-forum-new-york/

November 1, 2023 7PM EST: AI Dev World. Hybrid https://aidevworld.com/conference/

November 2, 2023: Evolve. NYC https://www.cloudera.com/about/events/evolve/new-york.html#register

November 7, 2023: XtremeJ 2023. Virtual. https://xtremej.dev/2023/schedule/

November 8, 2023: Flink Forward, Seattle. https://www.flink-forward.org/seattle-2023

November 21, 2023: JCon World. Virtual. https://sched.co/1RRWm

November 22, 2023: Big Data Conference. Hybrid
https://bigdataconference.eu/ https://events.pinetool.ai/3079/#sessions/101077

Cloudera Events https://www.cloudera.com/about/events.html

More Events: https://www.linkedin.com/pulse/schedule-2023-tim-spann-/

Code

Tools

© 2020–2023 Tim Spann

Did the user really ask for Exactly Once? Fault Tolerance

Exactly Once Requirements

Exactly once is very tricky and can cause performance degradation; if your user can live with at-least-once, always go with that.    Having data sinks like Kudu, where you can do an upsert, makes exactly once less necessary.
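Why does an upsert-capable sink reduce the need for exactly once? A toy Python sketch (names and data are illustrative, not a real Kudu API): at-least-once delivery may replay a record, but upserting by primary key is idempotent, so the replay is harmless.

```python
# Toy sketch: at-least-once delivery plus an idempotent (upsert) sink
# ends up in the same final state as exactly-once delivery.
# All names and records here are made up for illustration.

def upsert_sink(table: dict, records):
    """Apply records keyed by primary key; a replay simply overwrites
    the row with the same value, so duplicates collapse."""
    for rec in records:
        table[rec["id"]] = rec["value"]
    return table

def append_sink(table: list, records):
    """A non-idempotent sink: a replay creates a duplicate row."""
    table.extend(records)
    return table

# At-least-once delivery: record 2 is delivered twice after a retry.
delivered = [{"id": 1, "value": "a"},
             {"id": 2, "value": "b"},
             {"id": 2, "value": "b"}]

upserted = upsert_sink({}, delivered)   # duplicates collapse to one row each
appended = append_sink([], delivered)   # the duplicate survives as a third row
```

With the upsert sink, retries cost nothing but a rewrite of the same row, which is why a Kudu-style sink often makes at-least-once good enough.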

https://docs.cloudera.com/csa/1.2.0/datastream-connectors/topics/csa-kafka.html

Apache Flink, Apache NiFi Stateless and Apache Kafka can participate in that.

For CDF Stream Processing and Analytics with Apache Flink 1.10 Streaming:

Both Kafka sources and sinks can be used with exactly once processing guarantees when checkpointing is enabled.


End-to-End Guaranteed Exactly-Once Record Delivery

The data source and data sink need to support exactly-once state semantics and take part in checkpointing.


Data Sources
  • Apache Kafka - must have Exactly-Once selected, transactions enabled and the correct driver.

Select:  Semantic.EXACTLY_ONCE


Data Sinks
  • HDFS BucketingSink
  • Apache Kafka
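The mechanics behind a sink that "takes part in checkpointing" can be sketched in a few lines of toy Python: records are buffered in a pending transaction and only become visible when a checkpoint completes. This is illustrative only; real Flink sinks implement the pattern via TwoPhaseCommitSinkFunction, with a separate pre-commit and commit phase.

```python
# Toy sketch of a checkpoint-coordinated (two-phase-commit style) sink,
# the kind of sink end-to-end exactly-once delivery requires.
class TransactionalSink:
    def __init__(self):
        self.committed = []   # visible to downstream readers
        self.pending = []     # current open (uncommitted) transaction

    def write(self, record):
        self.pending.append(record)

    def on_checkpoint_complete(self):
        # Pre-commit/commit collapsed into one step for brevity:
        # the open transaction becomes visible atomically.
        self.committed.extend(self.pending)
        self.pending = []

    def on_failure(self):
        # Recovery replays from the last checkpoint, so the open
        # transaction must be aborted, not committed.
        self.pending = []

sink = TransactionalSink()
sink.write("a"); sink.write("b")
sink.on_checkpoint_complete()   # "a" and "b" become visible exactly once
sink.write("c")
sink.on_failure()               # crash before the next checkpoint: "c" aborted
sink.write("c")                 # "c" is replayed after recovery
sink.on_checkpoint_complete()   # committed stream is "a", "b", "c" - no duplicates
```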




The Rise of the Mega Edge (FLaNK)

At one point edge devices were cheap, low energy and low powered.   They might have old WiFi and a single-core CPU running pretty slow.    Now real power, memory, GPUs and custom processors have come to the edge.

Sitting on my desk is the NVIDIA Jetson Xavier NX, a massively powerful machine that can easily be used for edge computing while sporting 8 GB of fast RAM, a GPU with 384 NVIDIA CUDA® cores and 48 Tensor cores, and a 6-core 64-bit ARM CPU.   This edge device would make a great workstation and is now something that can be affordably deployed in trucks, plants, sensors and other Edge and IoT applications.


Next to that titan device is the inexpensive hobby device, the Raspberry Pi 4, which now sports 8 GB of LPDDR4 RAM and a 4-core 64-bit ARM CPU, and it is speedy!   It can also be augmented with a Google Coral TPU or an Intel Neural Compute Stick 2.


These boxes come with fast networking, Bluetooth and modern hardware in small edge devices that can now be deployed en masse, enabling edge computing, fast data capture, smart processing and integration with servers and cloud services.    By adding Apache NiFi's subproject MiNiFi C++ and Java agents, we can easily integrate these powerful devices into a streaming data pipeline.   We can now build very powerful flows from edge to cloud with Apache NiFi, Apache Flink, Apache Kafka (FLaNK) and Apache NiFi - MiNiFi.    I can run AI, Deep Learning and Machine Learning, including Apache MXNet, DJL, H2O, TensorFlow, Apache OpenNLP and more, at any and all parts of my data pipeline.   I can push models to my edge device, which now has a powerful GPU/TPU and adequate CPU, networking and RAM to do more than simple classification.    The NVIDIA Jetson Xavier NX will run multiple real-time inference streams at 60 fps on multiple cameras.

I can run live SQL against these events at every segment of the data pipeline and combine with machine learning, alert checks and flow programming.   It's now easy to build and deploy applications from edge to cloud.

I'll be posting some examples in my next article showing some simple examples.

By next year, 12 or 16 GB of RAM may be common for edge devices, perhaps with 2 CPUs of 8 cores each, multiple GPUs and large, fast SSD storage.   My edge swarm may carry much of my computing power, while my flows run elastically on public and private clouds, scaling up and down based on demand in real time.


Explore Enterprise Apache Flink with Cloudera Streaming Analytics - CSA 1.2



What's New in Cloudera Streaming Analytics


Try out the tutorials now:   https://github.com/cloudera/flink-tutorials

So let's get our Apache Flink on, as part of my FLaNK Stack series I'll show you some fun things we can do with Apache Flink + Apache Kafka + Apache NiFi.

We will look at some of the updates in Apache Flink 1.10, including the SQL Client and API.

We are working with Apache Flink 1.10, Apache NiFi 1.11.4 and Apache Kafka 2.4.1.

The SQL features are strong and we will take a look at what we can do.


Table connectors
  • Kafka
  • Kudu
  • Hive (through catalog)

Data formats (Kafka)
  • JSON
  • Avro
  • CSV
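Putting the connectors and formats together, a Kafka-backed table with JSON records might be declared like this in the Flink 1.10-era SQL DDL (the topic, fields and broker address below are made up for illustration):

```sql
-- Hypothetical Kafka-backed table, Flink 1.10-style connector.*/format.* properties
CREATE TABLE sensor_events (
  sensor_id STRING,
  temperature DOUBLE,
  event_time TIMESTAMP(3)
) WITH (
  'connector.type' = 'kafka',
  'connector.version' = 'universal',
  'connector.topic' = 'sensor-events',
  'connector.properties.bootstrap.servers' = 'broker:9092',
  'format.type' = 'json'
);
```

Once declared, the table can be queried with ordinary SQL from the SQL Client or the Table API.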



Building a DataStream Application in Flink

Build A Flink Project

mvn archetype:generate                               \
      -DarchetypeGroupId=org.apache.flink              \
      -DarchetypeArtifactId=flink-quickstart-java      \
      -DarchetypeVersion=1.10.0


No More Spaghetti Flows

Spaghetti Flows




You may have heard of spaghetti code:   https://en.wikipedia.org/wiki/Spaghetti_code.   For Apache NiFi, I have seen some equivalents (and have built some of them in the past); I call them Spaghetti Flows.


Let's avoid them.   When you are first building a flow, it often meanders, with lots of extra steps, extra UpdateAttributes and random routes. This applies whether you are running on-premises, in CDP or in other stateful NiFi clusters (or single nodes). The following video from Mark Payne is a must-watch before you write any NiFi flows.


Apache NiFi Anti-Patterns with Mark Payne


https://www.youtube.com/watch?v=RjWstt7nRVY

https://www.youtube.com/watch?v=v1CoQk730qs

https://www.youtube.com/watch?v=JbUjYr6Kd3I

https://github.com/tspannhw/EverythingApacheNiFi 



Do Not:

  • Do not put 1,000 flows on one workspace.

  • If your flow has hundreds of steps, this is a Flow Smell.   Investigate why.

  • Do not use ExecuteProcess, ExecuteScript or a lot of Groovy scripts by default; look for existing processors first.

  • Do not use random custom processors you find that have no documentation or are unknown.

  • Do not forget to upgrade; if you are running anything before Apache NiFi 1.10, upgrade now!

  • Do not run on the default 512 MB of RAM.

  • Do not run one node and think you have a highly available cluster.

  • Do not split a file with millions of records to individual records in one shot without checking available space/memory and back pressure.

  • Use Split processors only as an absolute last resort. Many processors are designed to work on FlowFiles that contain many records or many lines of text. Keeping the FlowFiles together instead of splitting them apart can often yield performance that is improved by 1-2 orders of magnitude.


Do:

  • Reduce, Reuse, Recycle.    Use Parameters to reuse common modules.

  • Put flows, reusable chunks (write to Slack, Database, Kafka) into separate Process Groups.

  • Write custom processors if you need new or specialized features

  • Use Cloudera supported NiFi Processors

  • Use RecordProcessors everywhere

  • Read the Docs!

  • Use the NiFi Registry for version control.

  • Use NiFi CLI and DevOps for Migrations.

  • Run a CDP NiFi Data Hub or a CFM-managed cluster of 3 or more nodes.

  • Walk through your flow and make sure you understand every step and it’s easy to read and follow.   Is every processor used?   Are there dead ends?

  • Do run Zookeeper on different nodes from Apache NiFi.

  • For cloud-hosted Apache NiFi, go with "high CPU" instances, such as 8 cores and 7 GB of RAM.

  • Avoid 'templatizing' the same flow and deploying it many, many times with different parameters in the same instance.

  • Using routing based on content and attributes, so one flow handles multiple nearly identical cases, is better than deploying the same flow many times with tweaked parameters in the same cluster.

  • Use the correct driver for your database.   There's usually a couple different JDBC drivers.

  • Make sure you match your Hive version to the NiFi processor for it.   There are ones out there for Hive 1 and Hive 3!   HiveStreaming needs Hive3 with ACID, ORC.  https://community.cloudera.com/t5/Support-Questions/how-to-use-puthivestreaming/td-p/108430


Let's revisit some Best Practices:


https://medium.com/@abdelkrim.hadjidj/best-practices-for-using-apache-nifi-in-real-world-projects-3-takeaways-1fe6912101db


Get your Apache NiFi for Dummies.   My own NiFi 101.


Here are a few things you should have read and tried before building your first Apache NiFi flow:

Also, when in doubt, use Records!  Use Record processors and pre-defined schemas; this will be easier to develop, cleaner and more performant. Easier, Faster, Better!!!


There are record processors for Logs (Grok), JSON, AVRO, XML, CSV, Parquet and more.


Look for a processor that has “Record” in the name like PutDatabaseRecord or QueryRecord.
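For example, QueryRecord routes and transforms records with SQL: you add a dynamic property whose name becomes an outgoing relationship and whose value is a query run against the FlowFile's records (the field name below is made up for illustration):

```sql
-- QueryRecord dynamic property named "large_orders" (hypothetical field)
SELECT * FROM FLOWFILE WHERE amount > 100
```

Matching records are written to the "large_orders" relationship as a single FlowFile, with no splitting into individual records.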


Use the best DevOps processes, testing and tools.

Some newer features in 1.8, 1.9, 1.10, 1.11 that you need to use.

Advanced Articles:

Spaghetti is for eating, not for real-time data streams.   Let's keep it that way.


If you are not sure what to do, check out the Cloudera Community, NiFi Slack or the NiFi docs.   Also, I may have a helpful article here. Join me and my NiFi friends at virtual meetups for more in-depth NiFi, Flink, Kafka and more. We keep it interactive, so feel free to ask questions.


Note:   In this picture I am in Italy doing spaghetti research.