
Introduction to Apache Kafka

Michael Mühlbeyer

Introduction to Apache Kafka

Overview
Architecture
Kafka Connect
Kafka Streams
KSQL
Cluster Installation



Apache Kafka - Overview

Kafka was designed to solve two problems
– Simplifying data pipelines
– Handling streaming data

Created by LinkedIn in 2010

Apache open source project since 2012

https://www.quora.com/What-is-the-relation-between-Kafka-the-writer-and-Apache-Kafka-the-distributed-messaging-system



Apache Kafka - Overview

Kafka decouples data sources and destination systems
– Publish/subscribe architecture

All data sources write their data to the Kafka cluster

All systems wishing to use the data read from Kafka



Architecture



Apache Kafka - Architecture

Producers send data to the cluster
Consumers read data
Brokers are the main component for storage and messaging

[Diagram: Producers write to the Kafka Cluster – Broker 1, Broker 2 and Broker 3, coordinated by a Zookeeper Ensemble – and Consumers read from it]



Apache Kafka - Architecture

The basic unit in Kafka is a message
Producers write messages to Brokers
Consumers read messages from Brokers
A message is a key-value pair



Apache Kafka - Architecture

http://kafka.apache.org/documentation.html#messageformat



Apache Kafka - Architecture

Kafka maintains streams of messages in Topics
– Logical representation
– Categorize messages into groups

The developer decides which Topics exist (see the example below)
– Topics are auto-created on first use

No limit on the number of Topics

One or more Producers can write to the same Topic
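
Topics can also be created explicitly instead of relying on auto-creation. A minimal sketch with the Java AdminClient – the broker address, topic name, partition count and replication factor are illustrative values, not taken from the slides:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Topic "my_topic" with 2 partitions, replication factor 3 (needs at least 3 brokers)
            NewTopic topic = new NewTopic("my_topic", 2, (short) 3);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}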



Apache Kafka - Architecture
Topics are split into partitions
Partitions are distributed across the Brokers

[Diagram: a Producer writes to a Topic on Broker1 that has Partition 0 and Partition 1, each holding messages 1, 2, 3; Consumers read from the partitions]



Apache Kafka - Architecture

[Diagram: Topic "mytopic" with Partitions P0, P1 and P2 distributed and replicated across Broker1, Broker2 and Broker3; the Producer writes to the topic and Consumers read from the individual partitions]


Apache Kafka - Producers

Producers write data as messages to the Kafka cluster

Can be written in any language

A command line tool exists to send messages to the cluster
– For testing, debugging, etc.

kafka-console-producer --broker-list localhost:9092 --topic test
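
Beyond the console tool, Producers are typically written with the Kafka client libraries. A minimal Java sketch, assuming a broker on localhost:9092 and the test topic from the command above; the key and value are arbitrary strings:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A message is a key-value pair; the key also drives partition assignment
            producer.send(new ProducerRecord<>("test", "my-key", "hello kafka"));
            producer.flush();
        }
    }
}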



Apache Kafka - Producers

Producers use a partitioning strategy to assign each message to a Partition
– Default is a hash of the message key

Partitioning is used for
– Load balancing → share load across brokers
– Semantic partitioning → a user-specified key allows locality-sensitive message processing (see the sketch below)
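
The default hash-of-key strategy can be replaced with a custom Partitioner for semantic partitioning. A hypothetical sketch – the class name, the "premium-" key prefix and the routing rule are invented for illustration; a producer would register it via the partitioner.class property (ProducerConfig.PARTITIONER_CLASS_CONFIG):

import java.util.Map;
import java.util.Objects;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class PremiumFirstPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Locality-sensitive rule: keep all "premium-" keys together on partition 0
        if (numPartitions > 1 && key != null && key.toString().startsWith("premium-")) {
            return 0;
        }
        // Everything else: hash the key across the remaining partitions (load balancing)
        return numPartitions > 1
                ? 1 + Math.floorMod(Objects.hashCode(key), numPartitions - 1)
                : 0;
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}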



Apache Kafka - Brokers
Receive and store the messages sent by Producers
A Kafka cluster typically has 3 or more brokers
– Each can handle hundreds of thousands, or millions, of messages per second



Apache Kafka - Brokers
Messages in a Topic are spread across Partitions in different Brokers

Each Partition is stored on the Broker’s disk as one or more log files
– Not to be confused with log4j log files
Each message in the log is identified by its offset number

Retention policy for messages to manage log file growth
– Configured per Topic (see the sketch below)
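
Retention is a per-topic configuration such as retention.ms. A sketch of changing it at runtime with the Java AdminClient – this assumes a Kafka version new enough to offer incrementalAlterConfigs (2.3+); the topic name and the 7-day value are illustrative:

import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class SetRetentionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my_topic");
            // Keep messages for 7 days, then let the broker delete old log segments
            Collection<AlterConfigOp> ops = Collections.singletonList(new AlterConfigOp(
                    new ConfigEntry("retention.ms", "604800000"), AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Collections.singletonMap(topic, ops)).all().get();
        }
    }
}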



Apache Kafka – Broker

Partitions can be replicated across multiple Brokers


Fault tolerance if one Broker goes down
– Automatically handled by Kafka



Apache Kafka - Consumer
Consumers pull messages from one or more Topics in the cluster
– Consumers retrieve messages as they are written to the topic

The Consumer Offset keeps track of the latest message read
– The Consumer Offset can be changed (if necessary) to reread messages
The Consumer Offset is stored in a special Kafka Topic (__consumer_offsets)

A command-line Consumer tool exists to read messages from the cluster

kafka-console-consumer --zookeeper localhost:2181 --topic test --from-beginning

kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic test
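
The same can be done programmatically. A minimal Java consumer sketch, assuming a reasonably recent client (the poll(Duration) API), a broker on localhost:9092 and the test topic; the group id is an illustrative name – running several instances with the same group id forms a Consumer Group that splits the partitions:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");   // behaves like --from-beginning for a new group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // The offset identifies the message's position within its partition
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}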



Apache Kafka - Consumers



Apache Kafka - Consumers

Different Consumers can read from the same Topic
Multiple Consumers can be combined into a Consumer Group
– Scaling
– Each Consumer reads a subset of the partitions



Apache Kafka – Cluster overview



Apache Kafka - Zookeeper

Zookeeper is a centralized service which can be used by distributed applications
– Open source Apache project
– Distributed synchronisation
– Enables highly reliable distributed coordination
– Maintains configuration information

Typically consists of 3 or 5 servers in a quorum



Apache Kafka - Zookeeper

Kafka Brokers use Zookeeper for important internal features
– Cluster Management
– Failure detection and recovery
– Access Control List storage

[Diagram: the same cluster overview as before – Producers, the Kafka Cluster with Broker 1, Broker 2, Broker 3 and a Zookeeper Ensemble, and Consumers]



Apache Kafka – Replication and Durability

Each Partition can have replicas


Replicas will be placed on different Brokers
Replicas are spread evenly among the Brokers



Apache Kafka – Replication and Durability

Distributed partition leaders
– Ideally spread evenly across the cluster



Apache Kafka – Replication and Durability
Increase the replication factor for higher durability
For auto-created Topics
– default.replication.factor (Default: 1)
– Configuration on each broker (server.properties)
For manually created topics

kafka-topics --create --zookeeper zk_host:2181 --partitions 2 --replication-factor 3 --topic my_topic



Apache Kafka – Replication and Durability
Producers can control the acknowledgement setting (acks)

Value        Latency               Durability  Description
0            no network delay      low         Producer doesn't wait for the leader
1 (default)  1 network roundtrip   medium      Producer waits for the leader; leader sends ack, no wait for followers
all (-1)     2 network roundtrips  high        Producer waits for the leader; leader sends ack when all in-sync replicas have sent their ack
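
In the Java producer the table above maps to the acks configuration property. A minimal sketch of a durability-oriented configuration; the broker address and retry count are illustrative:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableProducerConfig {
    static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");   // leader waits for all in-sync replicas
        props.put(ProducerConfig.RETRIES_CONFIG, 3);    // retry transient send failures
        return new KafkaProducer<>(props);
    }
}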



Kafka Connect



Kafka Connect

Source: https://www.confluent.io/



Kafka Connect
Framework for streaming data between Kafka and other systems
– Open source
Useful for
– Stream an entire SQL database to Kafka
– Stream Kafka topics into HDFS
Kafka Connect has benefits over "do-it-yourself" Producers and Consumers
– Tested Connectors (example configuration below)
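
A connector is defined purely through configuration. A minimal sketch of a standalone source connector using the FileStreamSource connector that ships with Apache Kafka; the file path and topic name are illustrative, and the worker would be started with something like connect-standalone worker.properties file-source.properties:

# file-source.properties – streams lines of a text file into a Kafka topic
name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/input.txt
topic=file-lines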



Kafka Connect



Kafka Connect

Source: https://www.confluent.io/product/connectors/



Kafka Streams



Kafka Streams

Powerful, easy-to-use Java library

Part of open source Apache Kafka
Build your own stream processing applications that are
– Scalable
– Fault-tolerant
– Stateful
– Distributed
– Able to handle late-arriving, out-of-order data



Kafka Streams

Source: https://www.confluent.io/



Kafka Streams – Unix

[Diagram: the Unix pipeline below mapped onto Kafka – input and output are handled by Kafka Connect, the processing steps by Kafka Streams, and Kafka acts as the pipe]

cat < in.txt | grep "apache" | tr a-z A-Z > out.txt
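
The same pipeline expressed as a Kafka Streams topology – a minimal sketch assuming a recent Kafka Streams version (the StreamsBuilder API) and illustrative topic names "in" and "out":

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class GrepUppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "grep-uppercase");      // application / consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> lines = builder.stream("in");                  // cat < in.txt
        lines.filter((key, value) -> value.contains("apache"))                 // grep "apache"
             .mapValues(value -> value.toUpperCase())                          // tr a-z A-Z
             .to("out");                                                       // > out.txt

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}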



Kafka Streams

Source: https://www.confluent.io/



Kafka Streams - Topology

Source: https://www.confluent.io/



Kafka Streams – Partition and Tasks

Source: https://www.confluent.io/



Kafka Streams

Reading data from Kafka

Writing data to Kafka

Source: https://www.confluent.io/



Kafka Streams

Source: https://www.confluent.io/



KSQL



KSQL

Open Source

Enables stream processing with zero coding required

The simplest way to process streams of data in real-time

Powered by Kafka: scalable, distributed, battle-tested

All you need is Kafka



KSQL
Realtime Analytics with Kafka and KSQL

[Diagram: Swingbench generates Orders and Logins in an Oracle database; GoldenGate captures the transaction log and Kafka Connect streams the changes into Kafka; the data is processed with the Kafka Streams API and KSQL and delivered through Kafka Connect to Elasticsearch]

Source: https://www.confluent.io/



KSQL

Declare a Table based on a Topic

ksql> CREATE TABLE CUSTOMERS
        (CUSTOMER_ID INT,
         CUST_FIRST_NAME STRING,
         CUST_LAST_NAME STRING,
         CUSTOMER_CLASS STRING)
      WITH
        (KAFKA_TOPIC='my_ora_stream',
         VALUE_FORMAT='JSON');



KSQL

Query the live stream of data in Kafka

ksql> SELECT CUSTOMER_ID,
             CUST_FIRST_NAME,
             CUST_LAST_NAME,
             CUSTOMER_CLASS
      FROM CUSTOMERS
      LIMIT 3;
75003 | karl | greene | Occasional
75010 | samuel | cook | Prime
75012 | paul | taylor | Occasional



Kafka powered ETL platform

Source: https://www.confluent.io/



Confluent Platform



Confluent Platform
Confluent Platform: Enterprise Streaming based on Apache Kafka®

[Diagram: data sources (Database Changes, Log Events, IoT Data, Web Events, …) flow through the Confluent Platform into real-time applications and analytics targets (Hadoop, Database, Data Warehouse, Custom Apps, CRM, …). The platform layers are:
– Monitoring & Administration: Confluent Control Center | Security
– Operations: Replicator | Auto Data Balancing
– Data Compatibility: Schema Registry
– Development and Connectivity: Clients | Connectors | REST Proxy | KSQL | CLI
– Apache Kafka®: Core | Connect API | Streams API
Components are licensed as Apache Open Source, Confluent Open Source or Confluent Enterprise]
Source: https://www.confluent.io/



Cluster Installation



Cluster installation

Install current Java JDK

sudo yum install java-1.8.0-openjdk -y

Install Confluent – import the package signing key

sudo rpm --import http://packages.confluent.io/rpm/3.3/archive.key



Cluster installation
Configure the Confluent yum repository

sudo vi /etc/yum.repos.d/confluent.repo
[Confluent.dist]
name=Confluent repository (dist)
baseurl=http://packages.confluent.io/rpm/3.3/7
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/3.3/archive.key
enabled=1

[Confluent]
name=Confluent repository
baseurl=http://packages.confluent.io/rpm/3.3
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/3.3/archive.key
enabled=1



Cluster installation

Install the Packages

sudo yum install confluent-platform-oss-2.11

For a lab environment the settings above are fine; no need to change anything.
Start Zookeeper

sudo zookeeper-server-start /etc/kafka/zookeeper.properties



Cluster installation

Start Kafka Broker

sudo kafka-server-start /etc/kafka/server.properties

Cluster is ready to use

sudo kafka-topics --list --zookeeper localhost:2181


_confluent.support.metrics
__consumer_offsets



http://kafka.apache.org/
https://www.confluent.io/
https://www.confluent.io/blog/
https://www.confluent.io/product/ksql/
http://blog.muehlbeyer.net/index.php/apache-kafka-installation-linux/



Michael Mühlbeyer
Senior Consultant
Tel. +49 162 295 96 96
michael.muehlbeyer@trivadis.com

