KAFKAExample2
If you want to know how to put Kafka into action in your Spring Boot
project, stay with this blog for a couple of minutes: this article gives
you the information step by step. Let's start discussing our topic 'How
to work with Apache Kafka in Spring Boot?' and the related concepts.
3) If a message is very large, MOM (a single Message Broker
Software instance) becomes very slow.
4) It does not scale to multiple producers and consumers, because we
can't create multiple MOM instances.
♦ JMS is best suited to smaller-scale applications, i.e. those with a
small number of producers and consumers.
1) The Admin API to manage and inspect topics, brokers, and other
Kafka objects.
2) The Producer API to publish (write) a stream of events to one or
more Kafka topics.
3) The Consumer API to subscribe to (read) one or more topics and
to process the stream of events produced to them.
4) The Kafka Streams API to implement stream processing
applications and microservices. It provides higher-level functions to
process event streams, including transformations, stateful
operations like aggregations and joins, windowing, processing based
on event-time, and more. Input is read from one or more topics in
order to generate output to one or more topics, effectively
transforming the input streams to output streams.
5) The Kafka Connect API to build and run reusable data
import/export connectors that consume (read) or produce (write)
streams of events from and to external systems and applications so
they can integrate with Kafka. For example, a connector to a
relational database like PostgreSQL might capture every change to a
set of tables. However, in practice, you typically don’t need to
implement your own connectors because the Kafka community
already provides hundreds of ready-to-use connectors.
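Beyond the Spring abstractions used later in this article, the Producer API can be driven from plain Java. Below is a minimal sketch of the configuration such a producer needs; the broker address and serializer class names mirror the application.yml shown later, and the actual KafkaProducer calls appear only as comments because they require the kafka-clients library and a running broker.

```java
import java.util.Properties;

public class PlainProducerSketch {

    // Configuration a plain Kafka producer needs; the values mirror the
    // producer section of this article's application.yml.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // With kafka-clients on the classpath and a broker running, these
        // properties would be used roughly like this:
        //   KafkaProducer<String, String> producer =
        //           new KafkaProducer<>(producerProps());
        //   producer.send(new ProducerRecord<>("myKafkaTest", "hello"));
        System.out.println(producerProps().getProperty("bootstrap.servers"));
    }
}
```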
Why is Kafka used?
1) Although Kafka is implemented in Java, it integrates with many
different technologies and ecosystems such as Spark, Scala, Hadoop,
Big Data tooling, etc.
Producer
A Kafka producer acts as a data source that writes, optimizes, and
publishes messages to one or more Kafka topics. Kafka producers
also serialize, compress, and load balance data among brokers
through partitioning.
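The partition-based load balancing mentioned above can be sketched as follows. This is a simplification for illustration only: Kafka's default partitioner hashes the serialized key with murmur2, while this sketch stands in String.hashCode() for it. The point it demonstrates is real, though: the same key always lands on the same partition, while different keys spread across partitions (and therefore across brokers).

```java
public class PartitionSketch {

    // Simplified stand-in for the default partitioner: a keyed record is
    // always routed to the same partition. (Kafka actually hashes the
    // serialized key with murmur2; String.hashCode() is used here only
    // for illustration.)
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 3; // assumed partition count, for illustration
        for (String key : new String[] {"order-1", "order-2", "order-3"}) {
            System.out.println(key + " -> partition "
                    + partitionFor(key, partitions));
        }
    }
}
```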
Topic
A Kafka topic represents a channel through which data is streamed.
Producers publish messages to topics, and consumers read messages
from the topics they subscribe to. Topics organize and structure
messages: a particular type of message is published to a particular
topic. Topics are identified by unique names within a Kafka cluster,
and there is no limit on the number of topics that can be created.
Brokers
Brokers are the software components that run on a node. Many
people in the industry define a Kafka broker as a server running in a
Kafka cluster; in other words, a Kafka cluster consists of a number of
brokers. Typically, multiple brokers form the Kafka cluster to achieve
load balancing, reliable redundancy, and failover. Brokers use Apache
ZooKeeper for the management and coordination of the cluster. Each
broker can handle high volumes of reads and writes, has a unique ID,
and is responsible for partitions of one or more topic logs.
Consumer
Kafka consumers read messages from the topics to which they
subscribe. Consumers belong to a consumer group, and each
consumer within a particular group is responsible for reading a
subset of the partitions of each topic it is subscribed to.
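The way a group shares partitions can be sketched with a simple round-robin assignment. Kafka's real assignors (range, round-robin, sticky) are more sophisticated, and the consumer names and partition count below are purely illustrative, but the invariant is the same: each partition is read by exactly one consumer in the group, so the group as a whole reads every partition exactly once.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupAssignmentSketch {

    // Round-robin sketch of dividing a topic's partitions among the
    // members of one consumer group: partition p goes to consumer
    // p mod groupSize, so no partition is read twice within the group.
    static Map<String, List<Integer>> assign(List<String> consumers,
                                             int numPartitions) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String c : consumers) {
            assignment.put(c, new ArrayList<>());
        }
        for (int p = 0; p < numPartitions; p++) {
            assignment.get(consumers.get(p % consumers.size())).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // two consumers in one group sharing six partitions
        System.out.println(assign(List.of("consumer-A", "consumer-B"), 6));
    }
}
```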
*** Data in the Kafka cluster is distributed among several brokers,
and several copies of the same data, called replicas, are kept in the
cluster. This mechanism makes Kafka more reliable, fault-tolerant,
and stable: if one broker fails, another broker takes over the
functions of the broken component, so there is practically no chance
of information loss.
What is Zookeeper?
Like Kafka, ZooKeeper is also an open source tool provided by the
Apache Software Foundation. It provides a centralized service in
distributed systems such as providing configuration information,
synchronization, naming registry, and other group services over
large clusters. Kafka uses Zookeeper in order to track the status of
nodes in the Kafka cluster.
What is the role of Zookeeper in
Kafka?
While working with any distributed system, there must be a way to
coordinate its tasks. In our context, Kafka is a distributed system
that uses ZooKeeper for this coordination. Some other technologies,
such as Elasticsearch and MongoDB, have their own built-in
mechanisms for coordinating tasks.
1) When working with Apache Kafka, the primary role
of ZooKeeper is to track the status of nodes in the Kafka cluster
and to maintain a list of Kafka topics and partitions (ZooKeeper
stores this metadata, not the messages themselves).
…extracted to kafka_2.12-2.6.0.tar
2) Extract the resulting .tar file again to get the kafka_2.12-2.6.0 folder.
3) Copy this folder to a drive of your system such as 'C:/' or 'D:/'.
package com.dev.spring.kafka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.EnableKafka;

@SpringBootApplication
@EnableKafka
public class SpringBoot2ApacheKafkaTestApplication {

   public static void main(String[] args) {
      SpringApplication.run(SpringBoot2ApacheKafkaTestApplication.class, args);
   }
}
Step#3: Create a custom MessageRepository class
Here, we will create a custom MessageRepository class.
Furthermore, we will create a List and add each incoming message
to that List. Moreover, we will create two methods: one to add a
message, and another to retrieve all messages. The code below
demonstrates the concept.
package com.dev.spring.kafka.message.repository;

import java.util.ArrayList;
import java.util.List;
import org.springframework.stereotype.Component;

@Component
public class MessageRepository {

   private final List<String> list = new ArrayList<>();

   // add one incoming message
   public void addMessage(String message) {
      list.add(message);
   }

   // retrieve all messages received so far
   public String getAllMessages() {
      return list.toString();
   }
}
package com.dev.spring.kafka.sender;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class MessageProducer {

   private static final Logger LOG = LoggerFactory.getLogger(MessageProducer.class);

   @Autowired
   private KafkaTemplate<String, String> kafkaTemplate;

   @Value("${myapp.kafka.topic}")
   private String topic;

   // publish one message to the configured topic
   public void sendMessage(String message) {
      LOG.info("Sending message: {}", message);
      kafkaTemplate.send(topic, message);
   }
}
package com.dev.spring.kafka.consumer;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import com.dev.spring.kafka.message.repository.MessageRepository;

@Component
public class MessageConsumer {

   private static final Logger LOG = LoggerFactory.getLogger(MessageConsumer.class);

   @Autowired
   private MessageRepository messageRepo;

   // listen on the configured topic and store each received message
   // (a groupId is required by Kafka consumers; any unique name will do)
   @KafkaListener(topics = "${myapp.kafka.topic}", groupId = "myGroup")
   public void consume(String message) {
      LOG.info("Received message: {}", message);
      messageRepo.addMessage(message);
   }
}
package com.dev.spring.kafka.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import com.dev.spring.kafka.message.repository.MessageRepository;
import com.dev.spring.kafka.sender.MessageProducer;

@RestController
public class KafkaRestController {

   @Autowired
   private MessageProducer producer;

   @Autowired
   private MessageRepository messageRepo;

   // publish the given message to the Kafka topic
   @GetMapping("/send")
   public String sendMessage(@RequestParam("message") String message) {
      producer.sendMessage(message);
      return "Message sent: " + message;
   }

   // list every message consumed so far
   @GetMapping("/getAll")
   public String getAll() {
      return messageRepo.getAllMessages();
   }
}
application.yml
server:
  port: 9090
spring:
  kafka:
    producer:
      bootstrap-servers: localhost:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      bootstrap-servers: localhost:9092
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
myapp:
  kafka:
    topic: myKafkaTest
Your project structure would look something like the screen below.
How to test the Application?
In order to test the application, follow the steps below.
1) Start Zookeeper
cmd>cd C:\kafka_2.12-2.6.0
cmd> .\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
2) Start Kafka setup
cmd> cd C:\kafka_2.12-2.6.0
cmd> .\bin\windows\kafka-server-start.bat .\config\server.properties
3) Create a Topic
cmd> cd C:\kafka_2.12-2.6.0
cmd> .\bin\windows\kafka-topics.bat --create --topic myKafkaTest --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1
4) Run the Spring Boot application.
5) Retrieve all consumed messages in the browser:
http://localhost:9090/getAll
6) Also check your output in the console. You will see output
something like the screen below.