Apache Kafka - Introduction

In Big Data, an enormous volume of data is used. Regarding data, we face two main challenges. The first challenge is how to collect the large volume of data, and the second challenge is how to analyze the collected data. To overcome those challenges, you need a messaging system.

Kafka is designed for distributed high-throughput systems. Kafka tends to work very well as a replacement for a more traditional message broker. In comparison to other messaging systems, Kafka has better throughput, built-in partitioning, replication and inherent fault-tolerance, which makes it a good fit for large-scale message processing applications.
What is Kafka?
Apache Kafka is a distributed publish-subscribe messaging system and a robust queue that can handle
a high volume of data and enables you to pass messages from one end-point to another. Kafka is
suitable for both offline and online message consumption. Kafka messages are persisted on the disk
and replicated within the cluster to prevent data loss. Kafka is built on top of the ZooKeeper
synchronization service. It integrates very well with Apache Storm and Spark for real-time streaming
data analysis.
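To make the publish-subscribe model concrete, here is a minimal in-memory sketch. This is not the Kafka API; the `MiniBroker` class and its methods are purely illustrative, and it omits everything that makes Kafka robust (persistence, partitioning, replication). It only shows the core idea: a producer publishes a message to a named topic, and every subscriber of that topic receives it.

```python
from collections import defaultdict

class MiniBroker:
    """Toy in-memory stand-in for a publish-subscribe broker.

    Each message published to a topic is appended to that topic's log
    and delivered to every subscriber, mimicking Kafka's one-to-many
    publish-subscribe model (no disk persistence or replication here).
    """
    def __init__(self):
        self.logs = defaultdict(list)         # topic -> list of messages
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback to receive future messages on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Append the message to the topic log and fan it out."""
        self.logs[topic].append(message)
        for cb in self.subscribers[topic]:
            cb(message)

broker = MiniBroker()
received = []
broker.subscribe("page-views", received.append)
broker.publish("page-views", {"user": "alice", "page": "/home"})
```

In real Kafka, the producer and consumers are separate processes connected only through the broker, which is what lets you pass messages from one end-point to another without coupling the two sides.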
Benefits
Following are a few benefits of Kafka −
Reliability − Kafka is distributed, partitioned, replicated and fault tolerant.
Durability − Kafka uses a "distributed commit log", which means messages are persisted on disk as fast as possible; hence it is durable.
Performance − Kafka has high throughput for both publishing and subscribing messages. It maintains stable performance even when many TB of messages are stored.
Kafka is very fast and, with replication configured, is designed to provide zero downtime and zero data loss.
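The durability benefit comes from the commit-log structure itself. The sketch below is a simplified, single-node illustration (not Kafka's implementation): records are only ever appended, each record gets a sequential offset, and a consumer resumes reading from whatever offset it last committed, so a slow or restarted reader never loses data.

```python
class CommitLog:
    """Sketch of an append-only commit log, the structure behind
    Kafka's durability: records are appended in order, each gets a
    sequential offset, and consumers track their own read position."""
    def __init__(self):
        self.records = []

    def append(self, record):
        """Append a record and return its offset."""
        self.records.append(record)
        return len(self.records) - 1

    def read(self, offset):
        """Return all records from the given offset onward."""
        return self.records[offset:]

log = CommitLog()
log.append("event-1")
log.append("event-2")
# A consumer that last committed offset 1 resumes from there:
print(log.read(1))   # ['event-2']
```

Because the log is the source of truth and reads do not delete anything, many consumers can read the same data independently, each at its own pace.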
Use Cases
Kafka can be used in many use cases. Some of them are listed below −
Metrics − Kafka is often used for operational monitoring data. This involves aggregating
statistics from distributed applications to produce centralized feeds of operational data.
Log Aggregation Solution − Kafka can be used across an organization to collect logs from
multiple services and make them available in a standard format to multiple consumers.
Stream Processing − Popular frameworks such as Storm and Spark Streaming read data
from a topic, process it, and write the processed data to a new topic where it becomes
available for users and applications. Kafka's strong durability is also very useful in the context
of stream processing.
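The read-process-write loop described above can be sketched in a few lines. This is not Storm or Spark Streaming code; plain Python lists stand in for the source and destination topics, and `process_stream` is a hypothetical helper named here only to show the shape of the pattern.

```python
def process_stream(source, transform):
    """Minimal sketch of a stream-processing stage: consume each
    record from a source topic, apply a transformation, and emit the
    results, which would then be written to a new topic."""
    return [transform(record) for record in source]

# Hypothetical input topic: raw click events with a response time in ms.
raw_clicks = [{"user": "alice", "ms": 120}, {"user": "bob", "ms": 340}]

# Derived topic: the same events, flagged when the response was slow.
slow_clicks = process_stream(raw_clicks, lambda r: {**r, "slow": r["ms"] > 200})
```

In a real pipeline the output would be published back to Kafka, so downstream applications can consume the derived topic exactly as they would any other, and Kafka's durable log lets a failed processing stage replay its input.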