Replication and replicated logs
Replication is one of the most important factors in achieving reliability for Kafka systems. Replicas of the message log for each topic partition are maintained across different servers in a Kafka cluster, and the replication factor can be configured separately for each topic: one topic can have a replication factor of 3 while another uses 5. All reads and writes go through the leader; if the leader fails, one of the followers is elected as the new leader.
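As a concrete illustration of per-topic replication factors, here is a minimal sketch using Kafka's Java AdminClient (available in Kafka 0.11 and later). The broker address and the topic names `orders` and `payments` are placeholder assumptions, not values from this book:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Arrays;
import java.util.Properties;

public class CreateTopicsWithReplication {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical broker address; adjust to your cluster.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Each topic chooses its own replication factor independently:
            // 3 partitions with replication factor 3, and 3 partitions
            // with replication factor 5.
            NewTopic orders = new NewTopic("orders", 3, (short) 3);
            NewTopic payments = new NewTopic("payments", 3, (short) 5);
            admin.createTopics(Arrays.asList(orders, payments)).all().get();
        }
    }
}
```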
Generally, followers keep a copy of the leader's log, and the leader does not mark a message as committed until it receives acknowledgment from all the followers. Log replication algorithms have been implemented in different ways, but each must ensure that if the leader tells the producer a message is committed, that message is available for consumers to read.
There are two common approaches to maintaining such replica consistency. In both, all read and write requests are processed through a leader; they differ slightly in replica management and leader election:
- Quorum-based approach: In this approach, the leader marks a message as committed only when a majority of the replicas have acknowledged receiving it. If the leader fails, a new leader is elected through coordination among the followers. Many leader election algorithms exist; covering them in depth is beyond the scope of this book. Zookeeper follows a quorum-based approach for leader election.
- Primary-backup approach: Kafka follows a different approach to maintaining replicas; the leader in Kafka waits for acknowledgments from all the followers before marking a message as committed. If the leader fails, any of the followers can take over as leader.
This approach can cost more in terms of latency and throughput, but it guarantees better consistency for messages. Each leader maintains a set of in-sync replicas, abbreviated as the ISR; for each partition, the leader and the ISR are stored in Zookeeper (a sketch at the end of this section shows how to inspect them). Writes and reads then happen as follows:
- Write: All leaders and followers maintain their own local log, where the log end offset represents the tail of the log. The offset of the last committed message is called the High Watermark. When a client wants to write a message to a partition, it first looks up the leader of that partition in Zookeeper and sends a write request to it. The leader writes the message to its log and then waits for the followers in the ISR to return acknowledgments. Once the acknowledgments are received, it advances the High Watermark pointer and sends an acknowledgment to the client. If any follower in the ISR fails, the leader simply drops it from the ISR and continues operating with the remaining followers. Once a failed follower comes back and syncs its log with the leader's, the leader adds it to the ISR again. A producer-side sketch of these commit semantics follows the list below.
- Read: All reads happen through the leader only. Only messages that the leader has successfully acknowledged as committed are available for clients to read.
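These commit semantics surface on the producer side through the `acks` setting. The following is a minimal sketch, assuming a broker at `localhost:9092` and a hypothetical topic named `orders`; with `acks=all`, the send callback fires only after the leader has received acknowledgments from the ISR and advanced the High Watermark:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class CommittedWriteSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all: the leader replies only after all in-sync replicas
        // have acknowledged the write, i.e. the message is committed.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "created"),
                    (metadata, exception) -> {
                        if (exception == null) {
                            // Fires once the message is committed and
                            // therefore readable by consumers.
                            System.out.printf("committed at offset %d%n", metadata.offset());
                        } else {
                            exception.printStackTrace();
                        }
                    });
        }
    }
}
```

By contrast, `acks=1` makes the leader reply as soon as it has written to its own log, trading the consistency guarantee for lower latency.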
The following diagram illustrates the Kafka log implementation:

Figure: Kafka log implementation
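Since the leader and ISR are tracked per partition, they can also be inspected at runtime. Here is a minimal sketch using the Java AdminClient's describeTopics call, again assuming a broker at `localhost:9092` and the hypothetical `orders` topic:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.Collections;
import java.util.Properties;

public class DescribeIsrSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("orders"))
                    .all().get().get("orders");
            // Print the current leader and in-sync replica set per partition.
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d: leader=%s isr=%s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}
```

A follower that falls behind drops out of the printed `isr` list until it catches up with the leader's log again, matching the write path described above.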