
In this module, we’ll shift our focus and look at how cluster metadata is managed by the control plane.

In KRaft mode, a subset of the brokers serve as controllers. All of the controller brokers maintain an in-memory metadata cache that is kept up to date, so that any controller can take over as the active controller if needed. One of these controller brokers will be the active controller, and it will handle communicating changes to metadata with the other brokers.

KRaft mode brings significant advantages over the ZooKeeper-based control plane:

- Improved scalability – As shown in the diagram, recovery time is an order of magnitude faster with KRaft than with ZooKeeper. This allows us to efficiently scale to millions of partitions in a single cluster. With ZooKeeper the effective limit was in the tens of thousands. This also makes it easier to take advantage of Kafka in smaller devices at the edge.
- More efficient metadata propagation – Log-based, event-driven metadata propagation results in improved performance for many of Kafka’s core functions.

In KRaft mode, a Kafka cluster can run in dedicated or shared mode. In dedicated mode, some nodes will have their process.roles configuration set to controller, and the rest of the nodes will have it set to broker. In shared mode, some nodes will have process.roles set to controller,broker, and those nodes will do double duty. Which way to go will depend on the size of your cluster.

The brokers that serve as controllers in a KRaft mode cluster are listed in the controller.quorum.voters configuration property, which is set on each broker. This allows all of the brokers to communicate with the controllers.
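To make this concrete, here is a minimal sketch of the relevant server.properties entries for each mode. The node IDs, host names, and ports are hypothetical, and a real deployment would need the rest of the usual broker settings:

```properties
# --- Dedicated mode: a controller-only node ---
process.roles=controller
node.id=1
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093
listeners=CONTROLLER://controller1:9093
controller.listener.names=CONTROLLER

# --- Dedicated mode: a broker-only node (same voters list on every node) ---
process.roles=broker
node.id=4
controller.quorum.voters=1@controller1:9093,2@controller2:9093,3@controller3:9093
listeners=PLAINTEXT://broker4:9092
controller.listener.names=CONTROLLER

# --- Shared mode: a node doing double duty ---
process.roles=broker,controller
```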

KRaft is based upon the Raft consensus protocol, which was introduced to Kafka as part of KIP-500, with additional details defined in other related KIPs. In KRaft mode, cluster metadata, reflecting the current state of all controller-managed resources, is stored in a single-partition Kafka topic called __cluster_metadata. KRaft uses this topic to synchronize cluster state changes across controller and broker nodes.

The active controller is the leader of this internal metadata topic’s single partition. So, rather than the controller broadcasting metadata changes to the other controllers or to brokers, they each fetch the changes. This makes it very efficient to keep all the controllers and brokers in sync, and it also shortens restart times of brokers and controllers. This is one of the features of KRaft that makes it so much more efficient than the ZooKeeper-based control plane.
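Since the metadata topic is stored on disk like any other Kafka log, you can decode it with the stock kafka-dump-log.sh tool. A minimal sketch, assuming the log directory is /tmp/kraft-combined-logs (the path used by Kafka’s sample KRaft configs; yours is whatever log.dirs points at):

```bash
# The metadata topic has exactly one partition, stored like any other log
ls /tmp/kraft-combined-logs/__cluster_metadata-0/

# Decode the metadata records (RegisterBrokerRecord, PartitionChangeRecord, etc.)
bin/kafka-dump-log.sh --cluster-metadata-decoder \
  --files /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log
```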

Since cluster metadata is stored in a Kafka topic, replication of that data is very similar to what we saw in the data plane replication module. The active controller is the leader of the metadata topic’s single partition, and it will receive all writes. The other controllers are followers and will fetch those changes. We still use offsets and leader epochs the same as with the data plane. However, when a leader needs to be elected, this is done via quorum, rather than an in-sync replica set. So, there is no ISR involved in metadata replication. Another difference is that metadata records are flushed to disk immediately as they are written to each node’s local log.
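You can watch this replication directly with the kafka-metadata-quorum.sh tool that ships with Kafka 3.3 and later. A sketch, with a placeholder bootstrap server; the report lists each node’s log end offset, its lag, and whether it is the leader, a follower, or an observer:

```bash
# Show per-replica log end offsets, lag, and roles for the metadata quorum
bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 \
  describe --replication
```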

Leader Election

Controller leader election is required when the cluster is started, as well as when the current leader stops, either as part of a rolling upgrade or due to a failure. Let’s now take a look at the steps involved in KRaft leader election.
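As a quick way to see the outcome of an election at any time, the same quorum tool reports the current leader and its epoch. A sketch, again with a placeholder bootstrap server:

```bash
# Fields include LeaderId, LeaderEpoch, HighWatermark, and CurrentVoters
bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 \
  describe --status
```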
