AK 4.0.X

Documentation for AK 4.0.X

1 - Getting Started

This section provides an overview of what Kafka is, why it is useful, and how to get started using it.

1.1 - Introduction

What is event streaming?

Event streaming is the digital equivalent of the human body’s central nervous system. It is the technological foundation for the ‘always-on’ world where businesses are increasingly software-defined and automated, and where the user of software is more software.

Technically speaking, event streaming is the practice of capturing data in real-time from event sources like databases, sensors, mobile devices, cloud services, and software applications in the form of streams of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting to the event streams in real-time as well as retrospectively; and routing the event streams to different destination technologies as needed. Event streaming thus ensures a continuous flow and interpretation of data so that the right information is at the right place, at the right time.

What can I use event streaming for?

Event streaming is applied to a wide variety of use cases across a plethora of industries and organizations. Its many examples include:

  • To process payments and financial transactions in real-time, such as in stock exchanges, banks, and insurance companies.
  • To track and monitor cars, trucks, fleets, and shipments in real-time, such as in logistics and the automotive industry.
  • To continuously capture and analyze sensor data from IoT devices or other equipment, such as in factories and wind parks.
  • To collect and immediately react to customer interactions and orders, such as in retail, the hotel and travel industry, and mobile applications.
  • To monitor patients in hospital care and predict changes in condition to ensure timely treatment in emergencies.
  • To connect, store, and make available data produced by different divisions of a company.
  • To serve as the foundation for data platforms, event-driven architectures, and microservices.

Apache Kafka® is an event streaming platform. What does that mean?

Kafka combines three key capabilities so you can implement your use cases for event streaming end-to-end with a single battle-tested solution:

  1. To publish (write) and subscribe to (read) streams of events, including continuous import/export of your data from other systems.
  2. To store streams of events durably and reliably for as long as you want.
  3. To process streams of events as they occur or retrospectively.

And all this functionality is provided in a distributed, highly scalable, elastic, fault-tolerant, and secure manner. Kafka can be deployed on bare-metal hardware, virtual machines, and containers, and on-premises as well as in the cloud. You can choose between self-managing your Kafka environments and using fully managed services offered by a variety of vendors.

How does Kafka work in a nutshell?

Kafka is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, and containers in on-premises as well as cloud environments.

Servers: Kafka is run as a cluster of one or more servers that can span multiple datacenters or cloud regions. Some of these servers form the storage layer, called the brokers. Other servers run Kafka Connect to continuously import and export data as event streams to integrate Kafka with your existing systems such as relational databases as well as other Kafka clusters. To let you implement mission-critical use cases, a Kafka cluster is highly scalable and fault-tolerant: if any of its servers fails, the other servers will take over its work to ensure continuous operations without any data loss.

Clients: They allow you to write distributed applications and microservices that read, write, and process streams of events in parallel, at scale, and in a fault-tolerant manner even in the case of network problems or machine failures. Kafka ships with some such clients included, which are augmented by dozens of clients provided by the Kafka community: clients are available for Java and Scala including the higher-level Kafka Streams library, for Go, Python, C/C++, and many other programming languages as well as REST APIs.

Main Concepts and Terminology

An event records the fact that “something happened” in the world or in your business. It is also called record or message in the documentation. When you read or write data to Kafka, you do this in the form of events. Conceptually, an event has a key, value, timestamp, and optional metadata headers. Here’s an example event:

  • Event key: “Alice”
  • Event value: “Made a payment of $200 to Bob”
  • Event timestamp: “Jun. 25, 2020 at 2:06 p.m.”
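For illustration, here is a sketch of how such an event could be constructed with the Java producer client. The topic name "payments" and the epoch-millisecond timestamp are assumptions for this example:

import org.apache.kafka.clients.producer.ProducerRecord;

// A record with an explicit key, value, and timestamp; headers are optional and omitted here.
ProducerRecord<String, String> event = new ProducerRecord<>(
        "payments",                       // topic (assumed for this example)
        null,                             // partition: null lets the partitioner decide
        1593093960000L,                   // timestamp: Jun. 25, 2020 at 2:06 p.m. (UTC assumed), in epoch milliseconds
        "Alice",                          // event key
        "Made a payment of $200 to Bob"); // event value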

Producers are those client applications that publish (write) events to Kafka, and consumers are those that subscribe to (read and process) these events. In Kafka, producers and consumers are fully decoupled and agnostic of each other, which is a key design element to achieve the high scalability that Kafka is known for. For example, producers never need to wait for consumers. Kafka provides various guarantees such as the ability to process events exactly-once.

Events are organized and durably stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder. An example topic name could be “payments”. Topics in Kafka are always multi-producer and multi-subscriber: a topic can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to these events. Events in a topic can be read as often as needed—unlike traditional messaging systems, events are not deleted after consumption. Instead, you define for how long Kafka should retain your events through a per-topic configuration setting, after which old events will be discarded. Kafka’s performance is effectively constant with respect to data size, so storing data for a long time is perfectly fine.
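For example, the per-topic retention.ms setting controls this retention period. Below is a sketch of setting it to seven days with the Java Admin client; the broker address and topic name are assumptions for this example, and imports and exception handling are omitted:

// Keep events in the "payments" topic for seven days (retention.ms is in milliseconds).
try (Admin admin = Admin.create(Map.of("bootstrap.servers", "localhost:9092"))) {
    ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "payments");
    AlterConfigOp setRetention = new AlterConfigOp(
            new ConfigEntry("retention.ms", "604800000"), AlterConfigOp.OpType.SET);
    admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
}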

Topics are partitioned, meaning a topic is spread over a number of “buckets” located on different Kafka brokers. This distributed placement of your data is very important for scalability because it allows client applications to both read and write the data from/to many brokers at the same time. When a new event is published to a topic, it is actually appended to one of the topic’s partitions. Events with the same event key (e.g., a customer or vehicle ID) are written to the same partition, and Kafka guarantees that any consumer of a given topic-partition will always read that partition’s events in exactly the same order as they were written.

Figure: This example topic has four partitions P1–P4. Two different producer clients are publishing, independently from each other, new events to the topic by writing events over the network to the topic’s partitions. Events with the same key (denoted by their color in the figure) are written to the same partition. Note that both producers can write to the same partition if appropriate.
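As a sketch of what this guarantee looks like from the Java producer client, records that share a key always land in the same partition. The snippet assumes a configured KafkaProducer<String, String> named producer and the example "payments" topic; imports and exception handling are omitted:

// Both records carry the key "Alice", so the partitioner assigns them to the same partition,
// and any consumer of that partition reads them in the order they were written.
RecordMetadata first = producer.send(new ProducerRecord<>("payments", "Alice", "payment-1")).get();
RecordMetadata second = producer.send(new ProducerRecord<>("payments", "Alice", "payment-2")).get();
System.out.println(first.partition() == second.partition());  // prints "true"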

To make your data fault-tolerant and highly-available, every topic can be replicated, even across geo-regions or datacenters, so that there are always multiple brokers that have a copy of the data just in case things go wrong, you want to do maintenance on the brokers, and so on. A common production setting is a replication factor of 3, i.e., there will always be three copies of your data. This replication is performed at the level of topic-partitions.
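For example, a topic with a replication factor of 3 could be created with the Java Admin client as sketched below; the broker address, topic name, and partition count are assumptions for this example, the cluster needs at least three brokers, and imports and exception handling are omitted:

// Create a topic with 4 partitions, each replicated to 3 brokers.
try (Admin admin = Admin.create(Map.of("bootstrap.servers", "localhost:9092"))) {
    admin.createTopics(List.of(new NewTopic("payments", 4, (short) 3))).all().get();
}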

This primer should be sufficient for an introduction. The Design section of the documentation explains Kafka’s various concepts in full detail, if you are interested.

Kafka APIs

In addition to command line tooling for management and administration tasks, Kafka has five core APIs for Java and Scala:

  • The Admin API to manage and inspect topics, brokers, and other Kafka objects.
  • The Producer API to publish (write) a stream of events to one or more Kafka topics.
  • The Consumer API to subscribe to (read) one or more topics and to process the stream of events produced to them.
  • The Kafka Streams API to implement stream processing applications and microservices. It provides higher-level functions to process event streams, including transformations, stateful operations like aggregations and joins, windowing, processing based on event-time, and more. Input is read from one or more topics in order to generate output to one or more topics, effectively transforming the input streams to output streams.
  • The Kafka Connect API to build and run reusable data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications so they can integrate with Kafka. For example, a connector to a relational database like PostgreSQL might capture every change to a set of tables. However, in practice, you typically don’t need to implement your own connectors because the Kafka community already provides hundreds of ready-to-use connectors.

Where to go from here

1.2 - Use Cases

Here is a description of a few of the popular use cases for Apache Kafka®. For an overview of a number of these areas in action, see this blog post.

Messaging

Kafka works well as a replacement for a more traditional message broker. Message brokers are used for a variety of reasons (to decouple processing from data producers, to buffer unprocessed messages, etc). In comparison to most messaging systems Kafka has better throughput, built-in partitioning, replication, and fault-tolerance which makes it a good solution for large scale message processing applications.

In our experience messaging uses are often comparatively low-throughput, but may require low end-to-end latency and often depend on the strong durability guarantees Kafka provides.

In this domain Kafka is comparable to traditional messaging systems such as ActiveMQ or RabbitMQ.

Website Activity Tracking

The original use case for Kafka was to be able to rebuild a user activity tracking pipeline as a set of real-time publish-subscribe feeds. This means site activity (page views, searches, or other actions users may take) is published to central topics with one topic per activity type. These feeds are available for subscription for a range of use cases including real-time processing, real-time monitoring, and loading into Hadoop or offline data warehousing systems for offline processing and reporting.

Activity tracking is often very high volume as many activity messages are generated for each user page view.

Metrics

Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

Log Aggregation

Many people use Kafka as a replacement for a log aggregation solution. Log aggregation typically collects physical log files off servers and puts them in a central place (a file server or HDFS perhaps) for processing. Kafka abstracts away the details of files and gives a cleaner abstraction of log or event data as a stream of messages. This allows for lower-latency processing and easier support for multiple data sources and distributed data consumption. In comparison to log-centric systems like Scribe or Flume, Kafka offers equally good performance, stronger durability guarantees due to replication, and much lower end-to-end latency.

Stream Processing

Many users of Kafka process data in processing pipelines consisting of multiple stages, where raw input data is consumed from Kafka topics and then aggregated, enriched, or otherwise transformed into new topics for further consumption or follow-up processing. For example, a processing pipeline for recommending news articles might crawl article content from RSS feeds and publish it to an “articles” topic; further processing might normalize or deduplicate this content and publish the cleansed article content to a new topic; a final processing stage might attempt to recommend this content to users. Such processing pipelines create graphs of real-time data flows based on the individual topics. Starting in 0.10.0.0, a light-weight but powerful stream processing library called Kafka Streams is available in Apache Kafka to perform such data processing as described above. Apart from Kafka Streams, alternative open source stream processing tools include Apache Storm and Apache Samza.

Event Sourcing

Event sourcing is a style of application design where state changes are logged as a time-ordered sequence of records. Kafka’s support for very large stored log data makes it an excellent backend for an application built in this style.

Commit Log

Kafka can serve as a kind of external commit-log for a distributed system. The log helps replicate data between nodes and acts as a re-syncing mechanism for failed nodes to restore their data. The log compaction feature in Kafka helps support this usage. In this usage Kafka is similar to the Apache BookKeeper project.

1.3 - Quick Start

Step 1: Get Kafka

Download the latest Kafka release and extract it:

$ tar -xzf kafka_2.13-4.0.0.tgz
$ cd kafka_2.13-4.0.0

Step 2: Start the Kafka environment

NOTE: Your local environment must have Java 17+ installed.

Kafka can be run using local scripts and downloaded files or the Docker image.

Using downloaded files

Generate a Cluster UUID

$ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

Format Log Directories

$ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties

Start the Kafka Server

$ bin/kafka-server-start.sh config/server.properties

Once the Kafka server has successfully launched, you will have a basic Kafka environment running and ready to use.

Using JVM Based Apache Kafka Docker Image

Get the Docker image:

$ docker pull apache/kafka:4.0.0

Start the Kafka Docker container:

$ docker run -p 9092:9092 apache/kafka:4.0.0

Using GraalVM Based Native Apache Kafka Docker Image

Get the Docker image:

$ docker pull apache/kafka-native:4.0.0

Start the Kafka Docker container:

$ docker run -p 9092:9092 apache/kafka-native:4.0.0

Step 3: Create a topic to store your events

Kafka is a distributed event streaming platform that lets you read, write, store, and process events (also called records or messages in the documentation) across many machines.

Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements from IoT devices or medical equipment, and much more. These events are organized and stored in topics. Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.

So before you can write your first events, you must create a topic. Open another terminal session and run:

$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

All of Kafka’s command line tools have additional options: run the kafka-topics.sh command without any arguments to display usage information. For example, it can also show you details such as the partition count of the new topic:

$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
Topic: quickstart-events        TopicId: NPmZHyhbR9y00wMglMH2sg PartitionCount: 1       ReplicationFactor: 1	Configs:
Topic: quickstart-events Partition: 0    Leader: 0   Replicas: 0 Isr: 0

Step 4: Write some events into the topic

A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events. Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you need—even forever.

Run the console producer client to write a few events into your topic. By default, each line you enter will result in a separate event being written to the topic.

$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
>This is my first event
>This is my second event

You can stop the producer client with Ctrl-C at any time.

Step 5: Read the events

Open another terminal session and run the console consumer client to read the events you just created:

$ bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
This is my first event
This is my second event

You can stop the consumer client with Ctrl-C at any time.

Feel free to experiment: for example, switch back to your producer terminal (previous step) to write additional events, and see how the events immediately show up in your consumer terminal.

Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want. You can easily verify this by opening yet another terminal session and re-running the previous command again.

Step 6: Import/export your data as streams of events with Kafka Connect

You probably have lots of data in existing systems like relational databases or traditional messaging systems, along with many applications that already use these systems. Kafka Connect allows you to continuously ingest data from external systems into Kafka, and vice versa. It is an extensible tool that runs connectors, which implement the custom logic for interacting with an external system. It is thus very easy to integrate existing systems with Kafka. To make this process even easier, there are hundreds of such connectors readily available.

In this quickstart we’ll see how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a Kafka topic to a file.

First, make sure to add connect-file-4.0.0.jar to the plugin.path property in the Connect worker’s configuration. For the purpose of this quickstart we’ll use a relative path and consider the connectors’ package as an uber jar, which works when the quickstart commands are run from the installation directory. However, it’s worth noting that for production deployments using absolute paths is always preferable. See plugin.path for a detailed description of how to set this config.

Edit the config/connect-standalone.properties file, add or change the plugin.path configuration property to match the following, and save the file:

$ echo "plugin.path=libs/connect-file-4.0.0.jar" >> config/connect-standalone.properties

Then, start by creating some seed data to test with:

$ echo -e "foo\nbar" > test.txt

Or on Windows:

$ echo foo > test.txt
$ echo bar >> test.txt

Next, we’ll start two connectors running in standalone mode, which means they run in a single, local, dedicated process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data. The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector class to instantiate, and any other configuration required by the connector.

$ bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties

These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.

During startup you’ll see a number of log messages, including some indicating that the connectors are being instantiated. Once the Kafka Connect process has started, the source connector should start reading lines from test.txt and producing them to the topic connect-test, and the sink connector should start reading messages from the topic connect-test and write them to the file test.sink.txt. We can verify the data has been delivered through the entire pipeline by examining the contents of the output file:

$ more test.sink.txt
foo
bar

Note that the data is being stored in the Kafka topic connect-test, so we can also run a console consumer to see the data in the topic (or use custom consumer code to process it):

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
…

The connectors continue to process data, so we can add data to the file and see it move through the pipeline:

$ echo "Another line" >> test.txt

You should see the line appear in the console consumer output and in the sink file.

Step 7: Process your events with Kafka Streams

Once your data is stored in Kafka as events, you can process the data with the Kafka Streams client library for Java/Scala. It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka’s server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful operations and aggregations, windowing, joins, processing based on event-time, and much more.

To give you a first taste, here’s how one would implement the popular WordCount algorithm:

StreamsBuilder builder = new StreamsBuilder();

// Read the stream of text lines from the input topic.
KStream<String, String> textLines = builder.stream("quickstart-events");

// Split each line into words, group by word, and count the occurrences of each word.
KTable<String, Long> wordCounts = textLines
            .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
            .groupBy((keyIgnored, word) -> word)
            .count();

// Write the continuously updated counts to an output topic.
wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
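To actually run this topology, you would build it and start a KafkaStreams instance. A minimal sketch, assuming the application id "streams-wordcount" and the local broker from this quickstart (imports and exception handling omitted):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");  // assumed application id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();
// Close the instance (and release its resources) when the JVM shuts down.
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));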

The Kafka Streams demo and the app development tutorial demonstrate how to code and run such a streaming application from start to finish.

Step 8: Terminate the Kafka environment

Now that you reached the end of the quickstart, feel free to tear down the Kafka environment—or continue playing around.

  1. Stop the producer and consumer clients with Ctrl-C, if you haven’t done so already.
  2. Stop the Kafka broker with Ctrl-C.

If you also want to delete any data of your local Kafka environment including any events you have created along the way, run the command:

$ rm -rf /tmp/kafka-logs /tmp/kraft-combined-logs

Congratulations!

You have successfully finished the Apache Kafka quickstart.

To learn more, we suggest the following next steps:

1.4 - Ecosystem

There are a plethora of tools that integrate with Kafka outside the main distribution. The ecosystem page lists many of these, including stream processing systems, Hadoop integration, monitoring, and deployment tools.

1.5 - Upgrading

Upgrading to 4.0.0

Upgrading Clients to 4.0.0

For a rolling upgrade:

  1. Upgrade the clients one at a time: shut down the client, update the code, and restart it.
  2. Clients (including Streams and Connect) must be on version 2.1 or higher before upgrading to 4.0. Many deprecated APIs were removed in Kafka 4.0. For more information about the compatibility, please refer to the compatibility matrix or KIP-1124.

Upgrading Servers to 4.0.0 from any version 3.3.x through 3.9.x

Note: Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been removed. As such, broker upgrades to 4.0.0 (and higher) require KRaft mode and the software and metadata versions must be at least 3.3.x (the first version when KRaft mode was deemed production ready). For clusters in KRaft mode with versions older than 3.3.x, we recommend upgrading to 3.9.x before upgrading to 4.0.x. Clusters in ZooKeeper mode have to be migrated to KRaft mode before they can be upgraded to 4.0.x.

For a rolling upgrade:

  1. Upgrade the brokers one at a time: shut down the broker, update the code, and restart it. Once you have done so, the brokers will be running the latest version and you can verify that the cluster’s behavior and performance meets expectations.
  2. Once the cluster’s behavior and performance has been verified, finalize the upgrade by running bin/kafka-features.sh --bootstrap-server localhost:9092 upgrade --release-version 4.0
  3. Note that cluster metadata downgrade is not supported in this version since it has metadata changes. Every MetadataVersion has a boolean parameter that indicates if there are metadata changes (i.e. IBP_4_0_IV1(23, "4.0", "IV1", true) means this version has metadata changes). Given your current and target versions, a downgrade is only possible if there are no metadata changes in the versions between.

Notable changes in 4.0.0

  • Old protocol API versions have been removed. Users should ensure brokers are version 2.1 or higher before upgrading Java clients (including Connect and Kafka Streams which use the clients internally) to 4.0. Similarly, users should ensure their Java clients (including Connect and Kafka Streams) version is 2.1 or higher before upgrading brokers to 4.0. Finally, care also needs to be taken when it comes to Kafka clients that are not part of Apache Kafka; please see KIP-896 for the details.
  • Apache Kafka 4.0 only supports KRaft mode - ZooKeeper mode has been removed. About version upgrade, check Upgrading to 4.0.0 from any version 3.3.x through 3.9.x for more info.
  • Apache Kafka 4.0 ships with a brand-new group coordinator implementation (See here). Functionally speaking, it implements all the same APIs. There are reasonable defaults, but the behavior of the new group coordinator can be tuned by setting the configurations with prefix group.coordinator.
  • The Next Generation of the Consumer Rebalance Protocol (KIP-848) is now Generally Available (GA) in Apache Kafka 4.0. The protocol is automatically enabled on the server when the upgrade to 4.0 is finalized. Note that once the new protocol is used by consumer groups, the cluster can only downgrade to version 3.4.1 or newer. Check here for details.
  • Transactions Server Side Defense (KIP-890) brings a strengthened transactional protocol to Apache Kafka 4.0. The new and improved transactional protocol is enabled when the upgrade to 4.0 is finalized. When using 4.0 producer clients, the producer epoch is bumped on every transaction to ensure every transaction includes the intended messages and duplicates are not written as part of the next transaction. Downgrading the protocol is safe. For more information check here
  • Eligible Leader Replicas (KIP-966 Part 1) enhances the replication protocol for the Apache Kafka 4.0. Now the KRaft controller keeps track of the data partition replicas that are not included in ISR but are safe to be elected as leader without data loss. Such replicas are stored in the partition metadata as the Eligible Leader Replicas(ELR). For more information check here
  • Since Apache Kafka 4.0.0, we have added a system property (“org.apache.kafka.sasl.oauthbearer.allowed.urls”) to set the URLs that are allowed as SASL OAUTHBEARER token or JWKS endpoints. By default, the value is an empty list. Users should explicitly set the allowed list if necessary.
  • A number of deprecated classes, methods, configurations and tools have been removed.
    • Common
      • The metrics.jmx.blacklist and metrics.jmx.whitelist configurations were removed from the org.apache.kafka.common.metrics.JmxReporter. Please use metrics.jmx.exclude and metrics.jmx.include respectively instead.
      • The auto.include.jmx.reporter configuration was removed. The metric.reporters configuration is now set to org.apache.kafka.common.metrics.JmxReporter by default.
      • The constructor org.apache.kafka.common.metrics.JmxReporter with string argument was removed. See KIP-606 for details.
      • The bufferpool-wait-time-total, io-waittime-total, and iotime-total metrics were removed. Please use bufferpool-wait-time-ns-total, io-wait-time-ns-total, and io-time-ns-total metrics as replacements, respectively.
      • The kafka.common.requests.DescribeLogDirsResponse.LogDirInfo class was removed. Please use the kafka.clients.admin.DescribeLogDirsResult.descriptions() and kafka.clients.admin.DescribeLogDirsResult.allDescriptions() methods instead.
      • The kafka.common.requests.DescribeLogDirsResponse.ReplicaInfo class was removed. Please use the kafka.clients.admin.DescribeLogDirsResult.descriptions() and kafka.clients.admin.DescribeLogDirsResult.allDescriptions() methods instead.
      • The org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler class was removed. Please use the org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler class instead.
      • The org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerValidatorCallbackHandler class was removed. Please use the org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallbackHandler class instead.
      • The org.apache.kafka.common.errors.NotLeaderForPartitionException class was removed. The org.apache.kafka.common.errors.NotLeaderOrFollowerException is returned if a request could not be processed because the broker is not the leader or follower for a topic partition.
      • The org.apache.kafka.clients.producer.internals.DefaultPartitioner and org.apache.kafka.clients.producer.UniformStickyPartitioner classes were removed.
      • The log.message.format.version and message.format.version configs were removed.
      • The function onNewBatch in org.apache.kafka.clients.producer.Partitioner class was removed.
      • The default properties files for KRaft mode are no longer stored in the separate config/kraft directory since ZooKeeper has been removed. These files have been consolidated with other configuration files. Now all configuration files are in the config directory.
      • The valid format for --bootstrap-server only supports comma-separated values, such as host1:port1,host2:port2,.... Providing other formats, like space-separated bootstrap servers (e.g., host1:port1 host2:port2 host3:port3), will result in an exception, even though this was allowed in Apache Kafka versions prior to 4.0.
    • Broker
      • The delegation.token.master.key configuration was removed. Please use delegation.token.secret.key instead.
      • The offsets.commit.required.acks configuration was removed. See KIP-1041 for details.
      • The log.message.timestamp.difference.max.ms configuration was removed. Please use log.message.timestamp.before.max.ms and log.message.timestamp.after.max.ms instead. See KIP-937 for details.
      • The remote.log.manager.copier.thread.pool.size configuration default value was changed to 10 from -1. Values of -1 are no longer valid; the minimum valid value is 1. See KIP-1030 for details.
      • The remote.log.manager.expiration.thread.pool.size configuration default value was changed to 10 from -1. Values of -1 are no longer valid; the minimum valid value is 1. See KIP-1030 for details.
      • The remote.log.manager.thread.pool.size configuration default value was changed to 2 from 10. See KIP-1030 for details.
      • The minimum segment.bytes/log.segment.bytes has changed from 14 bytes to 1MB. See KIP-1030 for details.
    • MirrorMaker
      • The original MirrorMaker (MM1) and related classes were removed. Please use the Connect-based MirrorMaker (MM2), as described in the Geo-Replication section.
      • The use.incremental.alter.configs configuration was removed from MirrorSourceConnector. The behavior now matches what the previous required setting provided; therefore, users should ensure that brokers in the target cluster are running version 2.3.0 or later.
      • The add.source.alias.to.metrics configuration was removed from MirrorSourceConnector. The source cluster alias is now always added to the metrics.
      • The config.properties.blacklist was removed from the org.apache.kafka.connect.mirror.MirrorSourceConfig. Please use config.properties.exclude instead.
      • The topics.blacklist was removed from the org.apache.kafka.connect.mirror.MirrorSourceConfig. Please use topics.exclude instead.
      • The groups.blacklist was removed from the org.apache.kafka.connect.mirror.MirrorSourceConfig. Please use groups.exclude instead.
    • Tools
      • The kafka.common.MessageReader class was removed. Please use the org.apache.kafka.tools.api.RecordReader interface to build custom readers for the kafka-console-producer tool.
      • The kafka.tools.DefaultMessageFormatter class was removed. Please use the org.apache.kafka.tools.consumer.DefaultMessageFormatter class instead.
      • The kafka.tools.LoggingMessageFormatter class was removed. Please use the org.apache.kafka.tools.consumer.LoggingMessageFormatter class instead.
      • The kafka.tools.NoOpMessageFormatter class was removed. Please use the org.apache.kafka.tools.consumer.NoOpMessageFormatter class instead.
      • The --whitelist option was removed from the kafka-console-consumer command line tool. Please use --include instead.
      • Redirections from the old tools packages have been removed: kafka.admin.FeatureCommand, kafka.tools.ClusterTool, kafka.tools.EndToEndLatency, kafka.tools.StateChangeLogMerger, kafka.tools.StreamsResetter, kafka.tools.JmxTool.
      • The --authorizer, --authorizer-properties, and --zk-tls-config-file options were removed from the kafka-acls command line tool. Please use --bootstrap-server or --bootstrap-controller instead.
      • The kafka.serializer.Decoder trait was removed, please use the org.apache.kafka.tools.api.Decoder interface to build custom decoders for the kafka-dump-log tool.
      • The kafka.coordinator.group.OffsetsMessageFormatter class was removed. Please use the org.apache.kafka.tools.consumer.OffsetsMessageFormatter class instead.
      • The kafka.coordinator.group.GroupMetadataMessageFormatter class was removed. Please use the org.apache.kafka.tools.consumer.GroupMetadataMessageFormatter class instead.
      • The kafka.coordinator.transaction.TransactionLogMessageFormatter class was removed. Please use the org.apache.kafka.tools.consumer.TransactionLogMessageFormatter class instead.
      • The --topic-white-list option was removed from the kafka-replica-verification command line tool. Please use --topics-include instead.
      • The --broker-list option was removed from the kafka-verifiable-consumer command line tool. Please use --bootstrap-server instead.
      • kafka-configs.sh now uses the incrementalAlterConfigs API to alter broker configurations instead of the deprecated alterConfigs API, and it will fail directly if the broker doesn’t support the incrementalAlterConfigs API, which means the broker version is prior to 2.3.x. See KIP-1011 for more details.
      • The kafka.admin.ZkSecurityMigrator tool was removed.
    • Connect
      • The whitelist and blacklist configurations were removed from the org.apache.kafka.connect.transforms.ReplaceField transformation. Please use include and exclude respectively instead.
      • The onPartitionsRevoked(Collection<TopicPartition>) and onPartitionsAssigned(Collection<TopicPartition>) methods were removed from SinkTask.
      • The commitRecord(SourceRecord) method was removed from SourceTask.
    • Consumer
      • The poll(long) method was removed from the consumer. Please use poll(Duration) instead. Note that there is a difference in behavior between the two methods. The poll(Duration) method does not block beyond the timeout awaiting partition assignment, whereas the earlier poll(long) method used to wait beyond the timeout.
      • The committed(TopicPartition) and committed(TopicPartition, Duration) methods were removed from the consumer. Please use committed(Set<TopicPartition>) and committed(Set<TopicPartition>, Duration) instead.
      • The setException(KafkaException) method was removed from the org.apache.kafka.clients.consumer.MockConsumer. Please use setPollException(KafkaException) instead.
    • Producer
      • The enable.idempotence configuration will no longer automatically fall back when the max.in.flight.requests.per.connection value exceeds 5.
      • The deprecated sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, String) method has been removed from the Producer API.
      • The default linger.ms changed from 0 to 5 in Apache Kafka 4.0 as the efficiency gains from larger batches typically result in similar or lower producer latency despite the increased linger.
    • Admin client
      • The alterConfigs method was removed from the org.apache.kafka.clients.admin.Admin. Please use incrementalAlterConfigs instead.
      • The org.apache.kafka.common.ConsumerGroupState enumeration and related methods have been deprecated. Please use GroupState instead which applies to all types of group.
      • The Admin.describeConsumerGroups method used to return a ConsumerGroupDescription in state DEAD if the group ID was not found. In Apache Kafka 4.0, the GroupIdNotFoundException is thrown instead as part of the support for new types of group.
      • The org.apache.kafka.clients.admin.DeleteTopicsResult.values() method was removed. Please use org.apache.kafka.clients.admin.DeleteTopicsResult.topicNameValues() instead.
      • The org.apache.kafka.clients.admin.TopicListing.TopicListing(String, boolean) method was removed. Please use org.apache.kafka.clients.admin.TopicListing.TopicListing(String, Uuid, boolean) instead.
      • The org.apache.kafka.clients.admin.ListConsumerGroupOffsetsOptions.topicPartitions(List<TopicPartition>) method was removed. Please use org.apache.kafka.clients.admin.Admin.listConsumerGroupOffsets(Map<String, ListConsumerGroupOffsetsSpec>, ListConsumerGroupOffsetsOptions) instead.
      • The deprecated dryRun methods were removed from the org.apache.kafka.clients.admin.UpdateFeaturesOptions. Please use validateOnly instead.
      • The constructor org.apache.kafka.clients.admin.FeatureUpdate with short and boolean arguments was removed. Please use the constructor that accepts short and the specified UpgradeType enum instead.
      • The allowDowngrade method was removed from the org.apache.kafka.clients.admin.FeatureUpdate.
      • The org.apache.kafka.clients.admin.DescribeTopicsResult.DescribeTopicsResult(Map<String, KafkaFuture<TopicDescription>>) method was removed. Please use org.apache.kafka.clients.admin.DescribeTopicsResult.DescribeTopicsResult(Map<Uuid, KafkaFuture<TopicDescription>>, Map<String, KafkaFuture<TopicDescription>>) instead.
      • The values() method was removed from the org.apache.kafka.clients.admin.DescribeTopicsResult. Please use topicNameValues() instead.
      • The all() method was removed from the org.apache.kafka.clients.admin.DescribeTopicsResult. Please use allTopicNames() instead.
    • Kafka Streams
      • All public APIs deprecated in Apache Kafka 3.6 or earlier releases have been removed, with the exception of JoinWindows.of() and JoinWindows#grace(). See KAFKA-17531 for details.
      • The most important changes are highlighted in the Kafka Streams upgrade guide.
      • For a full list of changes, see KAFKA-12822.
  • Other changes:
    • The minimum Java version required by clients and Kafka Streams applications has been increased from Java 8 to Java 11, while brokers, Connect, and tools now require Java 17. See KIP-750 and KIP-1013 for more details.
    • Java 23 support has been added in Apache Kafka 4.0.
    • Scala 2.12 support has been removed in Apache Kafka 4.0. See KIP-751 for more details.
    • The logging framework has been migrated from Log4j to Log4j2. Users can use the log4j-transform-cli tool to automatically convert their existing Log4j configuration files to Log4j2 format. See log4j-transform-cli for more details. Log4j2 provides limited compatibility for Log4j configurations. See Use Log4j 1 to Log4j 2 bridge for more information.
    • KafkaLog4jAppender has been removed; users should migrate to the Log4j2 appender. See KafkaAppender for more details.
    • The --delete-config option in the kafka-topics command line tool has been deprecated.
    • For implementors of RemoteLogMetadataManager (RLMM), a new API nextSegmentWithTxnIndex is introduced in RLMM to allow the implementation to return the next segment metadata with a transaction index. This API is used when the consumers are enabled with isolation level as READ_COMMITTED. See KIP-1058 for more details.
    • The criteria for identifying internal topics in ReplicationPolicy and DefaultReplicationPolicy have been updated to enable the replication of topics that appear to be internal but aren’t truly internal to Kafka and Mirror Maker 2. See KIP-1074 for more details.
    • KIP-714 is now enabled for Kafka Streams via KIP-1076. This allows collecting not only the metrics of the internally used clients of a Kafka Streams application via a broker-side plugin, but also the metrics of the Kafka Streams runtime itself.
    • The default value of ‘num.recovery.threads.per.data.dir’ has been changed from 1 to 2. The impact of this is faster recovery after an unclean shutdown at the expense of extra IO cycles. See KIP-1030 for details.
    • The default value of ‘message.timestamp.after.max.ms’ has been changed from Long.Max to 1 hour. The impact of this is that messages with a timestamp more than 1 hour in the future will be rejected when message.timestamp.type=CreateTime is set. See KIP-1030 for details.
    • Introduced in KIP-890, the TransactionAbortableException enhances error handling within transactional operations by clearly indicating scenarios where transactions should be aborted due to errors. It is important for applications to properly manage both TimeoutException and TransactionAbortableException when working with transactional producers.
      • TimeoutException: This exception indicates that a transactional operation has timed out. Given the risk of message duplication that can arise from retrying operations after a timeout (potentially violating exactly-once semantics), applications should treat timeouts as reasons to abort the ongoing transaction.
      • TransactionAbortableException: Specifically introduced to signal errors that should lead to transaction abortion, ensuring this exception is properly handled is critical for maintaining the integrity of transactional processing.
      • To ensure seamless operation and compatibility with future Kafka versions, developers are encouraged to update their error-handling logic to treat both exceptions as triggers for aborting transactions, as sketched in the example after this list. This approach is pivotal for preserving exactly-once semantics.
      • See KIP-890 and KIP-1050 for more details.
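The following sketch illustrates this error-handling pattern. It assumes a KafkaProducer configured with a transactional.id (the topic name and record are placeholders); imports and the surrounding method are omitted:

producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(new ProducerRecord<>("payments", "Alice", "Made a payment of $200 to Bob"));
    producer.commitTransaction();
} catch (TimeoutException | TransactionAbortableException e) {
    // Both signal that the ongoing transaction should be aborted (and may then be retried).
    producer.abortTransaction();
} catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
    // Fatal errors: the producer cannot continue and must be closed.
    producer.close();
}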

Upgrading to 3.9.0 and older versions

See Upgrading From Previous Versions in the 3.9 documentation.

1.6 - Docker

JVM Based Apache Kafka Docker Image

Docker is a popular container runtime. Docker images for the JVM based Apache Kafka can be found on Docker Hub and are available from version 3.7.0.

The Docker image can be pulled from Docker Hub using the following command:

$ docker pull apache/kafka:4.0.0

If you want to fetch the latest version of the Docker image, use the following command:

$ docker pull apache/kafka:latest

To start the Kafka container using this Docker image with default configs and on default port 9092:

$ docker run -p 9092:9092 apache/kafka:4.0.0

GraalVM Based Native Apache Kafka Docker Image

Docker images for the GraalVM Based Native Apache Kafka can be found on Docker Hub and are available from version 3.8.0.
NOTE: This image is experimental and intended for local development and testing purposes only; it is not recommended for production use.

The Docker image can be pulled from Docker Hub using the following command:

$ docker pull apache/kafka-native:4.0.0

If you want to fetch the latest version of the Docker image, use the following command:

$ docker pull apache/kafka-native:latest

To start the Kafka container using this Docker image with default configs and on default port 9092:

$ docker run -p 9092:9092 apache/kafka-native:4.0.0

Usage guide

Detailed instructions for using the Docker image are available here.

2 - APIs

2.1 - API

Kafka includes five core APIs:

  1. The Producer API allows applications to send streams of data to topics in the Kafka cluster.
  2. The Consumer API allows applications to read streams of data from topics in the Kafka cluster.
  3. The Streams API allows transforming streams of data from input topics to output topics.
  4. The Connect API allows implementing connectors that continually pull from some source system or application into Kafka or push from Kafka into some sink system or application.
  5. The Admin API allows managing and inspecting topics, brokers, and other Kafka objects.

Kafka exposes all its functionality over a language-independent protocol which has clients available in many programming languages. However, only the Java clients are maintained as part of the main Kafka project; the others are available as independent open source projects. A list of non-Java clients is available here.

Producer API

The Producer API allows applications to send streams of data to topics in the Kafka cluster.

Examples showing how to use the producer are given in the javadocs.

To use the producer, you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>4.0.0</version>
</dependency>
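For illustration, here is a minimal sketch of a producer that writes a single record; the broker address, topic name, key, and value are assumptions for this example:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class QuickstartProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous, so flush before exiting this short-lived program.
            producer.send(new ProducerRecord<>("quickstart-events", "key", "hello kafka"));
            producer.flush();
        }
    }
}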

Consumer API

The Consumer API allows applications to read streams of data from topics in the Kafka cluster.

Examples showing how to use the consumer are given in the javadocs.

To use the consumer, you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>4.0.0</version>
</dependency>
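For illustration, here is a minimal sketch of a consumer that subscribes to a topic and prints the records it receives; the broker address, group id, and topic name are assumptions for this example:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class QuickstartConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "quickstart-group");          // assumed consumer group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("quickstart-events"));
            while (true) {
                // Poll for new records and print each one.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}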

Streams API

The Streams API allows transforming streams of data from input topics to output topics.

Examples showing how to use this library are given in the javadocs.

Additional documentation on using the Streams API is available here.

To use Kafka Streams you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-streams</artifactId>
	<version>4.0.0</version>
</dependency>

When using Scala you may optionally include the kafka-streams-scala library. Additional documentation on using the Kafka Streams DSL for Scala is available in the developer guide.

To use Kafka Streams DSL for Scala 2.13 you can use the following maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-streams-scala_2.13</artifactId>
	<version>4.0.0</version>
</dependency>

Connect API

The Connect API allows implementing connectors that continually pull from some source data system into Kafka or push from Kafka into some sink data system.

Many users of Connect won’t need to use this API directly; they can use pre-built connectors without needing to write any code. Additional information on using Connect is available here.

Those who want to implement custom connectors can see the javadoc.
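As a purely hypothetical illustration of the shape of this API (not a real connector), here is a sketch of a SourceTask that emits one record per poll, using the interfaces from the org.apache.kafka:connect-api artifact. A complete connector would also implement SourceConnector, declare its configuration, and be packaged as a Connect plugin; the class name and configuration key below are invented for this example:

import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// A hypothetical source task that emits a single greeting record per poll.
// Real tasks would read from an external system and track their progress in the source offset map.
public class GreetingSourceTask extends SourceTask {
    private String topic;

    @Override
    public String version() {
        return "0.0.1";
    }

    @Override
    public void start(Map<String, String> props) {
        topic = props.getOrDefault("topic", "greetings");  // assumed configuration key
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        Thread.sleep(1000);  // avoid busy-looping in this toy example
        return List.of(new SourceRecord(
                Map.of("source", "greeting"),  // source partition
                Map.of("position", 0L),        // source offset
                topic,
                Schema.STRING_SCHEMA,
                "hello from Kafka Connect"));
    }

    @Override
    public void stop() {
    }
}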

Admin API

The Admin API supports managing and inspecting topics, brokers, acls, and other Kafka objects.

To use the Admin API, add the following Maven dependency:

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>4.0.0</version>
</dependency>

For more information about the Admin APIs, see the javadoc.
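For illustration, here is a minimal sketch that lists the topics in a cluster; the broker address is an assumption for this example:

import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ListTopics {
    public static void main(String[] args) throws Exception {
        // Assumed broker address for this example.
        try (Admin admin = Admin.create(Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
            Set<String> topics = admin.listTopics().names().get();
            topics.forEach(System.out::println);
        }
    }
}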

3 - Configuration

3.1 - Configuration

Kafka uses key-value pairs in the property file format for configuration. These values can be supplied either from a file or programmatically.

Broker Configs

The essential configurations are the following:

  • node.id
  • log.dirs
  • process.roles
  • controller.quorum.bootstrap.servers

Topic-level configurations and defaults are discussed in more detail below.
    • node.id

      The node ID associated with the roles this process is playing when process.roles is non-empty. This is required configuration when running in KRaft mode.

      Type:int
      Default:
      Valid Values:[0,...]
      Importance:high
      Update Mode:read-only
    • process.roles

      The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both.

      Type:list
      Default:
      Valid Values:[broker, controller]
      Importance:high
      Update Mode:read-only
    • add.partitions.to.txn.retry.backoff.max.ms

      The maximum allowed timeout for adding partitions to transactions on the server side. It only applies to the actual add partition operations, not the verification. It will not be effective if it is larger than request.timeout.ms

      Type:int
      Default:100
      Valid Values:[0,...]
      Importance:high
      Update Mode:read-only
    • add.partitions.to.txn.retry.backoff.ms

      The server-side retry backoff when the server attempts to add the partition to the transaction

      Type:int
      Default:20
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • advertised.listeners

      Specifies the listener addresses that the Kafka brokers will advertise to clients and other brokers. The config is useful where the actual listener configuration listeners does not represent the addresses that clients should use to connect, such as in cloud environments. The addresses are published to and managed by the controller, the brokers pull these data from the controller as needed. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address.
      Also unlike listeners, there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used.

      Type:string
      Default:null
      Valid Values:
      Importance:high
      Update Mode:read-only
    • auto.create.topics.enable

      Enable auto creation of topic on the server.

      Type:boolean
      Default:true
      Valid Values:
      Importance:high
      Update Mode:read-only
    • auto.leader.rebalance.enable

      Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds. If the leader is imbalanced, leader rebalance to the preferred leader for partitions is triggered.

      Type:boolean
      Default:true
      Valid Values:
      Importance:high
      Update Mode:read-only
    • background.threads

      The number of threads to use for various background processing tasks

      Type:int
      Default:10
      Valid Values:[1,...]
      Importance:high
      Update Mode:cluster-wide
    • broker.id

      The broker id for this server.

      Type:int
      Default:-1
      Valid Values:
      Importance:high
      Update Mode:read-only
    • compression.type

      Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.

      Type:string
      Default:producer
      Valid Values:[uncompressed, zstd, lz4, snappy, gzip, producer]
      Importance:high
      Update Mode:cluster-wide
    • controller.listener.names

      A comma-separated list of the names of the listeners used by the controller. This is required when communicating with the controller quorum; the broker will always use the first listener in this list.

      Type:string
      Default:null
      Valid Values:
      Importance:high
      Update Mode:read-only
    • controller.quorum.bootstrap.servers

      List of endpoints to use for bootstrapping the cluster metadata. The endpoints are specified in comma-separated list of {host}:{port} entries. For example: localhost:9092,localhost:9093,localhost:9094.

      Type:list
      Default:""
      Valid Values:non-empty list
      Importance:high
      Update Mode:read-only
    • controller.quorum.election.backoff.max.ms

      Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps prevent gridlocked elections

      Type:int
      Default:1000 (1 second)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • controller.quorum.election.timeout.ms

      Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election

      Type:int
      Default:1000 (1 second)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • controller.quorum.fetch.timeout.ms

      Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; Maximum time a leader can go without receiving valid fetch or fetchSnapshot request from a majority of the quorum before resigning.

      Type:int
      Default:2000 (2 seconds)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • controller.quorum.voters

      Map of id/endpoint information for the set of voters in a comma-separated list of {id}@{host}:{port} entries. For example: 1@localhost:9092,2@localhost:9093,3@localhost:9094

      Type:list
      Default:""
      Valid Values:non-empty list
      Importance:high
      Update Mode:read-only
    • delete.topic.enable

      Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off

      Type:boolean
      Default:true
      Valid Values:
      Importance:high
      Update Mode:read-only
    • early.start.listeners

      A comma-separated list of listener names which may be started before the authorizer has finished initialization. This is useful when the authorizer is dependent on the cluster itself for bootstrapping, as is the case for the StandardAuthorizer (which stores ACLs in the metadata log.) By default, all listeners included in controller.listener.names will also be early start listeners. A listener should not appear in this list if it accepts external traffic.

      Type:string
      Default:null
      Valid Values:
      Importance:high
      Update Mode:read-only
    • group.coordinator.threads

      The number of threads used by the group coordinator.

      Type:int
      Default:4
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • leader.imbalance.check.interval.seconds

      The frequency with which the partition rebalance check is triggered by the controller

      Type:long
      Default:300
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • listeners

      Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set.
      Listener names and port numbers must be unique unless one listener is an IPv4 address and the other listener is an IPv6 address (for the same port).
      Specify hostname as 0.0.0.0 to bind to all interfaces.
      Leave hostname empty to bind to default interface.
      Examples of legal listener lists:
      PLAINTEXT://myhost:9092,SSL://:9091
      CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093
      PLAINTEXT://127.0.0.1:9092,SSL://[::1]:9092

      Type:string
      Default:PLAINTEXT://:9092
      Valid Values:
      Importance:high
      Update Mode:per-broker
    • log.dir

      The directory in which the log data is kept (supplemental for log.dirs property)

      Type:string
      Default:/tmp/kafka-logs
      Valid Values:
      Importance:high
      Update Mode:read-only
    • log.dirs

      A comma-separated list of the directories where the log data is stored. If not set, the value in log.dir is used.

      Type:string
      Default:null
      Valid Values:
      Importance:high
      Update Mode:read-only
    • log.flush.interval.messages

      The number of messages accumulated on a log partition before messages are flushed to disk.

      Type:long
      Default:9223372036854775807
      Valid Values:[1,...]
      Importance:high
      Update Mode:cluster-wide
    • log.flush.interval.ms

      The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used

      Type:long
      Default:null
      Valid Values:
      Importance:high
      Update Mode:cluster-wide
    • log.flush.offset.checkpoint.interval.ms

      The frequency with which we update the persistent record of the last flush which acts as the log recovery point.

      Type:int
      Default:60000 (1 minute)
      Valid Values:[0,...]
      Importance:high
      Update Mode:read-only
    • log.flush.scheduler.interval.ms

      The frequency in ms that the log flusher checks whether any log needs to be flushed to disk

      Type:long
      Default:9223372036854775807
      Valid Values:
      Importance:high
      Update Mode:read-only
    • log.flush.start.offset.checkpoint.interval.ms

      The frequency with which we update the persistent record of log start offset

      Type:int
      Default:60000 (1 minute)
      Valid Values:[0,...]
      Importance:high
      Update Mode:read-only
    • log.retention.bytes

      The maximum size of the log before deleting it

      Type:long
      Default:-1
      Valid Values:
      Importance:high
      Update Mode:cluster-wide
    • log.retention.hours

      The number of hours to keep a log file before deleting it; tertiary to the log.retention.ms and log.retention.minutes properties

      Type:int
      Default:168
      Valid Values:
      Importance:high
      Update Mode:read-only
    • log.retention.minutes

      The number of minutes to keep a log file before deleting it; secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used

      Type:int
      Default:null
      Valid Values:
      Importance:high
      Update Mode:read-only
    • log.retention.ms

      The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.

      Type:long
      Default:null
      Valid Values:
      Importance:high
      Update Mode:cluster-wide
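
      To illustrate the precedence among the three retention settings (the value below is hypothetical): log.retention.ms overrides log.retention.minutes, which overrides log.retention.hours, so setting only the following keeps data for roughly three days regardless of the log.retention.hours default of 168:
      # 3 days in milliseconds; takes precedence over log.retention.minutes and log.retention.hours
      log.retention.ms=259200000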
    • log.roll.hours

      The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property

      Type:int
      Default:168
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • log.roll.jitter.hours

      The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property

      Type:int
      Default:0
      Valid Values:[0,...]
      Importance:high
      Update Mode:read-only
    • log.roll.jitter.ms

      The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used

      Type:long
      Default:null
      Valid Values:
      Importance:high
      Update Mode:cluster-wide
    • log.roll.ms

      The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used

      Type:long
      Default:null
      Valid Values:
      Importance:high
      Update Mode:cluster-wide
    • log.segment.bytes

      The maximum size of a single log file

      Type:int
      Default:1073741824 (1 gibibyte)
      Valid Values:[1048576,...]
      Importance:high
      Update Mode:cluster-wide
    • log.segment.delete.delay.ms

      The amount of time to wait before deleting a file from the filesystem. If the value is 0 and there is no file to delete, the system will wait 1 millisecond. A low value will cause busy waiting

      Type:long
      Default:60000 (1 minute)
      Valid Values:[0,...]
      Importance:high
      Update Mode:cluster-wide
    • message.max.bytes

      The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.

      Type:int
      Default:1048588
      Valid Values:[0,...]
      Importance:high
      Update Mode:cluster-wide
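
      A hedged example (the size is arbitrary) of raising the broker-wide batch limit to 2 MiB; individual topics can still override this with the topic-level max.message.bytes config, and consumer fetch sizes may need to grow accordingly:
      # Broker-wide cap on record batch size (after compression, if compression is enabled)
      message.max.bytes=2097152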
    • metadata.log.dir

      This configuration determines where we put the metadata log. If it is not set, the metadata log is placed in the first log directory from log.dirs.

      Type:string
      Default:null
      Valid Values:
      Importance:high
      Update Mode:read-only
    • metadata.log.max.record.bytes.between.snapshots

      This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new snapshot. The default value is 20971520. To generate snapshots based on the time elapsed, see the metadata.log.max.snapshot.interval.ms configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.

      Type:long
      Default:20971520
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • metadata.log.max.snapshot.interval.ms

      This is the maximum number of milliseconds to wait to generate a snapshot if there are committed records in the log that are not included in the latest snapshot. A value of zero disables time based snapshot generation. The default value is 3600000. To generate snapshots based on the number of metadata bytes, see the metadata.log.max.record.bytes.between.snapshots configuration. The Kafka node will generate a snapshot when either the maximum time interval is reached or the maximum bytes limit is reached.

      Type:long
      Default:3600000 (1 hour)
      Valid Values:[0,...]
      Importance:high
      Update Mode:read-only
    • metadata.log.segment.bytes

      The maximum size of a single metadata log file.

      Type:int
      Default:1073741824 (1 gibibyte)
      Valid Values:[12,...]
      Importance:high
      Update Mode:read-only
    • metadata.log.segment.ms

      The maximum time before a new metadata log file is rolled out (in milliseconds).

      Type:long
      Default:604800000 (7 days)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • metadata.max.retention.bytes

      The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

      Type:long
      Default:104857600 (100 mebibytes)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • metadata.max.retention.ms

      The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit.

      Type:long
      Default:604800000 (7 days)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • min.insync.replicas

      When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
      Regardless of the acks setting, the messages will not be visible to the consumers until they are replicated to all in-sync replicas and the min.insync.replicas condition is met.
      When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that a majority of replicas must persist a write before it's considered successful by the producer and it's visible to consumers.

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:high
      Update Mode:cluster-wide
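
      A minimal sketch of the durability scenario described above, assuming a three-broker cluster; the producer-side acks=all setting is a client config and appears here only as a comment:
      # Topics created without an explicit replication factor get 3 replicas
      default.replication.factor=3
      # At least 2 in-sync replicas must acknowledge a write when the producer uses acks=all
      min.insync.replicas=2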
    • num.io.threads

      The number of threads that the server uses for processing requests, which may include disk I/O

      Type:int
      Default:8
      Valid Values:[1,...]
      Importance:high
      Update Mode:cluster-wide
    • num.network.threads

      The number of threads that the server uses for receiving requests from the network and sending responses to the network. Note: each listener (except for the controller listener) creates its own thread pool.

      Type:int
      Default:3
      Valid Values:[1,...]
      Importance:high
      Update Mode:cluster-wide
    • num.recovery.threads.per.data.dir

      The number of threads per data directory to be used for log recovery at startup and flushing at shutdown

      Type:int
      Default:2
      Valid Values:[1,...]
      Importance:high
      Update Mode:cluster-wide
    • num.replica.alter.log.dirs.threads

      The number of threads that can move replicas between log directories, which may include disk I/O. The default value is equal to the number of directories specified in the log.dir or log.dirs configuration property.

      Type:int
      Default:null
      Valid Values:
      Importance:high
      Update Mode:read-only
    • num.replica.fetchers

      Number of fetcher threads used to replicate records from each source broker. The total number of fetchers on each broker is bound by num.replica.fetchers multiplied by the number of brokers in the cluster. Increasing this value can increase the degree of I/O parallelism in the follower and leader broker at the cost of higher CPU and memory utilization.

      Type:int
      Default:1
      Valid Values:
      Importance:high
      Update Mode:cluster-wide
    • offset.metadata.max.bytes

      The maximum size for a metadata entry associated with an offset commit.

      Type:int
      Default:4096 (4 kibibytes)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • offsets.commit.timeout.ms

      Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. This is applied to all the writes made by the coordinator.

      Type:int
      Default:5000 (5 seconds)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • offsets.load.buffer.size

      Batch size for reading from the offsets segments when loading group metadata into the cache (soft-limit, overridden if records are too large).

      Type:int
      Default:5242880
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • offsets.retention.check.interval.ms

      Frequency at which to check for stale offsets

      Type:long
      Default:600000 (10 minutes)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • offsets.retention.minutes

      For subscribed consumers, committed offset of a specific partition will be expired and discarded when 1) this retention period has elapsed after the consumer group loses all its consumers (i.e. becomes empty); 2) this retention period has elapsed since the last time an offset is committed for the partition and the group is no longer subscribed to the corresponding topic. For standalone consumers (using manual assignment), offsets will be expired after this retention period has elapsed since the time of last commit. Note that when a group is deleted via the delete-group request, its committed offsets will also be deleted without extra retention period; also when a topic is deleted via the delete-topic request, upon propagated metadata update any group's committed offsets for that topic will also be deleted without extra retention period.

      Type:int
      Default:10080
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • offsets.topic.compression.codec

      Compression codec for the offsets topic - compression may be used to achieve "atomic" commits.

      Type:int
      Default:0
      Valid Values:
      Importance:high
      Update Mode:read-only
    • offsets.topic.num.partitions

      The number of partitions for the offset commit topic (should not change after deployment).

      Type:int
      Default:50
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • offsets.topic.replication.factor

      The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

      Type:short
      Default:3
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • offsets.topic.segment.bytes

      The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads.

      Type:int
      Default:104857600 (100 mebibytes)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • queued.max.requests

      The number of queued requests allowed for the data plane before blocking the network threads

      Type:int
      Default:500
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • replica.fetch.min.bytes

      Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config).

      Type:int
      Default:1
      Valid Values:
      Importance:high
      Update Mode:read-only
    • replica.fetch.wait.max.ms

      The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics

      Type:int
      Default:500
      Valid Values:
      Importance:high
      Update Mode:read-only
    • replica.high.watermark.checkpoint.interval.ms

      The frequency with which the high watermark is saved out to disk

      Type:long
      Default:5000 (5 seconds)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • replica.lag.time.max.ms

      If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from ISR

      Type:long
      Default:30000 (30 seconds)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • replica.socket.receive.buffer.bytes

      The socket receive buffer for network requests to the leader for replicating data

      Type:int
      Default:65536 (64 kibibytes)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • replica.socket.timeout.ms

      The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms

      Type:int
      Default:30000 (30 seconds)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • request.timeout.ms

      The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

      Type:int
      Default:30000 (30 seconds)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • sasl.mechanism.controller.protocol

      SASL mechanism used for communication with controllers. Default is GSSAPI.

      Type:string
      Default:GSSAPI
      Valid Values:
      Importance:high
      Update Mode:read-only
    • share.coordinator.load.buffer.size

      Batch size for reading from the share-group state topic when loading state information into the cache (soft-limit, overridden if records are too large).

      Type:int
      Default:5242880
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • share.coordinator.state.topic.compression.codec

      Compression codec for the share-group state topic.

      Type:int
      Default:0
      Valid Values:
      Importance:high
      Update Mode:read-only
    • share.coordinator.state.topic.min.isr

      Overridden min.insync.replicas for the share-group state topic.

      Type:short
      Default:2
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • share.coordinator.state.topic.num.partitions

      The number of partitions for the share-group state topic (should not change after deployment).

      Type:int
      Default:50
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • share.coordinator.state.topic.replication.factor

      Replication factor for the share-group state topic. Topic creation will fail until the cluster size meets this replication factor requirement.

      Type:short
      Default:3
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • share.coordinator.state.topic.segment.bytes

      The log segment size for the share-group state topic.

      Type:int
      Default:104857600 (100 mebibytes)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • share.coordinator.write.timeout.ms

      The duration in milliseconds that the share coordinator will wait for all replicas of the share-group state topic to receive a write.

      Type:int
      Default:5000 (5 seconds)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • socket.receive.buffer.bytes

      The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

      Type:int
      Default:102400 (100 kibibytes)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • socket.request.max.bytes

      The maximum number of bytes in a socket request

      Type:int
      Default:104857600 (100 mebibytes)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • socket.send.buffer.bytes

      The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.

      Type:int
      Default:102400 (100 kibibytes)
      Valid Values:
      Importance:high
      Update Mode:read-only
    • transaction.max.timeout.ms

      The maximum allowed timeout for transactions. If a client's requested transaction timeout exceeds this, the broker will return an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers reading from topics included in the transaction.

      Type:int
      Default:900000 (15 minutes)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • transaction.state.log.load.buffer.size

      Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large).

      Type:int
      Default:5242880
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • transaction.state.log.min.isr

      The minimum number of replicas that must acknowledge a write to transaction topic in order to be considered successful.

      Type:int
      Default:2
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • transaction.state.log.num.partitions

      The number of partitions for the transaction topic (should not change after deployment).

      Type:int
      Default:50
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • transaction.state.log.replication.factor

      The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.

      Type:short
      Default:3
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • transaction.state.log.segment.bytes

      The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads

      Type:int
      Default:104857600 (100 mebibytes)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • transactional.id.expiration.ms

      The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. Transactional IDs will not expire while the transaction is still ongoing.

      Type:int
      Default:604800000 (7 days)
      Valid Values:[1,...]
      Importance:high
      Update Mode:read-only
    • unclean.leader.election.enable

      Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss

      Note: In KRaft mode, when enabling this config dynamically, it needs to wait for the unclean leader election thread to trigger election periodically (default is 5 minutes). Please run `kafka-leader-election.sh` with `unclean` option to trigger the unclean leader election immediately if needed.

      Type:boolean
      Default:false
      Valid Values:
      Importance:high
      Update Mode:cluster-wide
    • broker.heartbeat.interval.ms

      The length of time in milliseconds between broker heartbeats.

      Type:int
      Default:2000 (2 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • broker.rack

      Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: RACK1, us-east-1d

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • broker.session.timeout.ms

      The length of time in milliseconds that a broker lease lasts if no heartbeats are made.

      Type:int
      Default:9000 (9 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • compression.gzip.level

      The compression level to use if compression.type is set to 'gzip'.

      Type:int
      Default:-1
      Valid Values:[1,...,9] or -1
      Importance:medium
      Update Mode:cluster-wide
    • compression.lz4.level

      The compression level to use if compression.type is set to 'lz4'.

      Type:int
      Default:9
      Valid Values:[1,...,17]
      Importance:medium
      Update Mode:cluster-wide
    • compression.zstd.level

      The compression level to use if compression.type is set to 'zstd'.

      Type:int
      Default:3
      Valid Values:[-131072,...,22]
      Importance:medium
      Update Mode:cluster-wide
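
      For example (the level is chosen arbitrarily), the level configs only take effect when the broker-side compression.type selects the matching codec; a hedged fragment:
      # Recompress on the broker with gzip at a mid-range level instead of the default (-1)
      compression.type=gzip
      compression.gzip.level=6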
    • connections.max.idle.ms

      Idle connections timeout: the server socket processor threads close connections that have been idle for longer than this

      Type:long
      Default:600000 (10 minutes)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • connections.max.reauth.ms

      When explicitly set to a positive number (the default is 0, which disables re-authentication), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000

      Type:long
      Default:0
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • controlled.shutdown.enable

      Enable controlled shutdown of the server.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • controller.quorum.append.linger.ms

      The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk.

      Type:int
      Default:25
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • controller.quorum.request.timeout.ms

      The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

      Type:int
      Default:2000 (2 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • controller.socket.timeout.ms

      The socket timeout for controller-to-broker channels.

      Type:int
      Default:30000 (30 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • default.replication.factor

      The replication factor for automatically created topics, and for topics created with -1 as the replication factor

      Type:int
      Default:1
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • delegation.token.expiry.time.ms

      The token validity time in milliseconds before the token needs to be renewed. Default value 1 day.

      Type:long
      Default:86400000 (1 day)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • delegation.token.max.lifetime.ms

      The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days.

      Type:long
      Default:604800000 (7 days)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • delegation.token.secret.key

      Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If using Kafka with KRaft, the key must also be set across all controllers. If the key is not set or set to an empty string, brokers will disable delegation token support.

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • delete.records.purgatory.purge.interval.requests

      The purge interval (in number of requests) of the delete records request purgatory

      Type:int
      Default:1
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • fetch.max.bytes

      The maximum number of bytes we will return for a fetch request. Must be at least 1024.

      Type:int
      Default:57671680 (55 mebibytes)
      Valid Values:[1024,...]
      Importance:medium
      Update Mode:read-only
    • fetch.purgatory.purge.interval.requests

      The purge interval (in number of requests) of the fetch request purgatory

      Type:int
      Default:1000
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • group.consumer.assignors

      The server-side assignors as a list of either names for built-in assignors or full class names for custom assignors. The first one in the list is considered the default assignor to be used in the case where the consumer does not specify an assignor. The supported built-in assignors are: uniform, range.

      Type:list
      Default:uniform,range
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • group.consumer.heartbeat.interval.ms

      The heartbeat interval given to the members of a consumer group.

      Type:int
      Default:5000 (5 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.consumer.max.heartbeat.interval.ms

      The maximum heartbeat interval for registered consumers.

      Type:int
      Default:15000 (15 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.consumer.max.session.timeout.ms

      The maximum allowed session timeout for registered consumers.

      Type:int
      Default:60000 (1 minute)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.consumer.max.size

      The maximum number of consumers that a single consumer group can accommodate. This value will only impact groups under the CONSUMER group protocol. To configure the max group size when using the CLASSIC group protocol use group.max.size instead.

      Type:int
      Default:2147483647
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.consumer.migration.policy

      The config that enables converting a non-empty classic group using the consumer embedded protocol to a non-empty consumer group using the consumer group protocol and vice versa; conversions of empty groups in both directions are always enabled regardless of this policy. bidirectional: both upgrade from classic group to consumer group and downgrade from consumer group to classic group are enabled; upgrade: only upgrade from classic group to consumer group is enabled; downgrade: only downgrade from consumer group to classic group is enabled; disabled: neither upgrade nor downgrade is enabled.

      Type:string
      Default:bidirectional
      Valid Values:(case insensitive) [DISABLED, DOWNGRADE, UPGRADE, BIDIRECTIONAL]
      Importance:medium
      Update Mode:read-only
    • group.consumer.min.heartbeat.interval.ms

      The minimum heartbeat interval for registered consumers.

      Type:int
      Default:5000 (5 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.consumer.min.session.timeout.ms

      The minimum allowed session timeout for registered consumers.

      Type:int
      Default:45000 (45 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.consumer.session.timeout.ms

      The timeout to detect client failures when using the consumer group protocol.

      Type:int
      Default:45000 (45 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.coordinator.append.linger.ms

      The duration in milliseconds that the coordinator will wait for writes to accumulate before flushing them to disk. Transactional writes are not accumulated.

      Type:int
      Default:5
      Valid Values:[0,...]
      Importance:medium
      Update Mode:read-only
    • group.coordinator.rebalance.protocols

      The list of enabled rebalance protocols. The share rebalance protocol is in early access and therefore must not be used in production.

      Type:list
      Default:classic,consumer
      Valid Values:[consumer, classic, share]
      Importance:medium
      Update Mode:read-only
    • group.initial.rebalance.delay.ms

      The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.

      Type:int
      Default:3000 (3 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • group.max.session.timeout.ms

      The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.

      Type:int
      Default:1800000 (30 minutes)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • group.max.size

      The maximum number of consumers that a single consumer group can accommodate.

      Type:int
      Default:2147483647
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.min.session.timeout.ms

      The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.

      Type:int
      Default:6000 (6 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • group.share.delivery.count.limit

      The maximum number of delivery attempts for a record delivered to a share group.

      Type:int
      Default:5
      Valid Values:[2,...,10]
      Importance:medium
      Update Mode:read-only
    • group.share.heartbeat.interval.ms

      The heartbeat interval given to the members of a share group.

      Type:int
      Default:5000 (5 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.share.max.groups

      The maximum number of share groups.

      Type:short
      Default:10
      Valid Values:[1,...,100]
      Importance:medium
      Update Mode:read-only
    • group.share.max.heartbeat.interval.ms

      The maximum heartbeat interval for share group members.

      Type:int
      Default:15000 (15 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.share.max.record.lock.duration.ms

      The record acquisition lock maximum duration in milliseconds for share groups.

      Type:int
      Default:60000 (1 minute)
      Valid Values:[30000,...,3600000]
      Importance:medium
      Update Mode:read-only
    • group.share.max.session.timeout.ms

      The maximum allowed session timeout for share group members.

      Type:int
      Default:60000 (1 minute)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.share.max.size

      The maximum number of members that a single share group can accommodate.

      Type:int
      Default:200
      Valid Values:[1,...,1000]
      Importance:medium
      Update Mode:read-only
    • group.share.min.heartbeat.interval.ms

      The minimum heartbeat interval for share group members.

      Type:int
      Default:5000 (5 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.share.min.record.lock.duration.ms

      The record acquisition lock minimum duration in milliseconds for share groups.

      Type:int
      Default:15000 (15 seconds)
      Valid Values:[1000,...,30000]
      Importance:medium
      Update Mode:read-only
    • group.share.min.session.timeout.ms

      The minimum allowed session timeout for share group members.

      Type:int
      Default:45000 (45 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • group.share.partition.max.record.locks

      Share-group record lock limit per share-partition.

      Type:int
      Default:200
      Valid Values:[100,...,10000]
      Importance:medium
      Update Mode:read-only
    • group.share.record.lock.duration.ms

      The record acquisition lock duration in milliseconds for share groups.

      Type:int
      Default:30000 (30 seconds)
      Valid Values:[1000,...,3600000]
      Importance:medium
      Update Mode:read-only
    • group.share.session.timeout.ms

      The timeout to detect client failures when using the share group protocol.

      Type:int
      Default:45000 (45 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • initial.broker.registration.timeout.ms

      When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the broker process.

      Type:int
      Default:60000 (1 minute)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • inter.broker.listener.name

      Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
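
      A hedged sketch (listener names, protocols, and ports are hypothetical) of how inter.broker.listener.name ties into listeners and listener.security.protocol.map; security.inter.broker.protocol must be left unset in this case:
      listeners=INTERNAL://:9092,EXTERNAL://:9094
      listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
      inter.broker.listener.name=INTERNAL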
    • log.cleaner.backoff.ms

      The amount of time to sleep when there are no logs to clean

      Type:long
      Default:15000 (15 seconds)
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.dedupe.buffer.size

      The total memory used for log deduplication across all cleaner threads

      Type:long
      Default:134217728
      Valid Values:
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.delete.retention.ms

      The amount of time to retain tombstone message markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise tombstone messages may be collected before a consumer completes their scan).

      Type:long
      Default:86400000 (1 day)
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.enable

      Enable the log cleaner process to run on the server. Should be enabled if using any topics with cleanup.policy=compact, including the internal offsets topic. If disabled, those topics will not be compacted and will continually grow in size.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • log.cleaner.io.buffer.load.factor

      Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions

      Type:double
      Default:0.9
      Valid Values:
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.io.buffer.size

      The total memory used for log cleaner I/O buffers across all cleaner threads

      Type:int
      Default:524288
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.io.max.bytes.per.second

      The log cleaner will be throttled so that the sum of its read and write I/O will be less than this value on average

      Type:double
      Default:1.7976931348623157E308
      Valid Values:
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.max.compaction.lag.ms

      The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.

      Type:long
      Default:9223372036854775807
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.min.cleanable.ratio

      The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period.

      Type:double
      Default:0.5
      Valid Values:[0,...,1]
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.min.compaction.lag.ms

      The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

      Type:long
      Default:0
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.cleaner.threads

      The number of background threads to use for log cleaning

      Type:int
      Default:1
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.cleanup.policy

      The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies.

      Type:list
      Default:delete
      Valid Values:[compact, delete]
      Importance:medium
      Update Mode:cluster-wide
    • log.index.interval.bytes

      The interval with which we add an entry to the offset index.

      Type:int
      Default:4096 (4 kibibytes)
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.index.size.max.bytes

      The maximum size in bytes of the offset index

      Type:int
      Default:10485760 (10 mebibytes)
      Valid Values:[4,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.local.retention.bytes

      The maximum size that local log segments can grow to for a partition before they become eligible for deletion. The default value of -2 means that the `log.retention.bytes` value is used. The effective value should always be less than or equal to the `log.retention.bytes` value.

      Type:long
      Default:-2
      Valid Values:[-2,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.local.retention.ms

      The number of milliseconds to keep local log segments before they become eligible for deletion. The default value of -2 means that the `log.retention.ms` value is used. The effective value should always be less than or equal to the `log.retention.ms` value.

      Type:long
      Default:-2
      Valid Values:[-2,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.message.timestamp.after.max.ms

      This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If log.message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime.

      Type:long
      Default:3600000 (1 hour)
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.message.timestamp.before.max.ms

      This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If log.message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime.

      Type:long
      Default:9223372036854775807
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • log.message.timestamp.type

      Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime.

      Type:string
      Default:CreateTime
      Valid Values:[CreateTime, LogAppendTime]
      Importance:medium
      Update Mode:cluster-wide
    • log.preallocate

      Whether to preallocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set it to true.

      Type:boolean
      Default:false
      Valid Values:
      Importance:medium
      Update Mode:cluster-wide
    • log.retention.check.interval.ms

      The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion

      Type:long
      Default:300000 (5 minutes)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • max.connection.creation.rate

      The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate. The broker-wide connection rate limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of the inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached.

      Type:int
      Default:2147483647
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • max.connections

      The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections. The broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if the broker-wide limit is reached. The least recently used connection on another listener will be closed in this case.

      Type:int
      Default:2147483647
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • max.connections.per.ip

      The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached.

      Type:int
      Default:2147483647
      Valid Values:[0,...]
      Importance:medium
      Update Mode:cluster-wide
    • max.connections.per.ip.overrides

      A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200"

      Type:string
      Default:""
      Valid Values:
      Importance:medium
      Update Mode:cluster-wide
    • max.incremental.fetch.session.cache.slots

      The maximum number of total incremental fetch sessions that we will maintain. FetchSessionCache is sharded into 8 shards and the limit is equally divided among all shards. Sessions are allocated to each shard in round-robin. Only entries within a shard are considered eligible for eviction.

      Type:int
      Default:1000
      Valid Values:[0,...]
      Importance:medium
      Update Mode:read-only
    • max.request.partition.size.limit

      The maximum number of partitions that can be served in one request.

      Type:int
      Default:2000
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • num.partitions

      The default number of log partitions per topic

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • principal.builder.class

      The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS. Note that custom implementations of KafkaPrincipalBuilder are required to implement the KafkaPrincipalSerde interface, otherwise brokers will not be able to forward requests to the controller.

      Type:class
      Default:org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • producer.purgatory.purge.interval.requests

      The purge interval (in number of requests) of the producer request purgatory

      Type:int
      Default:1000
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • queued.max.request.bytes

      The number of queued bytes allowed before no more requests are read

      Type:long
      Default:-1
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • remote.fetch.max.wait.ms

      The maximum amount of time the server will wait before answering the remote fetch request

      Type:int
      Default:500
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • remote.list.offsets.request.timeout.ms

      The maximum amount of time the server will wait for the remote list offsets request to complete.

      Type:long
      Default:30000 (30 seconds)
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • remote.log.manager.copier.thread.pool.size

      Size of the thread pool used in scheduling tasks to copy segments.

      Type:int
      Default:10
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • remote.log.manager.copy.max.bytes.per.second

      The maximum number of bytes that can be copied from local storage to remote storage per second. This is a global limit for all the partitions that are being copied from local storage to remote storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be copied per second.

      Type:long
      Default:9223372036854775807
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • remote.log.manager.copy.quota.window.num

      The number of samples to retain in memory for remote copy quota management. The default value is 11, which means there are 10 whole windows + 1 current window.

      Type:int
      Default:11
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • remote.log.manager.copy.quota.window.size.seconds

      The time span of each sample for remote copy quota management. The default value is 1 second.

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • remote.log.manager.expiration.thread.pool.size

      Size of the thread pool used in scheduling tasks to clean up the expired remote log segments.

      Type:int
      Default:10
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • remote.log.manager.fetch.max.bytes.per.second

      The maximum number of bytes that can be fetched from remote storage to local storage per second. This is a global limit for all the partitions that are being fetched from remote storage to local storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be fetched per second.

      Type:long
      Default:9223372036854775807
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • remote.log.manager.fetch.quota.window.num

      The number of samples to retain in memory for remote fetch quota management. The default value is 11, which means there are 10 whole windows + 1 current window.

      Type:int
      Default:11
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • remote.log.manager.fetch.quota.window.size.seconds

      The time span of each sample for remote fetch quota management. The default value is 1 second.

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • remote.log.manager.thread.pool.size

      Size of the thread pool used in scheduling follower tasks to read the highest-uploaded remote-offset for follower partitions.

      Type:int
      Default:2
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • remote.log.metadata.manager.class.name

      Fully qualified class name of `RemoteLogMetadataManager` implementation.

      Type:string
      Default:org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
      Valid Values:non-empty string
      Importance:medium
      Update Mode:read-only
    • remote.log.metadata.manager.class.path

      Class path of the `RemoteLogMetadataManager` implementation. If specified, the RemoteLogMetadataManager implementation and its dependent libraries will be loaded by a dedicated classloader which searches this class path before the Kafka broker class path. The syntax of this parameter is the same as the standard Java class path string.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • remote.log.metadata.manager.impl.prefix

      Prefix used for properties to be passed to RemoteLogMetadataManager implementation. For example this value can be `rlmm.config.`.

      Type:string
      Default:rlmm.config.
      Valid Values:non-empty string
      Importance:medium
      Update Mode:read-only
    • remote.log.metadata.manager.listener.name

      Listener name of the local broker that the RemoteLogMetadataManager implementation should connect to, if needed.

      Type:string
      Default:null
      Valid Values:non-empty string
      Importance:medium
      Update Mode:read-only
    • remote.log.reader.max.pending.tasks

      Maximum remote log reader thread pool task queue size. If the task queue is full, fetch requests are served with an error.

      Type:int
      Default:100
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • remote.log.reader.threads

      Size of the thread pool that is allocated for handling remote log reads.

      Type:int
      Default:10
      Valid Values:[1,...]
      Importance:medium
      Update Mode:cluster-wide
    • remote.log.storage.manager.class.name

      Fully qualified class name of `RemoteStorageManager` implementation.

      Type:string
      Default:null
      Valid Values:non-empty string
      Importance:medium
      Update Mode:read-only
    • remote.log.storage.manager.class.path

      Class path of the `RemoteStorageManager` implementation. If specified, the RemoteStorageManager implementation and its dependent libraries will be loaded by a dedicated classloader which searches this class path before the Kafka broker class path. The syntax of this parameter is the same as the standard Java class path string.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • remote.log.storage.manager.impl.prefix

      Prefix used for properties to be passed to RemoteStorageManager implementation. For example this value can be `rsm.config.`.

      Type:string
      Default:rsm.config.
      Valid Values:non-empty string
      Importance:medium
      Update Mode:read-only
    • remote.log.storage.system.enable

      Whether to enable tiered storage functionality in a broker. When it is true, the broker starts all the services required for tiered storage.

      Type:boolean
      Default:false
      Valid Values:
      Importance:medium
      Update Mode:read-only
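
      A hedged, minimal sketch of turning tiered storage on; the RemoteStorageManager class below is a hypothetical placeholder for whichever plugin is used, while the metadata manager shown is the built-in default:
      remote.log.storage.system.enable=true
      # Hypothetical plugin class; replace with the RemoteStorageManager implementation you deploy
      remote.log.storage.manager.class.name=com.example.tiered.ExampleRemoteStorageManager
      remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager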
    • replica.fetch.backoff.ms

      The amount of time to sleep when a partition fetch error occurs.

      Type:int
      Default:1000 (1 second)
      Valid Values:[0,...]
      Importance:medium
      Update Mode:read-only
    • replica.fetch.max.bytes

      The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

      Type:int
      Default:1048576 (1 mebibyte)
      Valid Values:[0,...]
      Importance:medium
      Update Mode:read-only
    • replica.fetch.response.max.bytes

      Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config).

      Type:int
      Default:10485760 (10 mebibytes)
      Valid Values:[0,...]
      Importance:medium
      Update Mode:read-only
    • replica.selector.class

      The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • sasl.client.callback.handler.class

      The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

      Type:class
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • sasl.enabled.mechanisms

      The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.

      Type:list
      Default:GSSAPI
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.jaas.config

      JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.kerberos.kinit.cmd

      Kerberos kinit command path.

      Type:string
      Default:/usr/bin/kinit
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.kerberos.min.time.before.relogin

      Login thread sleep time between refresh attempts.

      Type:long
      Default:60000
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.kerberos.principal.to.local.rules

      A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

      Type:list
      Default:DEFAULT
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.kerberos.service.name

      The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.kerberos.ticket.renew.jitter

      Percentage of random jitter added to the renewal time.

      Type:double
      Default:0.05
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.kerberos.ticket.renew.window.factor

      Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

      Type:double
      Default:0.8
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.login.callback.handler.class

      The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

      Type:class
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • sasl.login.class

      The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

      Type:class
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • sasl.login.refresh.buffer.seconds

      The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

      Type:short
      Default:300
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.login.refresh.min.period.seconds

      The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

      Type:short
      Default:60
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.login.refresh.window.factor

      Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

      Type:double
      Default:0.8
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.login.refresh.window.jitter

      The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

      Type:double
      Default:0.05
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.mechanism.inter.broker.protocol

      SASL mechanism used for inter-broker communication. Default is GSSAPI.

      Type:string
      Default:GSSAPI
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • sasl.oauthbearer.jwks.endpoint.url

      The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • sasl.oauthbearer.token.endpoint.url

      The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • sasl.server.callback.handler.class

      The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler.

      Type:class
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • sasl.server.max.receive.size

      The maximum receive size allowed before and during initial SASL authentication. The default receive size is 512KB. GSSAPI limits requests to 64K, but we allow up to 512KB by default for custom SASL mechanisms. In practice, PLAIN, SCRAM and OAUTH mechanisms can use much smaller limits.

      Type:int
      Default:524288
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • security.inter.broker.protocol

      Security protocol used to communicate between brokers. It is an error to set this and inter.broker.listener.name properties at the same time.

      Type:string
      Default:PLAINTEXT
      Valid Values:[PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL]
      Importance:medium
      Update Mode:read-only
    • share.coordinator.append.linger.ms

      The duration in milliseconds that the share coordinator will wait for writes to accumulate before flushing them to disk.

      Type:int
      Default:10
      Valid Values:[0,...]
      Importance:medium
      Update Mode:read-only
    • share.coordinator.snapshot.update.records.per.snapshot

      The number of update records the share coordinator writes between snapshot records.

      Type:int
      Default:500
      Valid Values:[0,...]
      Importance:medium
      Update Mode:read-only
    • share.coordinator.threads

      The number of threads used by the share coordinator.

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • share.fetch.purgatory.purge.interval.requests

      The purge interval (in number of requests) of the share fetch request purgatory

      Type:int
      Default:1000
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • socket.connection.setup.timeout.max.ms

      The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

      Type:long
      Default:30000 (30 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • socket.connection.setup.timeout.ms

      The amount of time the client will wait for the socket connection to be established. If the connection is not established before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value.

      Type:long
      Default:10000 (10 seconds)
      Valid Values:
      Importance:medium
      Update Mode:read-only
    • socket.listen.backlog.size

      The maximum number of pending connections on the socket. In Linux, you may also need to configure the somaxconn and tcp_max_syn_backlog kernel parameters accordingly for the configuration to take effect.

      Type:int
      Default:50
      Valid Values:[1,...]
      Importance:medium
      Update Mode:read-only
    • ssl.cipher.suites

      A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

      Type:list
      Default:""
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.client.auth

      Configures the Kafka broker to request client authentication. The following settings are common:

      • ssl.client.auth=required If set to required, client authentication is required.
      • ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set a client can choose not to provide authentication information about itself.
      • ssl.client.auth=none This means client authentication is not needed.

      Type:string
      Default:none
      Valid Values:[required, requested, none]
      Importance:medium
      Update Mode:per-broker
    • ssl.enabled.protocols

      The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3'. This means that clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most use cases. Also see the config documentation for `ssl.protocol` to understand how it can impact the TLS version negotiation behavior.

      Type:list
      Default:TLSv1.2,TLSv1.3
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.key.password

      The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.keymanager.algorithm

      The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

      Type:string
      Default:SunX509
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.keystore.certificate.chain

      Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.keystore.key

      Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.keystore.location

      The location of the key store file. This is optional for clients and can be used for two-way client authentication.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.keystore.password

      The store password for the key store file. This is optional for clients and is only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.keystore.type

      The file format of the key store file. This is optional for clients. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

      Type:string
      Default:JKS
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.protocol

      The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3', which should be fine for most use cases. A typical alternative to the default is 'TLSv1.2'. Allowed values for this config are dependent on the JVM. Clients using the defaults for this config and 'ssl.enabled.protocols' will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', however, clients will not use 'TLSv1.3' even if it is one of the values in `ssl.enabled.protocols` and the server only supports 'TLSv1.3'.

      Type:string
      Default:TLSv1.3
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.provider

      The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.trustmanager.algorithm

      The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

      Type:string
      Default:PKIX
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.truststore.certificates

      Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.truststore.location

      The location of the trust store file.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.truststore.password

      The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

      Type:password
      Default:null
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • ssl.truststore.type

      The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

      Type:string
      Default:JKS
      Valid Values:
      Importance:medium
      Update Mode:per-broker
    • alter.config.policy.class.name

      The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface.

      Note: This policy runs on the controller instead of the broker.

      Type:class
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • alter.log.dirs.replication.quota.window.num

      The number of samples to retain in memory for alter log dirs replication quotas

      Type:int
      Default:11
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • alter.log.dirs.replication.quota.window.size.seconds

      The time span of each sample for alter log dirs replication quotas

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • authorizer.class.name

      The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization.

      Type:string
      Default:""
      Valid Values:non-null string
      Importance:low
      Update Mode:read-only
    • client.quota.callback.class

      The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied.

      Type:class
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • connection.failed.authentication.delay.ms

      Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout.

      Type:int
      Default:100
      Valid Values:[0,...]
      Importance:low
      Update Mode:read-only
    • controller.quorum.retry.backoff.ms

      The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.

      Type:int
      Default:20
      Valid Values:
      Importance:low
      Update Mode:read-only
    • controller.quota.window.num

      The number of samples to retain in memory for controller mutation quotas

      Type:int
      Default:11
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • controller.quota.window.size.seconds

      The time span of each sample for controller mutations quotas

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • create.topic.policy.class.name

      The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface.

      Note: This policy runs on the controller instead of the broker.

      Type:class
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • delegation.token.expiry.check.interval.ms

      Scan interval to remove expired delegation tokens.

      Type:long
      Default:3600000 (1 hour)
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • kafka.metrics.polling.interval.secs

      The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations.

      Type:int
      Default:10
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • kafka.metrics.reporters

      A list of classes to use as Yammer metrics custom reporters. The reporters should implement kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention.

      Type:list
      Default:""
      Valid Values:
      Importance:low
      Update Mode:read-only
    • listener.security.protocol.map

      Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: INTERNAL:SSL,EXTERNAL:SSL. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. ssl.keystore.location). Note that in KRaft a default mapping from the listener names defined by controller.listener.names to PLAINTEXT is assumed if no explicit mapping is provided and no other security protocol is in use.

      Type:string
      Default:SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT
      Valid Values:
      Importance:low
      Update Mode:per-broker
    • log.dir.failure.timeout.ms

      If the broker is unable to successfully communicate to the controller that some log directory has failed for longer than this time, the broker will fail and shut down.

      Type:long
      Default:30000 (30 seconds)
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • metadata.max.idle.interval.ms

      This configuration controls how often the active controller should write no-op records to the metadata partition. If the value is 0, no-op records are not appended to the metadata partition. The default value is 500

      Type:int
      Default:500
      Valid Values:[0,...]
      Importance:low
      Update Mode:read-only
    • metric.reporters

      A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.

      Type:list
      Default:org.apache.kafka.common.metrics.JmxReporter
      Valid Values:
      Importance:low
      Update Mode:cluster-wide
    • metrics.num.samples

      The number of samples maintained to compute metrics.

      Type:int
      Default:2
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • metrics.recording.level

      The highest recording level for metrics. It has three levels for recording metrics - info, debug, and trace.

      INFO level records only essential metrics necessary for monitoring system performance and health. It collects vital data without gathering too much detail, making it suitable for production environments where minimal overhead is desired.

      DEBUG level records most metrics, providing more detailed information about the system's operation. It's useful for development and testing environments where you need deeper insights to debug and fine-tune the application.

      TRACE level records all possible metrics, capturing every detail about the system's performance and operation. It's best for controlled environments where in-depth analysis is required, though it can introduce significant overhead.

      Type:string
      Default:INFO
      Valid Values:
      Importance:low
      Update Mode:read-only
    • metrics.sample.window.ms

      The window of time a metrics sample is computed over.

      Type:long
      Default:30000 (30 seconds)
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • producer.id.expiration.ms

      The time in ms that a topic partition leader will wait before expiring producer IDs. Producer IDs will not expire while a transaction associated to them is still ongoing. Note that producer IDs may expire sooner if the last write from the producer ID is deleted due to the topic's retention settings. Setting this value the same or higher than delivery.timeout.ms can help prevent expiration during retries and protect against message duplication, but the default should be reasonable for most use cases.

      Type:int
      Default:86400000 (1 day)
      Valid Values:[1,...]
      Importance:low
      Update Mode:cluster-wide
    • quota.window.num

      The number of samples to retain in memory for client quotas

      Type:int
      Default:11
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • quota.window.size.seconds

      The time span of each sample for client quotas

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • remote.log.index.file.cache.total.size.bytes

      The total size of the space allocated to store index files fetched from remote storage in the local storage.

      Type:long
      Default:1073741824 (1 gibibyte)
      Valid Values:[1,...]
      Importance:low
      Update Mode:cluster-wide
    • remote.log.manager.task.interval.ms

      Interval at which the remote log manager runs scheduled tasks like copying segments and cleaning up remote log segments.

      Type:long
      Default:30000 (30 seconds)
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • remote.log.metadata.custom.metadata.max.bytes

      The maximum size of custom metadata in bytes that the broker should accept from a remote storage plugin. If custom metadata exceeds this limit, the updated segment metadata will not be stored, an attempt will be made to delete the copied data, and the remote copying task for this topic-partition will stop with an error.

      Type:int
      Default:128
      Valid Values:[0,...]
      Importance:low
      Update Mode:read-only
    • replication.quota.window.num

      The number of samples to retain in memory for replication quotas

      Type:int
      Default:11
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • replication.quota.window.size.seconds

      The time span of each sample for replication quotas

      Type:int
      Default:1
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • sasl.login.connect.timeout.ms

      The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

      Type:int
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.login.read.timeout.ms

      The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

      Type:int
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.login.retry.backoff.max.ms

      The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

      Type:long
      Default:10000 (10 seconds)
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.login.retry.backoff.ms

      The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

      Type:long
      Default:100
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.clock.skew.seconds

      The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

      Type:int
      Default:30
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.expected.audience

      The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

      Type:list
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.expected.issuer

      The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

      Type:string
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.jwks.endpoint.refresh.ms

      The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

      Type:long
      Default:3600000 (1 hour)
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

      The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

      Type:long
      Default:10000 (10 seconds)
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

      The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

      Type:long
      Default:100
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.scope.claim.name

      The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

      Type:string
      Default:scope
      Valid Values:
      Importance:low
      Update Mode:read-only
    • sasl.oauthbearer.sub.claim.name

      The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

      Type:string
      Default:sub
      Valid Values:
      Importance:low
      Update Mode:read-only
    • security.providers

      A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

      Type:string
      Default:null
      Valid Values:
      Importance:low
      Update Mode:read-only
    • ssl.allow.dn.changes

      Indicates whether changes to the certificate distinguished name should be allowed during a dynamic reconfiguration of certificates or not.

      Type:boolean
      Default:false
      Valid Values:
      Importance:low
      Update Mode:read-only
    • ssl.allow.san.changes

      Indicates whether changes to the certificate subject alternative names should be allowed during a dynamic reconfiguration of certificates or not.

      Type:boolean
      Default:false
      Valid Values:
      Importance:low
      Update Mode:read-only
    • ssl.endpoint.identification.algorithm

      The endpoint identification algorithm to validate server hostname using server certificate.

      Type:string
      Default:https
      Valid Values:
      Importance:low
      Update Mode:per-broker
    • ssl.engine.factory.class

      The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one.

      Type:class
      Default:null
      Valid Values:
      Importance:low
      Update Mode:per-broker
    • ssl.principal.mapping.rules

      A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration.

      Type:string
      Default:DEFAULT
      Valid Values:
      Importance:low
      Update Mode:read-only
    • ssl.secure.random.implementation

      The SecureRandom PRNG implementation to use for SSL cryptography operations.

      Type:string
      Default:null
      Valid Values:
      Importance:low
      Update Mode:per-broker
    • telemetry.max.bytes

      The maximum size (after compression if compression is used) of telemetry metrics pushed from a client to the broker. The default value is 1048576 (1 MB).

      Type:int
      Default:1048576 (1 mebibyte)
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • transaction.abort.timed.out.transaction.cleanup.interval.ms

      The interval at which to roll back transactions that have timed out

      Type:int
      Default:10000 (10 seconds)
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only
    • transaction.partition.verification.enable

      Enable verification that checks that the partition has been added to the transaction before writing transactional records to the partition

      Type:boolean
      Default:true
      Valid Values:
      Importance:low
      Update Mode:cluster-wide
    • transaction.remove.expired.transaction.cleanup.interval.ms

      The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing

      Type:int
      Default:3600000 (1 hour)
      Valid Values:[1,...]
      Importance:low
      Update Mode:read-only

More details about broker configuration can be found in the scala class kafka.server.KafkaConfig.
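
As a minimal sketch of how the tiered storage settings above fit together (the manager class, plugin path, and the rsm.config.-prefixed property below are hypothetical placeholders, not a shipped implementation), a broker's server.properties might contain:

# Enable tiered storage and point the broker at a RemoteStorageManager plugin (placeholders)
remote.log.storage.system.enable=true
remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager
remote.log.storage.manager.class.path=/opt/kafka/plugins/my-rsm/*
# Properties with this prefix are passed through to the RemoteStorageManager implementation
remote.log.storage.manager.impl.prefix=rsm.config.
rsm.config.bucket.name=my-bucket

Individual topics must additionally opt in via the topic-level remote.storage.enable config described under Topic-Level Configs below.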

Updating Broker Configs

From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker. See the Dynamic Update Mode column in Broker Configs for the update mode of each broker config.

  • read-only: Requires a broker restart for update
  • per-broker: May be updated dynamically for each broker
  • cluster-wide: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.

To alter the current broker configs for broker id 0 (for example, the number of log cleaner threads):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2

To describe the current dynamic broker configs for broker id 0:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe

To delete a config override and revert to the statically configured or default value for broker id 0 (for example, the number of log cleaner threads):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads

Some configs may be configured as a cluster-wide default to maintain consistent values across the whole cluster. All brokers in the cluster will process the cluster default update. For example, to update log cleaner threads on all brokers:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2

To describe the currently configured dynamic cluster-wide default configs:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe

All configs that are configurable at cluster level may also be configured at per-broker level (e.g. for testing). If a config value is defined at different levels, the following order of precedence is used:

  • Dynamic per-broker config stored in the metadata log
  • Dynamic cluster-wide default config stored in the metadata log
  • Static broker config from server.properties
  • Kafka default, see broker configs

Updating SSL Keystore of an Existing Listener

Brokers may be configured with SSL keystores with short validity periods to reduce the risk of compromised certificates. Keystores may be updated dynamically without restarting the broker. The config name must be prefixed with the listener prefix listener.name.{listenerName}. so that only the keystore config of a specific listener is updated. The following configs may be updated in a single alter request at per-broker level:

  • ssl.keystore.type
  • ssl.keystore.location
  • ssl.keystore.password
  • ssl.key.password

If the listener is the inter-broker listener, the update is allowed only if the new keystore is trusted by the truststore configured for that listener. For other listeners, no trust validation is performed on the keystore by the broker. Certificates must be signed by the same certificate authority that signed the old certificate to avoid any client authentication failures.
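
For example, assuming a listener named INTERNAL on broker 0 (the listener name, paths, and passwords below are placeholders), the keystore configs could be rotated in a single alter request:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter \
  --add-config 'listener.name.internal.ssl.keystore.location=/path/to/new.keystore.jks,listener.name.internal.ssl.keystore.password=keystore-password,listener.name.internal.ssl.key.password=key-password'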

Updating SSL Truststore of an Existing Listener

Broker truststores may be updated dynamically without restarting the broker to add or remove certificates. Updated truststore will be used to authenticate new client connections. The config name must be prefixed with the listener prefix listener.name.{listenerName}. so that only the truststore config of a specific listener is updated. The following configs may be updated in a single alter request at per-broker level:

  • ssl.truststore.type
  • ssl.truststore.location
  • ssl.truststore.password

If the listener is the inter-broker listener, the update is allowed only if the existing keystore for that listener is trusted by the new truststore. For other listeners, no trust validation is performed by the broker before the update. Removal of CA certificates used to sign client certificates from the new truststore can lead to client authentication failures.
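
A corresponding sketch for rotating the truststore of the same hypothetical INTERNAL listener on broker 0 (path and password are placeholders):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter \
  --add-config 'listener.name.internal.ssl.truststore.location=/path/to/new.truststore.jks,listener.name.internal.ssl.truststore.password=truststore-password'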

Updating Default Topic Configuration

Default topic configuration options used by brokers may be updated without broker restart. The configs are applied to topics without a topic config override for the equivalent per-topic config. One or more of these configs may be overridden at cluster-default level used by all brokers.

  • log.segment.bytes
  • log.roll.ms
  • log.roll.hours
  • log.roll.jitter.ms
  • log.roll.jitter.hours
  • log.index.size.max.bytes
  • log.flush.interval.messages
  • log.flush.interval.ms
  • log.retention.bytes
  • log.retention.ms
  • log.retention.minutes
  • log.retention.hours
  • log.index.interval.bytes
  • log.cleaner.delete.retention.ms
  • log.cleaner.min.compaction.lag.ms
  • log.cleaner.max.compaction.lag.ms
  • log.cleaner.min.cleanable.ratio
  • log.cleanup.policy
  • log.segment.delete.delay.ms
  • unclean.leader.election.enable
  • min.insync.replicas
  • max.message.bytes
  • compression.type
  • log.preallocate
  • log.message.timestamp.type
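
For example, to lower the default retention for all topics that do not override it (the one-day value is illustrative), the config can be set as a cluster-wide default:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter \
  --add-config log.retention.ms=86400000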

Updating Log Cleaner Configs

Log cleaner configs may be updated dynamically at cluster-default level used by all brokers. The changes take effect on the next iteration of log cleaning. One or more of these configs may be updated:

  • log.cleaner.threads
  • log.cleaner.io.max.bytes.per.second
  • log.cleaner.dedupe.buffer.size
  • log.cleaner.io.buffer.size
  • log.cleaner.io.buffer.load.factor
  • log.cleaner.backoff.ms
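
For example (the 10 MB/s value is illustrative), the cleaner I/O throttle can be adjusted for all brokers and will take effect on the next cleaning iteration:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter \
  --add-config log.cleaner.io.max.bytes.per.second=10485760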

Updating Thread Configs

The size of various thread pools used by the broker may be updated dynamically at cluster-default level used by all brokers. Updates are restricted to the range currentSize / 2 to currentSize * 2 to ensure that config updates are handled gracefully.

  • num.network.threads
  • num.io.threads
  • num.replica.fetchers
  • num.recovery.threads.per.data.dir
  • log.cleaner.threads
  • background.threads
  • remote.log.reader.threads
  • remote.log.manager.copier.thread.pool.size
  • remote.log.manager.expiration.thread.pool.size
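
For example, assuming num.io.threads is currently at its default of 8, a dynamic update must stay within the range 4 to 16; the following illustrative command doubles it cluster-wide:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter \
  --add-config num.io.threads=16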

Updating ConnectionQuota Configs

The maximum number of connections allowed for a given IP/host by the broker may be updated dynamically at cluster-default level used by all brokers. The changes will apply to new connections, and existing connection counts will be taken into account by the new limits.

  • max.connections.per.ip
  • max.connections.per.ip.overrides
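
For example (the limit of 512 is illustrative), the per-IP connection limit can be lowered for all brokers:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter \
  --add-config max.connections.per.ip=512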

Adding and Removing Listeners

Listeners may be added or removed dynamically. When a new listener is added, security configs of the listener must be provided as listener configs with the listener prefix listener.name.{listenerName}.. If the new listener uses SASL, the JAAS configuration of the listener must be provided using the JAAS configuration property sasl.jaas.config with the listener and mechanism prefix. See JAAS configuration for Kafka brokers for details.

In Kafka version 1.1.x, the listener used for inter-broker communication may not be updated dynamically. To update the inter-broker listener to a new listener, the new listener may be added on all brokers without restarting them. A rolling restart is then required to update inter.broker.listener.name.

In addition to all the security configs of new listeners, the following configs may be updated dynamically at per-broker level:

  • listeners
  • advertised.listeners
  • listener.security.protocol.map

Inter-broker listener must be configured using the static broker configuration inter.broker.listener.name or security.inter.broker.protocol.
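
As a rough sketch (assuming the broker currently has only a PLAINTEXT listener on port 9092; the new port, mechanism, and JAAS settings are illustrative and not a complete production setup), a new SASL_PLAINTEXT listener using SCRAM could be added to broker 0 in a single alter request, with square brackets grouping values that contain commas:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter \
  --add-config 'listeners=[PLAINTEXT://:9092,SASL_PLAINTEXT://:9094],listener.security.protocol.map=[PLAINTEXT:PLAINTEXT,SASL_PLAINTEXT:SASL_PLAINTEXT],listener.name.sasl_plaintext.sasl.enabled.mechanisms=SCRAM-SHA-256,listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;'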

Topic-Level Configs

Configurations pertinent to topics have both a server default as well as an optional per-topic override. If no per-topic configuration is given the server default is used. The override can be set at topic creation time by giving one or more --config options. This example creates a topic named my-topic with a custom max message size and flush rate:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 \
  --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1

Overrides can also be changed or set later using the alter configs command. This example updates the max message size for my-topic:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic \
  --alter --add-config max.message.bytes=128000

To check overrides set on the topic you can do

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic --describe

To remove an override you can do

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my-topic \
  --alter --delete-config max.message.bytes

The following are the topic-level configurations. The server’s default configuration for this property is given under the Server Default Property heading. A given server default config value only applies to a topic if it does not have an explicit topic config override.

  • cleanup.policy

    This config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old segments when their retention time or size limit has been reached. The "compact" policy will enable log compaction, which retains the latest value for each key. It is also possible to specify both policies in a comma-separated list (e.g. "delete,compact"). In this case, old segments will be discarded per the retention time and size configuration, while retained segments will be compacted.

    Type:list
    Default:delete
    Valid Values:[compact, delete]
    Server Default Property:log.cleanup.policy
    Importance:medium
  • compression.gzip.level

    The compression level to use if compression.type is set to gzip.

    Type:int
    Default:-1
    Valid Values:[1,...,9] or -1
    Server Default Property:compression.gzip.level
    Importance:medium
  • compression.lz4.level

    The compression level to use if compression.type is set to lz4.

    Type:int
    Default:9
    Valid Values:[1,...,17]
    Server Default Property:compression.lz4.level
    Importance:medium
  • compression.type

    Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.

    Type:string
    Default:producer
    Valid Values:[uncompressed, zstd, lz4, snappy, gzip, producer]
    Server Default Property:compression.type
    Importance:medium
  • compression.zstd.level

    The compression level to use if compression.type is set to zstd.

    Type:int
    Default:3
    Valid Values:[-131072,...,22]
    Server Default Property:compression.zstd.level
    Importance:medium
  • delete.retention.ms

    The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final state (otherwise delete tombstones may be collected before they complete their scan).

    Type:long
    Default:86400000 (1 day)
    Valid Values:[0,...]
    Server Default Property:log.cleaner.delete.retention.ms
    Importance:medium
  • file.delete.delay.ms

    The time to wait before deleting a file from the filesystem

    Type:long
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Server Default Property:log.segment.delete.delay.ms
    Importance:medium
  • flush.messages

    This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section).

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Server Default Property:log.flush.interval.messages
    Importance:medium
  • flush.ms

    This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient.

    Type:long
    Default:9223372036854775807
    Valid Values:[0,...]
    Server Default Property:log.flush.interval.ms
    Importance:medium
  • follower.replication.throttled.replicas

    A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId],... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.

    Type:list
    Default:""
    Valid Values:[partitionId]:[brokerId],[partitionId]:[brokerId],...
    Server Default Property:null
    Importance:medium
  • index.interval.bytes

    This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this.

    Type:int
    Default:4096 (4 kibibytes)
    Valid Values:[0,...]
    Server Default Property:log.index.interval.bytes
    Importance:medium
  • leader.replication.throttled.replicas

    A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId],... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.

    Type:list
    Default:""
    Valid Values:[partitionId]:[brokerId],[partitionId]:[brokerId],...
    Server Default Property:null
    Importance:medium
  • local.retention.bytes

    The maximum size that the local log segments of a partition can grow to before old segments are deleted. The default value is -2, which means the `retention.bytes` value is used. The effective value should always be less than or equal to the `retention.bytes` value.

    Type:long
    Default:-2
    Valid Values:[-2,...]
    Server Default Property:log.local.retention.bytes
    Importance:medium
  • local.retention.ms

    The number of milliseconds to keep a local log segment before it gets deleted. The default value is -2, which means the `retention.ms` value is used. The effective value should always be less than or equal to the `retention.ms` value.

    Type:long
    Default:-2
    Valid Values:[-2,...]
    Server Default Property:log.local.retention.ms
    Importance:medium
  • max.compaction.lag.ms

    The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Server Default Property:log.cleaner.max.compaction.lag.ms
    Importance:medium
  • max.message.bytes

    The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.

    Type:int
    Default:1048588
    Valid Values:[0,...]
    Server Default Property:message.max.bytes
    Importance:medium
  • message.timestamp.after.max.ms

    This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if message.timestamp.type=LogAppendTime.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:[0,...]
    Server Default Property:log.message.timestamp.after.max.ms
    Importance:medium
  • message.timestamp.before.max.ms

    This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if message.timestamp.type=LogAppendTime.

    Type:long
    Default:9223372036854775807
    Valid Values:[0,...]
    Server Default Property:log.message.timestamp.before.max.ms
    Importance:medium
  • message.timestamp.type

    Define whether the timestamp in the message is message create time or log append time.

    Type:string
    Default:CreateTime
    Valid Values:[CreateTime, LogAppendTime]
    Server Default Property:log.message.timestamp.type
    Importance:medium
  • min.cleanable.dirty.ratio

    This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period.

    Type:double
    Default:0.5
    Valid Values:[0,...,1]
    Server Default Property:log.cleaner.min.cleanable.ratio
    Importance:medium
  • min.compaction.lag.ms

    The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.

    Type:long
    Default:0
    Valid Values:[0,...]
    Server Default Property:log.cleaner.min.compaction.lag.ms
    Importance:medium
  • min.insync.replicas

    When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
    Regardless of the acks setting, the messages will not be visible to the consumers until they are replicated to all in-sync replicas and the min.insync.replicas condition is met.
    When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that a majority of replicas must persist a write before it's considered successful by the producer and it's visible to consumers.

    Type:int
    Default:1
    Valid Values:[1,...]
    Server Default Property:min.insync.replicas
    Importance:medium
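
    As a sketch of the durability scenario described above (the bootstrap address, topic name, and partition count are hypothetical), the Java Admin client can create a topic with a replication factor of 3 and min.insync.replicas set to 2; the producer side then only needs acks=all, which is the default:

        import org.apache.kafka.clients.admin.Admin;
        import org.apache.kafka.clients.admin.NewTopic;

        import java.util.Collections;
        import java.util.Map;
        import java.util.Properties;

        public class CreateDurableTopic {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");              // hypothetical address

                try (Admin admin = Admin.create(props)) {
                    // 6 partitions, replication factor 3, and at least 2 in-sync replicas per acknowledged write.
                    NewTopic topic = new NewTopic("payments", 6, (short) 3)  // hypothetical topic
                            .configs(Map.of("min.insync.replicas", "2"));
                    admin.createTopics(Collections.singleton(topic)).all().get();
                }
            }
        }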
  • preallocate

    True if we should preallocate the file on disk when creating a new log segment.

    Type:boolean
    Default:false
    Valid Values:
    Server Default Property:log.preallocate
    Importance:medium
  • remote.log.copy.disable

    Determines whether tiered data for a topic should become read-only, with no further data uploaded for the topic. Once this config is set to true, the local retention configuration (i.e. local.retention.ms/bytes) becomes irrelevant, and all data expiration follows the topic-wide retention configuration (i.e. retention.ms/bytes).

    Type:boolean
    Default:false
    Valid Values:
    Server Default Property:null
    Importance:medium
  • remote.log.delete.on.disable

    Determines whether tiered data for a topic should be deleted after tiered storage is disabled on that topic. This configuration should be enabled when changing `remote.storage.enable` from true to false.

    Type:boolean
    Default:false
    Valid Values:
    Server Default Property:null
    Importance:medium
  • remote.storage.enable

    To enable tiered storage for a topic, set this configuration to true. You cannot disable this config once it is enabled; support for doing so will be provided in future versions.

    Type:boolean
    Default:false
    Valid Values:
    Server Default Property:null
    Importance:medium
  • retention.bytes

    This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes. Additionally, the retention.bytes configuration operates independently of the "segment.ms" and "segment.bytes" configurations. Moreover, it triggers the rolling of a new segment if retention.bytes is configured to zero.

    Type:long
    Default:-1
    Valid Values:
    Server Default Property:log.retention.bytes
    Importance:medium
  • retention.ms

    This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied. Additionally, the retention.ms configuration operates independently of the "segment.ms" and "segment.bytes" configurations. Moreover, it triggers the rolling of a new segment if the retention.ms condition is satisfied.

    Type:long
    Default:604800000 (7 days)
    Valid Values:[-1,...]
    Server Default Property:log.retention.ms
    Importance:medium
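
    As an illustrative sketch (the bootstrap address, topic name, and values are hypothetical), both retention settings can be changed at runtime with the Java Admin client's incrementalAlterConfigs:

        import org.apache.kafka.clients.admin.Admin;
        import org.apache.kafka.clients.admin.AlterConfigOp;
        import org.apache.kafka.clients.admin.ConfigEntry;
        import org.apache.kafka.common.config.ConfigResource;

        import java.util.Collection;
        import java.util.List;
        import java.util.Map;
        import java.util.Properties;

        public class TightenRetention {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");                              // hypothetical address

                try (Admin admin = Admin.create(props)) {
                    ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "events");  // hypothetical topic
                    Collection<AlterConfigOp> ops = List.of(
                            new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"),           // 1 day
                                    AlterConfigOp.OpType.SET),
                            new AlterConfigOp(new ConfigEntry("retention.bytes", "1073741824"),      // 1 GiB per partition
                                    AlterConfigOp.OpType.SET));
                    admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
                }
            }
        }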
  • segment.bytes

    This configuration controls the segment file size for the log. Retention and cleaning are always done a file at a time, so a larger segment size means fewer files but less granular control over retention.

    Type:int
    Default:1073741824 (1 gibibyte)
    Valid Values:[14,...]
    Server Default Property:log.segment.bytes
    Importance:medium
  • segment.index.bytes

    This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting.

    Type:int
    Default:10485760 (10 mebibytes)
    Valid Values:[4,...]
    Server Default Property:log.index.size.max.bytes
    Importance:medium
  • segment.jitter.ms

    The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling

    Type:long
    Default:0
    Valid Values:[0,...]
    Server Default Property:log.roll.jitter.ms
    Importance:medium
  • segment.ms

    This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.

    Type:long
    Default:604800000 (7 days)
    Valid Values:[1,...]
    Server Default Property:log.roll.ms
    Importance:medium
  • unclean.leader.election.enable

    Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.

    Note: In KRaft mode, when this config is enabled dynamically, the unclean leader election thread triggers elections only periodically (every 5 minutes by default). Run `kafka-leader-election.sh` with the `unclean` option to trigger an unclean leader election immediately if needed.

    Type:boolean
    Default:false
    Valid Values:
    Server Default Property:unclean.leader.election.enable
    Importance:medium
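
    Besides the kafka-leader-election.sh tool mentioned above, an unclean election can also be triggered from the Java Admin client. A minimal sketch, with a hypothetical bootstrap address and topic-partition:

        import org.apache.kafka.clients.admin.Admin;
        import org.apache.kafka.common.ElectionType;
        import org.apache.kafka.common.TopicPartition;

        import java.util.Properties;
        import java.util.Set;

        public class TriggerUncleanElection {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");     // hypothetical address

                try (Admin admin = Admin.create(props)) {
                    // Force an unclean leader election for partition 0 of a hypothetical topic.
                    admin.electLeaders(ElectionType.UNCLEAN, Set.of(new TopicPartition("events", 0)))
                         .partitions()
                         .get();
                }
            }
        }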

Producer Configs

Below is the configuration of the producer:

  • key.serializer

    Serializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • value.serializer

    Serializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs used to establish the initial connection to the Kafka cluster. Clients use this list to bootstrap and discover the full set of Kafka brokers. While the order of servers in the list does not matter, we recommend including more than one server to ensure resilience if any servers are down. This list does not need to contain the entire set of brokers, as Kafka clients automatically manage and update connections to the cluster efficiently. This list must be in the form host1:port1,host2:port2,....

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:high
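
    The three properties above are all a producer strictly requires. A minimal sketch (the broker addresses, topic name, and record contents are hypothetical):

        import org.apache.kafka.clients.producer.KafkaProducer;
        import org.apache.kafka.clients.producer.ProducerRecord;
        import org.apache.kafka.common.serialization.StringSerializer;

        import java.util.Properties;

        public class MinimalProducer {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092,broker2:9092");    // hypothetical addresses
                props.put("key.serializer", StringSerializer.class.getName());
                props.put("value.serializer", StringSerializer.class.getName());

                // Closing the producer flushes any buffered records.
                try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                    producer.send(new ProducerRecord<>("events", "key-1", "value-1"));  // hypothetical topic
                }
            }
        }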
  • buffer.memory

    The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for max.block.ms after which it will throw an exception.

    This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.

    Type:long
    Default:33554432
    Valid Values:[0,...]
    Importance:high
  • compression.type

    The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid values are none, gzip, snappy, lz4, or zstd. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).

    Type:string
    Default:none
    Valid Values:[none, gzip, snappy, lz4, zstd]
    Importance:high
  • retries

    Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different than if the client resent the record upon receiving the error. Produce requests will be failed before the number of retries has been exhausted if the timeout configured by delivery.timeout.ms expires before successful acknowledgement. Users should generally prefer to leave this config unset and instead use delivery.timeout.ms to control retry behavior.

    Enabling idempotence requires this config value to be greater than 0. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.

    Allowing retries while setting enable.idempotence to false and max.in.flight.requests.per.connection to greater than 1 will potentially change the ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second succeeds, then the records in the second batch may appear first.

    Type:int
    Default:2147483647
    Valid Values:[0,...,2147483647]
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for client and can be used for two-way authentication for client.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
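
    The SSL settings above are typically supplied together with security.protocol (documented further below). A sketch of a client configured for TLS, optionally with a key store for mutual TLS; all paths and passwords are hypothetical:

        import java.util.Properties;

        public class TlsClientProps {
            static Properties tlsClientProps() {
                Properties props = new Properties();
                props.put("security.protocol", "SSL");
                props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");  // hypothetical path
                props.put("ssl.truststore.password", "truststore-secret");                 // hypothetical password
                // The key store entries are only needed for two-way (mutual) TLS:
                props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");      // hypothetical path
                props.put("ssl.keystore.password", "keystore-secret");
                props.put("ssl.key.password", "key-secret");
                return props;
            }
        }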
  • batch.size

    The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.

    No attempt will be made to batch records larger than this size.

    Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.

    A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.

    Note: This setting gives the upper bound of the batch size to be sent. If we have fewer than this many bytes accumulated for this partition, we will 'linger' for the linger.ms time waiting for more records to show up. This linger.ms setting defaults to 5, which means the producer will wait for 5ms or until the record batch is of batch.size (whichever happens first) before sending the record batch. Note that broker backpressure can result in a higher effective linger time than this setting. The default changed from 0 to 5 in Apache Kafka 4.0 as the efficiency gains from larger batches typically result in similar or lower producer latency despite the increased linger.

    Type:int
    Default:16384
    Valid Values:[0,...]
    Importance:medium
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • compression.gzip.level

    The compression level to use if compression.type is set to gzip.

    Type:int
    Default:-1
    Valid Values:[1,...,9] or -1
    Importance:medium
  • compression.lz4.level

    The compression level to use if compression.type is set to lz4.

    Type:int
    Default:9
    Valid Values:[1,...,17]
    Importance:medium
  • compression.zstd.level

    The compression level to use if compression.type is set to zstd.

    Type:int
    Default:3
    Valid Values:[-131072,...,22]
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:medium
  • delivery.timeout.ms

    An upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of request.timeout.ms and linger.ms.

    Type:int
    Default:120000 (2 minutes)
    Valid Values:[0,...]
    Importance:medium
  • linger.ms

    The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay; that is, rather than immediately sending out a record, the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 5 (i.e. 5ms delay). Setting linger.ms=50, for example, would have the effect of reducing the number of requests sent but would add up to 50ms of latency to records sent in the absence of load. The default changed from 0 to 5 in Apache Kafka 4.0 as the efficiency gains from larger batches typically result in similar or lower producer latency despite the increased linger.

    Type:long
    Default:5
    Valid Values:[0,...]
    Importance:medium
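
    batch.size, linger.ms and compression.type are usually tuned together for throughput. A sketch with hypothetical example values, not recommendations:

        import java.util.Properties;

        public class ThroughputTuning {
            static Properties batchingProps() {
                Properties props = new Properties();
                props.put("batch.size", Integer.toString(64 * 1024)); // allow batches of up to 64 KiB (example value)
                props.put("linger.ms", "20");                          // wait up to 20 ms for more records per batch
                props.put("compression.type", "lz4");                  // compress whole batches before sending
                return props;
            }
        }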
  • max.block.ms

    The configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), commitTransaction() and abortTransaction() methods will block. For send() this timeout bounds the total time waiting for both metadata fetch and buffer allocation (blocking in the user-supplied serializers or partitioner is not counted against this timeout). For partitionsFor() this timeout bounds the time spent waiting for metadata if it is unavailable. The transaction-related methods always block, but may timeout if the transaction coordinator could not be discovered or did not respond within the timeout.

    Type:long
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:medium
  • max.request.size

    The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.

    Type:int
    Default:1048576
    Valid Values:[0,...]
    Importance:medium
  • partitioner.class

    Determines which partition to send a record to when records are produced. Available options are:

    • If not set, the default partitioning logic is used. This strategy sends records to a partition until at least batch.size bytes are produced to the partition. It works as follows:
      1. If no partition is specified but a key is present, choose a partition based on a hash of the key.
      2. If no partition or key is present, choose the sticky partition that changes when at least batch.size bytes are produced to the partition.
    • org.apache.kafka.clients.producer.RoundRobinPartitioner: A partitioning strategy where each record in a series of consecutive records is sent to a different partition, regardless of whether the 'key' is provided or not, until partitions run out and the process starts over again. Note: There's a known issue that will cause uneven distribution when a new batch is created. See KAFKA-9965 for more detail.

    Implementing the org.apache.kafka.clients.producer.Partitioner interface allows you to plug in a custom partitioner.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
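
    A minimal sketch of a custom partitioner (the class name and mapping are hypothetical): it hashes the key bytes when a key is present and picks a random partition otherwise.

        import org.apache.kafka.clients.producer.Partitioner;
        import org.apache.kafka.common.Cluster;

        import java.util.Arrays;
        import java.util.Map;
        import java.util.concurrent.ThreadLocalRandom;

        public class KeyHashPartitioner implements Partitioner {   // hypothetical example class

            @Override
            public void configure(Map<String, ?> configs) { }

            @Override
            public int partition(String topic, Object key, byte[] keyBytes,
                                 Object value, byte[] valueBytes, Cluster cluster) {
                int numPartitions = cluster.partitionsForTopic(topic).size();
                if (keyBytes == null) {
                    return ThreadLocalRandom.current().nextInt(numPartitions);   // no key: spread randomly
                }
                return (Arrays.hashCode(keyBytes) & 0x7fffffff) % numPartitions; // stable key-to-partition mapping
            }

            @Override
            public void close() { }
        }

    Such a class would be registered by setting partitioner.class to its fully qualified name.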
  • partitioner.ignore.keys

    When set to 'true' the producer won't use record keys to choose a partition. If 'false', producer would choose a partition based on a hash of the key when a key is present. Note: this setting has no effect if a custom partitioner is used.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:32768 (32 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
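
    For example, a client authenticating with SCRAM over TLS would combine sasl.mechanism, sasl.jaas.config and security.protocol roughly as in the following sketch (the credentials are hypothetical):

        import java.util.Properties;

        public class ScramClientProps {
            static Properties scramClientProps() {
                Properties props = new Properties();
                props.put("security.protocol", "SASL_SSL");
                props.put("sasl.mechanism", "SCRAM-SHA-256");
                props.put("sasl.jaas.config",
                        "org.apache.kafka.common.security.scram.ScramLoginModule required "
                                + "username=\"alice\" password=\"alice-secret\";");   // hypothetical credentials
                return props;
            }
        }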
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers.

    Type:string
    Default:PLAINTEXT
    Valid Values:(case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3'. This means that clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most use cases. Also see the config documentation for `ssl.protocol` to understand how it can impact the TLS version negotiation behavior.

    Type:list
    Default:TLSv1.2,TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3', which should be fine for most use cases. A typical alternative to the default is 'TLSv1.2'. Allowed values for this config are dependent on the JVM. Clients using the defaults for this config and 'ssl.enabled.protocols' will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', however, clients will not use 'TLSv1.3' even if it is one of the values in `ssl.enabled.protocols` and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • acks

    The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed:

    • acks=0 If set to zero then the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset given back for each record will always be set to -1.
    • acks=1 This will mean the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case should the leader fail immediately after acknowledging the record but before the followers have replicated it then the record will be lost.
    • acks=all This means the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee. This is equivalent to the acks=-1 setting.

    Note that enabling idempotence requires this config value to be 'all'. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled.

    Type:string
    Default:all
    Valid Values:[all, -1, 0, 1]
    Importance:low
  • enable.idempotence

    When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires max.in.flight.requests.per.connection to be less than or equal to 5 (with message ordering preserved for any allowable value), retries to be greater than 0, and acks must be 'all'.

    Idempotence is enabled by default if no conflicting configurations are set. If conflicting configurations are set and idempotence is not explicitly enabled, idempotence is disabled. If idempotence is explicitly enabled and conflicting configurations are set, a ConfigException is thrown.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • enable.metrics.push

    Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • interceptor.classes

    A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:low
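
    A minimal interceptor sketch (the class name and counting logic are hypothetical) that passes records through unchanged and tallies acknowledgements:

        import org.apache.kafka.clients.producer.ProducerInterceptor;
        import org.apache.kafka.clients.producer.ProducerRecord;
        import org.apache.kafka.clients.producer.RecordMetadata;

        import java.util.Map;
        import java.util.concurrent.atomic.AtomicLong;

        public class CountingInterceptor implements ProducerInterceptor<String, String> {  // hypothetical example class
            private final AtomicLong sent = new AtomicLong();
            private final AtomicLong failed = new AtomicLong();

            @Override
            public void configure(Map<String, ?> configs) { }

            @Override
            public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
                sent.incrementAndGet();
                return record;                 // the record could also be mutated here before it is sent
            }

            @Override
            public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
                if (exception != null) {
                    failed.incrementAndGet();
                }
            }

            @Override
            public void close() {
                System.out.printf("sent=%d failed=%d%n", sent.get(), failed.get());
            }
        }

    It would be enabled by listing its fully qualified name in interceptor.classes.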
  • max.in.flight.requests.per.connection

    The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this configuration is set to be greater than 1 and enable.idempotence is set to false, there is a risk of message reordering after a failed send due to retries (i.e., if retries are enabled); if retries are disabled or if enable.idempotence is set to true, ordering will be preserved. Additionally, enabling idempotence requires the value of this configuration to be less than or equal to 5, because the broker only retains at most 5 batches for each producer. If the value is more than 5, previous batches may be removed on the broker side.

    Type:int
    Default:5
    Valid Values:[1,...]
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.max.idle.ms

    Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[5000,...]
    Importance:low
  • metadata.recovery.rebootstrap.trigger.ms

    If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the brokers in the last known metadata for this interval, the client repeats the bootstrap process using the bootstrap.servers configuration.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.strategy

    Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to rebootstrap, the client repeats the bootstrap process using bootstrap.servers. Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in-progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. Rebootstrap is also triggered if connection cannot be established to any of the brokers for metadata.recovery.rebootstrap.trigger.ms milliseconds or if server requests rebootstrap.

    Type:string
    Default:rebootstrap
    Valid Values:(case insensitive) [REBOOTSTRAP, NONE]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.

    Type:list
    Default:org.apache.kafka.common.metrics.JmxReporter
    Valid Values:non-null string
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics. It has three levels for recording metrics - info, debug, and trace.

    INFO level records only essential metrics necessary for monitoring system performance and health. It collects vital data without gathering too much detail, making it suitable for production environments where minimal overhead is desired.

    DEBUG level records most metrics, providing more detailed information about the system's operation. It's useful for development and testing environments where you need deeper insights to debug and fine-tune the application.

    TRACE level records all possible metrics, capturing every detail about the system's performance and operation. It's best for controlled environments where in-depth analysis is required, though it can introduce significant overhead.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • partitioner.adaptive.partitioning.enable

    When set to 'true', the producer will try to adapt to broker performance and produce more messages to partitions hosted on faster brokers. If 'false', producer will try to distribute messages uniformly. Note: this setting has no effect if a custom partitioner is used

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • partitioner.availability.timeout.ms

    If a broker cannot process produce requests from a partition for partitioner.availability.timeout.ms time, the partitioner treats that partition as not available. If the value is 0, this logic is disabled. Note: this setting has no effect if a custom partitioner is used or partitioner.adaptive.partitioning.enable is set to 'false'

    Type:long
    Default:0
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.max.ms

    The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms, then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.header.urlencode

    The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in accordance with RFC6749, see here for more details. The default value is set to 'false' for backward compatibility

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low
  • transaction.timeout.ms

    The maximum amount of time in milliseconds that a transaction will remain open before the coordinator proactively aborts it. The start of the transaction is set at the time that the first partition is added to it. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with a InvalidTxnTimeoutException error.

    Type:int
    Default:60000 (1 minute)
    Valid Values:
    Importance:low
  • transactional.id

    The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, enable.idempotence is implied. By default the TransactionalId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers, which is the recommended setting for production; for development you can change this by adjusting the broker setting transaction.state.log.replication.factor.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:low
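
    A sketch of a transactional send (the bootstrap address, transactional id, topics, and record contents are hypothetical):

        import org.apache.kafka.clients.producer.KafkaProducer;
        import org.apache.kafka.clients.producer.ProducerRecord;
        import org.apache.kafka.common.KafkaException;
        import org.apache.kafka.common.errors.ProducerFencedException;
        import org.apache.kafka.common.serialization.StringSerializer;

        import java.util.Properties;

        public class TransactionalSend {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");                  // hypothetical address
                props.put("key.serializer", StringSerializer.class.getName());
                props.put("value.serializer", StringSerializer.class.getName());
                props.put("transactional.id", "payments-processor-1");           // hypothetical id, unique per producer instance

                KafkaProducer<String, String> producer = new KafkaProducer<>(props);
                producer.initTransactions();                                      // fences older producers with the same id
                try {
                    producer.beginTransaction();
                    producer.send(new ProducerRecord<>("ledger", "acct-42", "debit:10"));  // hypothetical topics and values
                    producer.send(new ProducerRecord<>("audit", "acct-42", "debit:10"));
                    producer.commitTransaction();                                 // both records become visible atomically
                } catch (ProducerFencedException e) {
                    producer.close();                                             // another instance took over this transactional.id
                } catch (KafkaException e) {
                    producer.abortTransaction();                                  // roll back; the records are never exposed
                }
            }
        }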

Consumer Configs

Below is the configuration for the consumer:

  • key.deserializer

    Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • value.deserializer

    Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs used to establish the initial connection to the Kafka cluster. Clients use this list to bootstrap and discover the full set of Kafka brokers. While the order of servers in the list does not matter, we recommend including more than one server to ensure resilience if any servers are down. This list does not need to contain the entire set of brokers, as Kafka clients automatically manage and update connections to the cluster efficiently. This list must be in the form host1:port1,host2:port2,....

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:high
  • fetch.min.bytes

    The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as that many byte(s) of data is available or the fetch request times out waiting for data to arrive. Setting this to a larger value will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.

    Type:int
    Default:1
    Valid Values:[0,...]
    Importance:high
  • group.id

    A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.

    Type:string
    Default:null
    Valid Values:
    Importance:high
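
    The deserializers, bootstrap.servers and group.id above are enough for a simple subscribing consumer. A minimal sketch (the broker address, group id, and topic are hypothetical):

        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.clients.consumer.ConsumerRecords;
        import org.apache.kafka.clients.consumer.KafkaConsumer;
        import org.apache.kafka.common.serialization.StringDeserializer;

        import java.time.Duration;
        import java.util.List;
        import java.util.Properties;

        public class MinimalConsumer {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put("bootstrap.servers", "broker1:9092");                       // hypothetical address
                props.put("group.id", "billing-service");                             // hypothetical group
                props.put("key.deserializer", StringDeserializer.class.getName());
                props.put("value.deserializer", StringDeserializer.class.getName());

                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(List.of("events"));                            // hypothetical topic
                    while (true) {                                                    // poll until the process is stopped
                        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                        for (ConsumerRecord<String, String> record : records) {
                            System.out.printf("%s -> %s%n", record.key(), record.value());
                        }
                    }
                }
            }
        }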
  • group.protocol

    The group protocol the consumer should use. We currently support "classic" or "consumer". If "consumer" is specified, then the consumer group protocol will be used. Otherwise, the classic group protocol will be used.

    Type:string
    Default:classic
    Valid Values:(case insensitive) [CONSUMER, CLASSIC]
    Importance:high
  • heartbeat.interval.ms

    The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:high
  • max.partition.fetch.bytes

    The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.

    Type:int
    Default:1048576 (1 mebibyte)
    Valid Values:[0,...]
    Importance:high
  • session.timeout.ms

    The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms. Note that this configuration is not supported when group.protocol is set to "consumer".

    Type:int
    Default:45000 (45 seconds)
    Valid Values:
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for client and can be used for two-way authentication for client.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • allow.auto.create.topics

    Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows for it using `auto.create.topics.enable` broker configuration. This configuration must be set to `true` when using brokers older than 0.11.0

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
  • auto.offset.reset

    What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):

    • earliest: automatically reset the offset to the earliest offset
    • latest: automatically reset the offset to the latest offset
    • by_duration:<duration>: automatically reset the offset to a configured <duration> from the current timestamp. <duration> must be specified in ISO8601 format (PnDTnHnMn.nS). Negative duration is not allowed.
    • none: throw exception to the consumer if no previous offset is found for the consumer's group
    • anything else: throw exception to the consumer.

    Note that increasing the number of partitions while this config is set to latest may cause message delivery loss, since producers could start sending messages to newly added partitions (for which no initial offsets exist yet) before consumers reset their offsets.

    Type:string
    Default:latest
    Valid Values:[latest, earliest, none, by_duration:PnDTnHnMn.nS]
    Importance:medium
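
    As an illustration, the snippet below shows the two most common reset policies plus the duration-based form; the seven-day duration is an arbitrary example value, and only one of the settings should be active at a time.

        # Start from the oldest available record when no committed offset exists
        auto.offset.reset=earliest
        # Alternatively, resume from records at most seven days old (ISO8601 duration)
        # auto.offset.reset=by_duration:P7D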
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:medium
  • default.api.timeout.ms

    Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter.

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:medium
  • enable.auto.commit

    If true the consumer's offset will be periodically committed in the background.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
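
    A minimal sketch of the two usual offset-commit styles, assuming the consumer is configured through a properties file; auto.commit.interval.ms is documented further down in this section.

        # Periodic background commits (the default behavior)
        enable.auto.commit=true
        auto.commit.interval.ms=5000
        # Or disable auto-commit and commit from the application via commitSync()/commitAsync()
        # enable.auto.commit=false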
  • exclude.internal.topics

    Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitly subscribe to an internal topic.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
  • fetch.max.bytes

    The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.

    Type:int
    Default:52428800 (50 mebibytes)
    Valid Values:[0,...]
    Importance:medium
  • group.instance.id

    A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:medium
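
    As a sketch, static membership pairs a stable instance id with a longer session timeout so that a quick process restart does not trigger a rebalance; the id and timeout below are illustrative, and the timeout must still fall within the broker's allowed range.

        # One id per consumer process; must be unique within the consumer group
        group.instance.id=payments-consumer-1
        # Give the instance up to 5 minutes to come back before rebalancing
        session.timeout.ms=300000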
  • group.remote.assignor

    The name of the server-side assignor to use. If not specified, the group coordinator will pick the first assignor defined in the broker config group.consumer.assignors. This configuration is applied only if group.protocol is set to "consumer".

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • isolation.level

    Controls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.

    Messages will always be returned in offset order. Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in flight transactions.

    Further, when in read_committed mode, the seekToEnd method will return the LSO.

    Type:string
    Default:read_uncommitted
    Valid Values:[read_committed, read_uncommitted]
    Importance:medium
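
    For example, a consumer that must only see records from committed transactions could be configured as below.

        # Only return committed transactional records (reads up to the LSO)
        isolation.level=read_committed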
  • max.poll.interval.ms

    The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms. This mirrors the behavior of a static consumer which has shutdown.

    Type:int
    Default:300000 (5 minutes)
    Valid Values:[1,...]
    Importance:medium
  • max.poll.records

    The maximum number of records returned in a single call to poll(). Note that max.poll.records does not impact the underlying fetching behavior. The consumer caches the records from each fetch request and returns them incrementally from each poll.

    Type:int
    Default:500
    Valid Values:[1,...]
    Importance:medium
  • partition.assignment.strategy

    A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used. Available options are:

    • org.apache.kafka.clients.consumer.RangeAssignor: Assigns partitions on a per-topic basis.
    • org.apache.kafka.clients.consumer.RoundRobinAssignor: Assigns partitions to consumers in a round-robin fashion.
    • org.apache.kafka.clients.consumer.StickyAssignor: Guarantees an assignment that is maximally balanced while preserving as many existing partition assignments as possible.
    • org.apache.kafka.clients.consumer.CooperativeStickyAssignor: Follows the same StickyAssignor logic, but allows for cooperative rebalancing.

    The default assignor is [RangeAssignor, CooperativeStickyAssignor], which will use the RangeAssignor by default, but allows upgrading to the CooperativeStickyAssignor with just a single rolling bounce that removes the RangeAssignor from the list.

    Implementing the org.apache.kafka.clients.consumer.ConsumerPartitionAssignor interface allows you to plug in a custom assignment strategy.

    Type:list
    Default:class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor
    Valid Values:non-null string
    Importance:medium
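
    A sketch of the two-step upgrade described above: run first with both assignors (the default), then remove the RangeAssignor in a second rolling bounce.

        # Step 1: both assignors listed; RangeAssignor is still in effect
        partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor,org.apache.kafka.clients.consumer.CooperativeStickyAssignor
        # Step 2 (second rolling bounce): cooperative rebalancing only
        # partition.assignment.strategy=org.apache.kafka.clients.consumer.CooperativeStickyAssignor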
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:65536 (64 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
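
    As an illustration of the client-side form (no listener prefix), the snippet below authenticates a consumer with SASL/PLAIN over TLS; the username and password are placeholders, and sasl.mechanism and security.protocol are documented elsewhere in this section.

        security.protocol=SASL_SSL
        sasl.mechanism=PLAIN
        sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
            username="alice" \
            password="alice-secret";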
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers.

    Type:string
    Default:PLAINTEXT
    Valid Values:(case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3'. This means that clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most use cases. Also see the config documentation for `ssl.protocol` to understand how it can impact the TLS version negotiation behavior.

    Type:list
    Default:TLSv1.2,TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for the client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3', which should be fine for most use cases. A typical alternative to the default is 'TLSv1.2'. Allowed values for this config are dependent on the JVM. Clients using the defaults for this config and 'ssl.enabled.protocols' will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', however, clients will not use 'TLSv1.3' even if it is one of the values in `ssl.enabled.protocols` and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • auto.commit.interval.ms

    The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true.

    Type:int
    Default:5000 (5 seconds)
    Valid Values:[0,...]
    Importance:low
  • check.crcs

    Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • client.rack

    A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • enable.metrics.push

    Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • fetch.max.wait.ms

    The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes. This config is used only for local log fetch. To tune the remote fetch maximum wait time, please refer to the 'remote.fetch.max.wait.ms' broker config.

    Type:int
    Default:500
    Valid Values:[0,...]
    Importance:low
  • interceptor.classes

    A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.

    Type:list
    Default:""
    Valid Values:non-null string
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.rebootstrap.trigger.ms

    If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the brokers in the last known metadata for this interval, the client repeats the bootstrap process using the bootstrap.servers configuration.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.strategy

    Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to rebootstrap, the client repeats the bootstrap process using bootstrap.servers. Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. Rebootstrap is also triggered if a connection cannot be established to any of the brokers for metadata.recovery.rebootstrap.trigger.ms milliseconds or if the server requests a rebootstrap.

    Type:string
    Default:rebootstrap
    Valid Values:(case insensitive) [REBOOTSTRAP, NONE]
    Importance:low
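
    A sketch of the tuning suggested above for a client that talks to the cluster only occasionally; the values are illustrative rather than recommendations.

        metadata.recovery.strategy=rebootstrap
        # Re-run the bootstrap process if no known broker is reachable for 5 minutes
        metadata.recovery.rebootstrap.trigger.ms=300000
        # Back off longer between reconnect attempts, but give up on socket setup sooner
        reconnect.backoff.ms=1000
        reconnect.backoff.max.ms=10000
        socket.connection.setup.timeout.ms=5000
        socket.connection.setup.timeout.max.ms=15000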
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.

    Type:list
    Default:org.apache.kafka.common.metrics.JmxReporter
    Valid Values:non-null string
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics. It has three levels for recording metrics - info, debug, and trace.

    INFO level records only essential metrics necessary for monitoring system performance and health. It collects vital data without gathering too much detail, making it suitable for production environments where minimal overhead is desired.

    DEBUG level records most metrics, providing more detailed information about the system's operation. It's useful for development and testing environments where you need deeper insights to debug and fine-tune the application.

    TRACE level records all possible metrics, capturing every detail about the system's performance and operation. It's best for controlled environments where in-depth analysis is required, though it can introduce significant overhead.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.max.ms

    The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms, then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.header.urlencode

    The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in accordance with RFC 6749. The default value is set to 'false' for backward compatibility.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low

Kafka Connect Configs

Below is the configuration of the Kafka Connect framework.

  • config.storage.topic

    The name of the Kafka topic where connector configurations are stored

    Type:string
    Default:
    Valid Values:
    Importance:high
  • group.id

    A unique string that identifies the Connect cluster group this worker belongs to.

    Type:string
    Default:
    Valid Values:
    Importance:high
  • key.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • offset.storage.topic

    The name of the Kafka topic where source connector offsets are stored

    Type:string
    Default:
    Valid Values:
    Importance:high
  • status.storage.topic

    The name of the Kafka topic where connector and task status are stored

    Type:string
    Default:
    Valid Values:
    Importance:high
  • value.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs used to establish the initial connection to the Kafka cluster. Clients use this list to bootstrap and discover the full set of Kafka brokers. While the order of servers in the list does not matter, we recommend including more than one server to ensure resilience if any servers are down. This list does not need to contain the entire set of brokers, as Kafka clients automatically manage and update connections to the cluster efficiently. This list must be in the form host1:port1,host2:port2,....

    Type:list
    Default:localhost:9092
    Valid Values:
    Importance:high
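
    Taken together, the required settings above amount to a minimal distributed worker configuration along the following lines; the broker addresses, group id, and topic names are placeholders.

        bootstrap.servers=broker1:9092,broker2:9092
        group.id=connect-cluster
        # Internal topics for connector configs, source offsets, and connector/task status
        config.storage.topic=connect-configs
        offset.storage.topic=connect-offsets
        status.storage.topic=connect-status
        # Converters applied to record keys and values
        key.converter=org.apache.kafka.connect.json.JsonConverter
        value.converter=org.apache.kafka.connect.json.JsonConverter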
  • exactly.once.source.support

    Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and their source offsets, and by proactively fencing out old task generations before bringing up new ones.
    To enable exactly-once source support on a new cluster, set this property to 'enabled'. To enable support on an existing cluster, first set to 'preparing' on every worker in the cluster, then set to 'enabled'. A rolling upgrade may be used for both changes. For more information on this feature, see the exactly-once source support documentation.

    Type:string
    Default:disabled
    Valid Values:(case insensitive) [DISABLED, ENABLED, PREPARING]
    Importance:high
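
    The staged enablement described above could look like the following across two rolling restarts of the workers; on a brand-new cluster the 'preparing' step can be skipped and the property set directly to 'enabled'.

        # Rolling restart 1: set on every worker in the cluster
        exactly.once.source.support=preparing
        # Rolling restart 2: once all workers are running with 'preparing'
        # exactly.once.source.support=enabled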
  • heartbeat.interval.ms

    The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:high
  • rebalance.timeout.ms

    The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.

    Type:int
    Default:60000 (1 minute)
    Valid Values:
    Importance:high
  • session.timeout.ms

    The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.

    Type:int
    Default:10000 (10 seconds)
    Valid Values:
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for the client and can be used for two-way authentication of the client.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for the client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, the configured trust store file will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:medium
  • connector.client.config.override.policy

    Class name or alias of implementation of ConnectorClientConfigOverridePolicy. Defines what client configurations can be overridden by the connector. The default implementation is `All`, meaning connector configurations can override all client properties. The other possible policies in the framework include `None` to disallow connectors from overriding client properties, and `Principal` to allow connectors to override only client principals.

    Type:string
    Default:All
    Valid Values:
    Importance:medium
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:32768 (32 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:40000 (40 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here. The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers.

    Type:string
    Default:PLAINTEXT
    Valid Values:(case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3'. This means that clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most use cases. Also see the config documentation for `ssl.protocol` to understand how it can impact the TLS version negotiation behavior.

    Type:list
    Default:TLSv1.2,TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for the client. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3', which should be fine for most use cases. A typical alternative to the default is 'TLSv1.2'. Allowed values for this config are dependent on the JVM. Clients using the defaults for this config and 'ssl.enabled.protocols' will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', however, clients will not use 'TLSv1.3' even if it is one of the values in `ssl.enabled.protocols` and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • worker.sync.timeout.ms

    When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.

    Type:int
    Default:3000 (3 seconds)
    Valid Values:
    Importance:medium
  • worker.unsync.backoff.ms

    When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.

    Type:int
    Default:300000 (5 minutes)
    Valid Values:
    Importance:medium
  • access.control.allow.methods

    Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • access.control.allow.origin

    Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • admin.listeners

    List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property).

    Type:list
    Default:null
    Valid Values:List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443.
    Importance:low
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • config.providers

    Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the interface ConfigProvider allows you to replace variable references in connector configurations, such as for externalized secrets.

    Type:list
    Default:""
    Valid Values:
    Importance:low
  • config.storage.replication.factor

    Replication factor used when creating the configuration storage topic

    Type:short
    Default:3
    Valid Values:Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default
    Importance:low
  • connect.protocol

    Compatibility mode for Kafka Connect Protocol

    Type:string
    Default:sessioned
    Valid Values:[eager, compatible, sessioned]
    Importance:low
  • header.converter

    HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.

    Type:class
    Default:org.apache.kafka.connect.storage.SimpleHeaderConverter
    Valid Values:
    Importance:low
  • inter.worker.key.generation.algorithm

    The algorithm to use for generating internal request keys. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.

    Type:string
    Default:HmacSHA256
    Valid Values:Any KeyGenerator algorithm supported by the worker JVM
    Importance:low
  • inter.worker.key.size

    The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • inter.worker.key.ttl.ms

    The TTL of generated session keys used for internal request validation (in milliseconds)

    Type:int
    Default:3600000 (1 hour)
    Valid Values:[0,...,2147483647]
    Importance:low
  • inter.worker.signature.algorithm

    The algorithm used to sign internal requests. The algorithm 'HmacSHA256' will be used as a default on JVMs that support it; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.

    Type:string
    Default:HmacSHA256
    Valid Values:Any MAC algorithm supported by the worker JVM
    Importance:low
  • inter.worker.verification.algorithms

    A list of permitted algorithms for verifying internal requests, which must include the algorithm used for the inter.worker.signature.algorithm property. The algorithm(s) '[HmacSHA256]' will be used as a default on JVMs that provide them; on other JVMs, no default is used and a value for this property must be manually specified in the worker config.

    Type:list
    Default:HmacSHA256
    Valid Values:A list of one or more MAC algorithms, each supported by the worker JVM
    Importance:low
  • listeners

    List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS.
    Specify hostname as 0.0.0.0 to bind to all interfaces.
    Leave hostname empty to bind to default interface.
    Examples of legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084

    Type:list
    Default:http://:8083
    Valid Values:List of comma-separated URLs, ex: http://localhost:8080,https://localhost:8443.
    Importance:low
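
    As a sketch, the REST listener settings above (listeners and admin.listeners) could be combined as follows; the host names and ports are placeholders.

        # Serve the REST API on all interfaces over plain HTTP
        listeners=HTTP://0.0.0.0:8083
        # Expose admin endpoints on a separate listener (illustrative)
        admin.listeners=HTTP://localhost:8084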
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.rebootstrap.trigger.ms

    If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the brokers in the last known metadata for this interval, the client repeats the bootstrap process using the bootstrap.servers configuration.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.strategy

    Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to rebootstrap, the client repeats the bootstrap process using bootstrap.servers. Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. Rebootstrap is also triggered if a connection cannot be established to any of the brokers for metadata.recovery.rebootstrap.trigger.ms milliseconds or if the server requests a rebootstrap.

    Type:string
    Default:rebootstrap
    Valid Values:(case insensitive) [REBOOTSTRAP, NONE]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.

    Type:list
    Default:org.apache.kafka.common.metrics.JmxReporter
    Valid Values:
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics. It has three levels for recording metrics - info, debug, and trace.

    INFO level records only essential metrics necessary for monitoring system performance and health. It collects vital data without gathering too much detail, making it suitable for production environments where minimal overhead is desired.

    DEBUG level records most metrics, providing more detailed information about the system's operation. It's useful for development and testing environments where you need deeper insights to debug and fine-tune the application.

    TRACE level records all possible metrics, capturing every detail about the system's performance and operation. It's best for controlled environments where in-depth analysis is required, though it can introduce significant overhead.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • offset.flush.interval.ms

    Interval at which to try committing offsets for tasks.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:low
  • offset.flush.timeout.ms

    Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt. This property has no effect for source connectors running with exactly-once support.

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:low
  • offset.storage.partitions

    The number of partitions used when creating the offset storage topic

    Type:int
    Default:25
    Valid Values:Positive number, or -1 to use the broker's default
    Importance:low
  • offset.storage.replication.factor

    Replication factor used when creating the offset storage topic

    Type:short
    Default:3
    Valid Values:Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default
    Importance:low
  • plugin.discovery

    Method to use to discover plugins present in the classpath and plugin.path configuration. This can be one of multiple values with the following meanings:
    * only_scan: Discover plugins only by reflection. Plugins which are not discoverable by ServiceLoader will not impact worker startup.
    * hybrid_warn: Discover plugins reflectively and by ServiceLoader. Plugins which are not discoverable by ServiceLoader will print warnings during worker startup.
    * hybrid_fail: Discover plugins reflectively and by ServiceLoader. Plugins which are not discoverable by ServiceLoader will cause worker startup to fail.
    * service_load: Discover plugins only by ServiceLoader. Faster startup than other modes. Plugins which are not discoverable by ServiceLoader may not be usable.

    Type:string
    Default:hybrid_warn
    Valid Values:(case insensitive) [ONLY_SCAN, SERVICE_LOAD, HYBRID_WARN, HYBRID_FAIL]
    Importance:low
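
    For context (the connector class and package names are hypothetical), ServiceLoader-based discovery relies on the standard Java provider-configuration files shipped inside the plugin artifact: the file is named after the plugin interface and lists the implementing class(es). For example:

        # File inside the plugin JAR:
        #   META-INF/services/org.apache.kafka.connect.source.SourceConnector
        com.example.MySourceConnector

    Plugins that lack such manifests are still found by the reflective scan in only_scan and the hybrid modes, but not by service_load.
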
  • plugin.path

    List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of:
    a) directories immediately containing jars with plugins and their dependencies
    b) uber-jars with plugins and their dependencies
    c) directories immediately containing the package directory structure of classes of plugins and their dependencies
    Note: symlinks will be followed to discover dependencies or plugins.
    Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors
    Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • response.http.headers.config

    Rules for REST API HTTP response headers

    Type:string
    Default:""
    Valid Values:Comma-separated header rules, where each header rule is of the form '[action] [header name]:[header value]' and optionally surrounded by double quotes if any part of a header rule contains a comma
    Importance:low
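
    For illustration only (the header names and values are example choices, not recommendations), a rule set following the format above might look like:

        response.http.headers.config=add X-Frame-Options:DENY, "add Cache-Control:no-cache, no-store, must-revalidate"

    The second rule is wrapped in double quotes because its header value itself contains commas.
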
  • rest.advertised.host.name

    If this is set, this is the hostname that will be given out to other workers to connect to.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • rest.advertised.listener

    Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • rest.advertised.port

    If this is set, this is the port that will be given out to other workers to connect to.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • rest.extension.classes

    Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing the interface ConnectRestExtension allows you to inject into Connect's REST API user defined resources like filters. Typically used to add custom capability like logging, security, etc.

    Type:list
    Default:""
    Valid Values:
    Importance:low
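
    As a sketch (the class name and the filter it would register are hypothetical), a minimal ConnectRestExtension implementation has roughly this shape; it is activated by listing its fully qualified class name in rest.extension.classes:

        import java.io.IOException;
        import java.util.Map;

        import org.apache.kafka.connect.rest.ConnectRestExtension;
        import org.apache.kafka.connect.rest.ConnectRestExtensionContext;

        // Hypothetical extension that could register custom resources or filters
        // (e.g. for auditing or authentication) with Connect's REST API.
        public class AuditRestExtension implements ConnectRestExtension {

            @Override
            public void configure(Map<String, ?> configs) {
                // Receives the worker configuration.
            }

            @Override
            public void register(ConnectRestExtensionContext restPluginContext) {
                // Register resources or filters here, e.g. (AuditFilter is hypothetical):
                // restPluginContext.configurable().register(new AuditFilter());
            }

            @Override
            public void close() throws IOException {
                // Clean up any resources created by this extension.
            }

            @Override
            public String version() {
                return "1.0";
            }
        }
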
  • retry.backoff.max.ms

    The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms, then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.header.urlencode

    The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in accordance with RFC 6749. The default value is set to 'false' for backward compatibility.


    Type:boolean
    Default:false
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • scheduled.rebalance.max.delay.ms

    The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned

    Type:int
    Default:300000 (5 minutes)
    Valid Values:[0,...,2147483647]
    Importance:low
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:[0,...]
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.client.auth

    Configures the Kafka broker to request client authentication. The following settings are common:

    • ssl.client.auth=required If set to required, client authentication is required.
    • ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set, the client can choose not to provide authentication information about itself.
    • ssl.client.auth=none This means client authentication is not needed.

    Type:string
    Default:none
    Valid Values:[required, requested, none]
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low
  • status.storage.partitions

    The number of partitions used when creating the status storage topic

    Type:int
    Default:5
    Valid Values:Positive number, or -1 to use the broker's default
    Importance:low
  • status.storage.replication.factor

    Replication factor used when creating the status storage topic

    Type:short
    Default:3
    Valid Values:Positive number not larger than the number of brokers in the Kafka cluster, or -1 to use the broker's default
    Importance:low
  • task.shutdown.graceful.timeout.ms

    Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. Shutdown is triggered for all tasks, and then they are waited on sequentially.

    Type:long
    Default:5000 (5 seconds)
    Valid Values:
    Importance:low
  • topic.creation.enable

    Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with `topic.creation.` properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • topic.tracking.allow.reset

    If set to true, it allows user requests to reset the set of active topics per connector.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • topic.tracking.enable

    Enable tracking the set of active topics per connector during runtime.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low

Source Connector Configs

Below is the configuration of a source connector.
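
For orientation, a minimal configuration for the FileStream example source connector could look like the following (the file and topic properties are specific to that connector, and all values are placeholders):

    name=local-file-source
    connector.class=FileStreamSource
    tasks.max=1
    file=/tmp/test.txt
    topic=connect-test

Such a configuration is typically supplied as a properties file in standalone mode or as JSON via the REST API in distributed mode.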

  • name

    Globally unique name to use for this connector.

    Type:string
    Default:
    Valid Values:non-empty string without ISO control characters
    Importance:high
  • connector.class

    Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter

    Type:string
    Default:
    Valid Values:
    Importance:high
  • tasks.max

    Maximum number of tasks to use for this connector.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
  • tasks.max.enforce

    (Deprecated) Whether to enforce that the tasks.max property is respected by the connector. By default, connectors that generate too many tasks will fail, and existing sets of tasks that exceed the tasks.max property will also be failed. If this property is set to false, then connectors will be allowed to generate more than the maximum number of tasks, and existing sets of tasks that exceed the tasks.max property will be allowed to run. This property is deprecated and will be removed in an upcoming major release.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • key.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor
    Importance:low
  • value.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor
    Importance:low
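
    As a sketch of what sits behind key.converter and value.converter (the class name is hypothetical; the behaviour loosely mirrors the built-in StringConverter), a custom implementation of org.apache.kafka.connect.storage.Converter might look like this:

        import java.nio.charset.StandardCharsets;
        import java.util.Map;

        import org.apache.kafka.connect.data.Schema;
        import org.apache.kafka.connect.data.SchemaAndValue;
        import org.apache.kafka.connect.storage.Converter;

        // Hypothetical converter that (de)serializes data as UTF-8 strings.
        public class Utf8StringConverter implements Converter {

            @Override
            public void configure(Map<String, ?> configs, boolean isKey) {
                // isKey tells the converter whether it handles record keys or values.
            }

            @Override
            public byte[] fromConnectData(String topic, Schema schema, Object value) {
                return value == null ? null : value.toString().getBytes(StandardCharsets.UTF_8);
            }

            @Override
            public SchemaAndValue toConnectData(String topic, byte[] value) {
                String text = value == null ? null : new String(value, StandardCharsets.UTF_8);
                return new SchemaAndValue(Schema.OPTIONAL_STRING_SCHEMA, text);
            }
        }
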
  • header.converter

    HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.HeaderConverter, A class with a public, no-argument constructor
    Importance:low
  • config.action.reload

    The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.

    Type:string
    Default:restart
    Valid Values:[none, restart]
    Importance:low
  • transforms

    Aliases for the transformations to be applied to records.

    Type:list
    Default:""
    Valid Values:non-null string, unique transformation aliases
    Importance:low
  • predicates

    Aliases for the predicates used by transformations.

    Type:list
    Default:""
    Valid Values:non-null string, unique predicate aliases
    Importance:low
  • errors.retry.timeout

    The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
  • errors.retry.delay.max.ms

    The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:medium
  • errors.tolerance

    Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.

    Type:string
    Default:none
    Valid Values:[none, all]
    Importance:medium
  • errors.log.enable

    If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • errors.log.include.messages

    Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and timestamp will be logged. For source records, the key and value (and their schemas), all headers, and the timestamp, Kafka topic, Kafka partition, source partition, and source offset will be logged. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • topic.creation.groups

    Groups of configurations for topics created by source connectors

    Type:list
    Default:""
    Valid Values:non-null string, unique topic creation groups
    Importance:low
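
    As a sketch of how a group is defined (the group alias, inclusion pattern, and sizing values are illustrative), a source connector configuration might contain:

        topic.creation.groups=compacted
        topic.creation.default.replication.factor=3
        topic.creation.default.partitions=5
        topic.creation.compacted.include=compacted.*
        topic.creation.compacted.cleanup.policy=compact

    Topics whose names match a group's include pattern are created with that group's settings; everything else falls back to the default group. The worker-level topic.creation.enable property must remain true for any of these rules to apply.
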
  • exactly.once.support

    Permitted values are requested, required. If set to "required", forces a preflight check for the connector to ensure that it can provide exactly-once semantics with the given configuration. Some connectors may be capable of providing exactly-once semantics but not signal to Connect that they support this; in that case, documentation for the connector should be consulted carefully before creating it, and the value for this property should be set to "requested". Additionally, if the value is set to "required" but the worker that performs preflight validation does not have exactly-once support enabled for source connectors, requests to create or validate the connector will fail.

    Type:string
    Default:requested
    Valid Values:(case insensitive) [REQUIRED, REQUESTED]
    Importance:medium
  • transaction.boundary

    Permitted values are: poll, interval, connector. If set to 'poll', a new producer transaction will be started and committed for every batch of records that each task from this connector provides to Connect. If set to 'connector', relies on connector-defined transaction boundaries; note that not all connectors are capable of defining their own transaction boundaries, and in that case, attempts to instantiate a connector with this value will fail. Finally, if set to 'interval', commits transactions only after a user-defined time interval has passed.

    Type:string
    Default:poll
    Valid Values:(case insensitive) [INTERVAL, POLL, CONNECTOR]
    Importance:medium
  • transaction.boundary.interval.ms

    If 'transaction.boundary' is set to 'interval', determines the interval for producer transaction commits by connector tasks. If unset, defaults to the value of the worker-level 'offset.flush.interval.ms' property. It has no effect if a different transaction.boundary is specified.

    Type:long
    Default:null
    Valid Values:[0,...]
    Importance:low
  • offsets.storage.topic

    The name of a separate offsets topic to use for this connector. If empty or not specified, the worker’s global offsets topic name will be used. If specified, the offsets topic will be created if it does not already exist on the Kafka cluster targeted by this connector (which may be different from the one used for the worker's global offsets topic if the bootstrap.servers property of the connector's producer has been overridden from the worker's). Only applicable in distributed mode; in standalone mode, setting this property will have no effect.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:low

Sink Connector Configs

Below is the configuration of a sink connector.

  • name

    Globally unique name to use for this connector.

    Type:string
    Default:
    Valid Values:non-empty string without ISO control characters
    Importance:high
  • connector.class

    Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter

    Type:string
    Default:
    Valid Values:
    Importance:high
  • tasks.max

    Maximum number of tasks to use for this connector.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
  • topics

    List of topics to consume, separated by commas

    Type:list
    Default:""
    Valid Values:
    Importance:high
  • topics.regex

    Regular expression giving topics to consume. Under the hood, the regex is compiled to a java.util.regex.Pattern. Only one of topics or topics.regex should be specified.

    Type:string
    Default:""
    Valid Values:valid regex
    Importance:high
  • tasks.max.enforce

    (Deprecated) Whether to enforce that the tasks.max property is respected by the connector. By default, connectors that generate too many tasks will fail, and existing sets of tasks that exceed the tasks.max property will also be failed. If this property is set to false, then connectors will be allowed to generate more than the maximum number of tasks, and existing sets of tasks that exceed the tasks.max property will be allowed to run. This property is deprecated and will be removed in an upcoming major release.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • key.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor
    Importance:low
  • value.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor
    Importance:low
  • header.converter

    HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.HeaderConverter, A class with a public, no-argument constructor
    Importance:low
  • config.action.reload

    The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.

    Type:string
    Default:restart
    Valid Values:[none, restart]
    Importance:low
  • transforms

    Aliases for the transformations to be applied to records.

    Type:list
    Default:""
    Valid Values:non-null string, unique transformation aliases
    Importance:low
  • predicates

    Aliases for the predicates used by transformations.

    Type:list
    Default:""
    Valid Values:non-null string, unique predicate aliases
    Importance:low
  • errors.retry.timeout

    The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
  • errors.retry.delay.max.ms

    The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:medium
  • errors.tolerance

    Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.

    Type:string
    Default:none
    Valid Values:[none, all]
    Importance:medium
  • errors.log.enable

    If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • errors.log.include.messages

    Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and timestamp will be logged. For source records, the key and value (and their schemas), all headers, and the timestamp, Kafka topic, Kafka partition, source partition, and source offset will be logged. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • errors.deadletterqueue.topic.name

    The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. The topic name is blank by default, which means that no messages are to be recorded in the DLQ.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • errors.deadletterqueue.topic.replication.factor

    Replication factor used to create the dead letter queue topic when it doesn't already exist.

    Type:short
    Default:3
    Valid Values:
    Importance:medium
  • errors.deadletterqueue.context.headers.enable

    If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers from the original record, all error context header keys will start with __connect.errors.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
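
    Putting the error-handling properties above together (the topic name and values are example choices; every property name appears in this list), a sink connector that tolerates bad records and routes them to a dead letter queue might include:

        errors.tolerance=all
        errors.deadletterqueue.topic.name=dlq-my-sink
        errors.deadletterqueue.topic.replication.factor=3
        errors.deadletterqueue.context.headers.enable=true
        errors.log.enable=true
        errors.log.include.messages=true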

Kafka Streams Configs

Below is the configuration of the Kafka Streams client library.

  • application.id

    An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix.

    Type:string
    Default:
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs used to establish the initial connection to the Kafka cluster. Clients use this list to bootstrap and discover the full set of Kafka brokers. While the order of servers in the list does not matter, we recommend including more than one server to ensure resilience if any servers are down. This list does not need to contain the entire set of brokers, as Kafka clients automatically manage and update connections to the cluster efficiently. This list must be in the form host1:port1,host2:port2,....

    Type:list
    Default:
    Valid Values:
    Importance:high
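
    As a minimal sketch of how the two required settings above are supplied in code (the application id, broker addresses, and topology are placeholders):

        import java.util.Properties;

        import org.apache.kafka.streams.KafkaStreams;
        import org.apache.kafka.streams.StreamsConfig;
        import org.apache.kafka.streams.Topology;

        public class StreamsConfigExample {
            // Builds the configuration and starts a KafkaStreams instance for the given topology.
            public static KafkaStreams start(Topology topology) {
                Properties props = new Properties();
                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
                props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
                KafkaStreams streams = new KafkaStreams(topology, props);
                streams.start();
                return streams;
            }
        }
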
  • num.standby.replicas

    The number of standby replicas for each task.

    Type:int
    Default:0
    Valid Values:
    Importance:high
  • state.dir

    Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem. Note that if not configured, then the default location will be different in each environment as it is computed using System.getProperty("java.io.tmpdir")

    Type:string
    Default:${java.io.tmpdir}
    Valid Values:
    Importance:high
  • acceptable.recovery.lag

    The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active task assignment. Upon assignment, it will still restore the rest of the changelog before processing. To avoid a pause in processing during rebalances, this config should correspond to a recovery time of well under a minute for a given workload. Must be at least 0.

    Type:long
    Default:10000
    Valid Values:[0,...]
    Importance:medium
  • cache.max.bytes.buffering

    Maximum number of memory bytes to be used for buffering across all threads

    Type:long
    Default:10485760
    Valid Values:[0,...]
    Importance:medium
  • client.id

    An ID prefix string used for the client IDs of internal (main, restore, and global) consumers, producers, and admin clients with pattern <client.id>-[Global]StreamThread[-<threadSequenceNumber>]-<consumer|producer|restore-consumer|global-consumer>.

    Type:string
    Default:<application.id>-<random-UUID>
    Valid Values:
    Importance:medium
  • default.deserialization.exception.handler

    Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface.

    Type:class
    Default:org.apache.kafka.streams.errors.LogAndFailExceptionHandler
    Valid Values:
    Importance:medium
  • default.key.serde

    Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.key.serde.inner

    Default inner class of list serde for key that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.key.serde.type

    Default class for key that implements the java.util.List interface. This configuration will be read if and only if default.key.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde. Note that when the list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.key.serde.inner'

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.value.serde.inner

    Default inner class of list serde for value that implements the org.apache.kafka.common.serialization.Serde interface. This configuration will be read if and only if default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.list.value.serde.type

    Default class for value that implements the java.util.List interface. This configuration will be read if and only if default.value.serde configuration is set to org.apache.kafka.common.serialization.Serdes.ListSerde. Note that when the list serde class is used, one needs to set the inner serde class that implements the org.apache.kafka.common.serialization.Serde interface via 'default.list.value.serde.inner'

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • default.production.exception.handler

    Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface.

    Type:class
    Default:org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
    Valid Values:
    Importance:medium
  • default.timestamp.extractor

    Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface.

    Type:class
    Default:org.apache.kafka.streams.processor.FailOnInvalidTimestamp
    Valid Values:
    Importance:medium
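
    As a sketch of a custom extractor (the class name and fallback policy are hypothetical), an implementation of org.apache.kafka.streams.processor.TimestampExtractor that tolerates invalid timestamps instead of failing could look like this; it would be enabled by setting default.timestamp.extractor to its fully qualified class name:

        import org.apache.kafka.clients.consumer.ConsumerRecord;
        import org.apache.kafka.streams.processor.TimestampExtractor;

        public class LenientTimestampExtractor implements TimestampExtractor {

            @Override
            public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
                long timestamp = record.timestamp();
                // Use the record's own timestamp when valid; otherwise fall back to the
                // highest timestamp observed so far on this partition.
                return timestamp >= 0 ? timestamp : partitionTime;
            }
        }
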
  • default.value.serde

    Default serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • deserialization.exception.handler

    Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface.

    Type:class
    Default:org.apache.kafka.streams.errors.LogAndFailExceptionHandler
    Valid Values:
    Importance:medium
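
    For example, to log and skip corrupt records instead of failing (a sketch; the property key is the one above and LogAndContinueExceptionHandler is the built-in alternative to the default handler):

        import java.util.Properties;

        import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

        public class LenientDeserializationConfig {
            // Adds the handler setting to an existing Streams configuration.
            public static Properties withLenientDeserialization(Properties props) {
                props.put("deserialization.exception.handler", LogAndContinueExceptionHandler.class.getName());
                return props;
            }
        }
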
  • max.task.idle.ms

    This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in milliseconds a stream task will stay idle when it is fully caught up on some (but not all) input partitions to wait for producers to send additional records and avoid potential out-of-order record processing across multiple input streams. The default (zero) does not wait for producers to send more records, but it does wait to fetch data that is already present on the brokers. This default means that for records that are already present on the brokers, Streams will process them in timestamp order. Set to -1 to disable idling entirely and process any locally available data, even though doing so may produce out-of-order processing.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
  • max.warmup.replicas

    The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. Must be at least 1. Note that one warmup replica corresponds to one Stream Task. Furthermore, note that each warmup replica can only be promoted to an active task during a rebalance (normally during a so-called probing rebalance, which occurs at a frequency specified by the `probing.rebalance.interval.ms` config). This means that the maximum rate at which active tasks can be migrated from one Kafka Streams instance to another can be determined by (`max.warmup.replicas` / `probing.rebalance.interval.ms`).

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:medium
  • num.stream.threads

    The number of threads to execute stream processing.

    Type:int
    Default:1
    Valid Values:
    Importance:medium
  • processing.exception.handler

    Exception handling class that implements the org.apache.kafka.streams.errors.ProcessingExceptionHandler interface.

    Type:class
    Default:org.apache.kafka.streams.errors.LogAndFailProcessingExceptionHandler
    Valid Values:
    Importance:medium
  • processing.guarantee

    The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers version 2.5 or higher). Note that exactly-once processing requires a cluster of at least three brokers by default, which is the recommended setting for production; for development you can change this by adjusting the broker settings transaction.state.log.replication.factor and transaction.state.log.min.isr.

    Type:string
    Default:at_least_once
    Valid Values:[at_least_once, exactly_once_v2]
    Importance:medium
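
    A minimal sketch of switching a Streams application to exactly-once semantics via the StreamsConfig constants (the broker-side requirements from the note above still apply):

        import java.util.Properties;

        import org.apache.kafka.streams.StreamsConfig;

        public class ExactlyOnceConfig {
            // Overrides the default at_least_once guarantee with exactly_once_v2.
            public static Properties enableExactlyOnce(Properties props) {
                props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
                return props;
            }
        }
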
  • production.exception.handler

    Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface.

    Type:class
    Default:org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
    Valid Values:
    Importance:medium
  • replication.factor

    The replication factor for change log topics and repartition topics created by the stream processing application. The default of -1 (meaning: use broker default replication factor) requires broker version 2.4 or newer

    Type:int
    Default:-1
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers.

    Type:string
    Default:PLAINTEXT
    Valid Values:(case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT]
    Importance:medium
  • statestore.cache.max.bytes

    Maximum number of memory bytes to be used for statestore cache across all threads

    Type:long
    Default:10485760 (10 mebibytes)
    Valid Values:[0,...]
    Importance:medium
  • task.assignor.class

    A task assignor class or class name implementing the org.apache.kafka.streams.processor.assignment.TaskAssignor interface. Defaults to the HighAvailabilityTaskAssignor class.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • task.timeout.ms

    The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a timeout of 0ms, a task would raise an error for the first internal error. For any timeout larger than 0ms, a task will retry at least once before an error is raised.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:medium
  • topology.optimization

    A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: "NO_OPTIMIZATION", "OPTIMIZE", or a comma-separated list of specific optimizations: "REUSE_KTABLE_SOURCE_TOPICS", "MERGE_REPARTITION_TOPICS", "SINGLE_STORE_SELF_JOIN". "NO_OPTIMIZATION" is the default.

    Type:string
    Default:none
    Valid Values:[all, none, reuse.ktable.source.topics, merge.repartition.topics, single.store.self.join]
    Importance:medium
  • application.server

    A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this KafkaStreams instance.

    Type:string
    Default:""
    Valid Values:
    Importance:low
  • buffered.records.per.partition

    Maximum number of records to buffer per partition.

    Type:int
    Default:1000
    Valid Values:
    Importance:low
  • built.in.metrics.version

    Version of the built-in metrics to use.

    Type:string
    Default:latest
    Valid Values:[latest]
    Importance:low
  • commit.interval.ms

    The frequency in milliseconds with which to commit processing progress. For at-least-once processing, committing means to save the position (i.e., offsets) of the processor. For exactly-once processing, it means to commit the transaction, which includes saving the position and making the committed data in the output topic visible to consumers with isolation level read_committed. (Note: if processing.guarantee is set to exactly_once_v2, the default value is 100, otherwise the default value is 30000.)

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:540000 (9 minutes)
    Valid Values:
    Importance:low
  • default.client.supplier

    Client supplier class that implements the org.apache.kafka.streams.KafkaClientSupplier interface.

    Type:class
    Default:org.apache.kafka.streams.processor.internals.DefaultKafkaClientSupplier
    Valid Values:
    Importance:low
  • default.dsl.store

    The default state store type used by DSL operators.

    Type:string
    Default:rocksDB
    Valid Values:[rocksDB, in_memory]
    Importance:low
  • dsl.store.suppliers.class

    Defines which store implementations to plug in to DSL operators. Must implement the org.apache.kafka.streams.state.DslStoreSuppliers interface.

    Type:class
    Default:org.apache.kafka.streams.state.BuiltInDslStoreSuppliers$RocksDBDslStoreSuppliers
    Valid Values:
    Importance:low
  • enable.metrics.push

    Whether to enable pushing of internal client metrics for (main, restore, and global) consumers, producers, and admin clients. The cluster must have a client metrics subscription which corresponds to a client.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • log.summary.interval.ms

    The output interval in milliseconds for logging summary information.
    If greater than or equal to 0, the summary log will be output according to the set time interval;
    if less than 0, summary output is disabled.

    Type:long
    Default:120000 (2 minutes)
    Valid Values:
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.rebootstrap.trigger.ms

    If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the brokers in the last known metadata for this interval, client repeats the bootstrap process using bootstrap.servers configuration.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.strategy

    Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to rebootstrap, the client repeats the bootstrap process using bootstrap.servers. Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in-progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. Rebootstrap is also triggered if connection cannot be established to any of the brokers for metadata.recovery.rebootstrap.trigger.ms milliseconds or if server requests rebootstrap.

    Type:string
    Default:rebootstrap
    Valid Values:(case insensitive) [REBOOTSTRAP, NONE]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.

    Type:list
    Default:org.apache.kafka.common.metrics.JmxReporter
    Valid Values:
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics. It has three levels for recording metrics - info, debug, and trace.

    INFO level records only essential metrics necessary for monitoring system performance and health. It collects vital data without gathering too much detail, making it suitable for production environments where minimal overhead is desired.

    DEBUG level records most metrics, providing more detailed information about the system's operation. It's useful for development and testing environments where you need deeper insights to debug and fine-tune the application.

    TRACE level records all possible metrics, capturing every detail about the system's performance and operation. It's best for controlled environments where in-depth analysis is required, though it can introduce significant overhead.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • poll.ms

    The amount of time in milliseconds to block waiting for input.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • probing.rebalance.interval.ms

    The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up and are ready to become active. Probing rebalances will continue to be triggered until the assignment is balanced. Must be at least 1 minute.

    Type:long
    Default:600000 (10 minutes)
    Valid Values:[60000,...]
    Importance:low
  • processor.wrapper.class

    A processor wrapper class or class name that implements the org.apache.kafka.streams.state.ProcessorWrapper interface. Must be passed in to the StreamsBuilder or Topology constructor in order to take effect

    Type:class
    Default:org.apache.kafka.streams.processor.internals.NoOpProcessorWrapper
    Valid Values:
    Importance:low
  • rack.aware.assignment.non_overlap_cost

    Cost associated with moving tasks from the existing assignment. This config and rack.aware.assignment.traffic_cost control whether the optimization algorithm favors minimizing cross-rack traffic or minimizing the movement of tasks in the existing assignment. If set to a larger value, org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor will optimize to maintain the existing assignment. The default value is null, which means it will use the default non_overlap cost values in different assignors.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • rack.aware.assignment.strategy

    The strategy we use for rack-aware assignment. Rack-aware assignment will take client.rack and racks of TopicPartition into account when assigning tasks to minimize cross-rack traffic. Valid settings are: none (default), which disables rack-aware assignment; min_traffic, which computes an assignment with minimum cross-rack traffic; balance_subtopology, which computes an assignment with minimum cross-rack traffic and tries to balance the tasks of the same subtopologies across different clients.

    Type:string
    Default:none
    Valid Values:[none, min_traffic, balance_subtopology]
    Importance:low
  • rack.aware.assignment.tags

    List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will make a best-effort to distribute the standby tasks over each client tag dimension.

    Type:list
    Default:""
    Valid Values:List containing maximum of 5 elements
    Importance:low
  • rack.aware.assignment.traffic_cost

    Cost associated with cross-rack traffic. This config and rack.aware.assignment.non_overlap_cost control whether the optimization algorithm favors minimizing cross-rack traffic or minimizing the movement of tasks in the existing assignment. If set to a larger value, org.apache.kafka.streams.processor.internals.assignment.RackAwareTaskAssignor will optimize for minimizing cross-rack traffic. The default value is null, which means it will use the default traffic cost values in different assignors.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:32768 (32 kibibytes)
    Valid Values:[-1,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • repartition.purge.interval.ms

    The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at least this amount of time has elapsed since the last purge, but may be delayed until later. (Note: unlike commit.interval.ms, the default for this value remains unchanged when processing.guarantee is set to exactly_once_v2.)

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:40000 (40 seconds)
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • rocksdb.config.setter

    A RocksDB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface. A minimal example appears at the end of this list.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:low
  • state.cleanup.delay.ms

    The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least state.cleanup.delay.ms will be removed

    Type:long
    Default:600000 (10 minutes)
    Valid Values:
    Importance:low
  • upgrade.from

    Allows live upgrading (and downgrading in some cases -- see upgrade guide) in a backward compatible way. Default is `null`. Please refer to the Kafka Streams upgrade guide for instructions on how and when to use this config. Note that when upgrading from 3.5 to a newer version it is never required to specify this config, while upgrading live directly to 4.0+ from 2.3 or below is no longer supported even with this config. Accepted values are "2.4", "2.5", "2.6", "2.7", "2.8", "3.0", "3.1", "3.2", "3.3", "3.4", "3.5", "3.6", "3.7", "3.8", "3.9" (for upgrading from the corresponding old version).

    Type:string
    Default:null
    Valid Values:[null, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9]
    Importance:low
  • window.size.ms

    Sets the window size for the deserializer in order to calculate window end times.

    Type:long
    Default:null
    Valid Values:
    Importance:low
  • windowed.inner.class.serde

    Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common.serialization.Serde interface. Note that setting this config in a Kafka Streams application results in an error, as it is meant to be used only by the plain consumer client.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • windowstore.changelog.additional.retention.ms

    Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day.

    Type:long
    Default:86400000 (1 day)
    Valid Values:
    Importance:low
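
As an illustration of the rocksdb.config.setter entry above, the following is a minimal sketch of a custom setter. It assumes the RocksDB classes bundled with Kafka Streams (org.rocksdb.Options, org.rocksdb.BlockBasedTableConfig); the tuning values are purely illustrative, not recommendations.

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.Options;

    public class CustomRocksDBConfig implements RocksDBConfigSetter {

        @Override
        public void setConfig(final String storeName, final Options options,
                              final Map<String, Object> configs) {
            // Illustrative values only: shrink the per-store write buffer and block size.
            options.setWriteBufferSize(16 * 1024 * 1024L);
            // Reuse the existing table format config rather than creating a new one.
            final BlockBasedTableConfig tableConfig =
                (BlockBasedTableConfig) options.tableFormatConfig();
            tableConfig.setBlockSize(16 * 1024L);
            options.setTableFormatConfig(tableConfig);
        }

        @Override
        public void close(final String storeName, final Options options) {
            // Nothing to release in this sketch; close any RocksDB objects created in setConfig here.
        }
    }

The class is then registered through the config, e.g. props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class).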

Admin Configs

Below is the configuration of the Kafka Admin client library.
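
To show how these settings are applied in code, the following minimal sketch constructs an Admin client; the bootstrap addresses and client id are placeholders, and the request timeout shown simply restates the default.

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    public class AdminClientExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Placeholder addresses; list more than one broker for resilience.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "example-admin");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");

            try (Admin admin = Admin.create(props)) {
                // A simple metadata round trip to confirm the client can reach the cluster.
                System.out.println(admin.listTopics().names().get());
            }
        }
    }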

  • bootstrap.controllers

    A list of host/port pairs to use for establishing the initial connection to the KRaft controller quorum. This list should be in the form host1:port1,host2:port2,....

    Type:list
    Default:""
    Valid Values:
    Importance:high
  • bootstrap.servers

    A list of host/port pairs used to establish the initial connection to the Kafka cluster. Clients use this list to bootstrap and discover the full set of Kafka brokers. While the order of servers in the list does not matter, we recommend including more than one server to ensure resilience if any servers are down. This list does not need to contain the entire set of brokers, as Kafka clients automatically manage and update connections to the cluster efficiently. This list must be in the form host1:port1,host2:port2,....

    Type:list
    Default:""
    Valid Values:
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for clients and can be used for two-way client authentication.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for clients and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, the configured trust store file will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • client.dns.lookup

    Controls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips.

    Type:string
    Default:use_all_dns_ips
    Valid Values:[use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
    Importance:medium
  • client.id

    An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

    Type:string
    Default:""
    Valid Values:
    Importance:medium
  • connections.max.idle.ms

    Close idle connections after the number of milliseconds specified by this config.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:
    Importance:medium
  • default.api.timeout.ms

    Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter.

    Type:int
    Default:60000 (1 minute)
    Valid Values:[0,...]
    Importance:medium
  • receive.buffer.bytes

    The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.

    Type:int
    Default:65536 (64 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • request.timeout.ms

    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    Type:int
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:medium
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections, in the format used by JAAS configuration files (see the Java SE JAAS documentation for that file format). The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required; A client-side example appears at the end of this list.

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers.

    Type:string
    Default:PLAINTEXT
    Valid Values:(case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT]
    Importance:medium
  • send.buffer.bytes

    The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.

    Type:int
    Default:131072 (128 kibibytes)
    Valid Values:[-1,...]
    Importance:medium
  • socket.connection.setup.timeout.max.ms

    The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:
    Importance:medium
  • socket.connection.setup.timeout.ms

    The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3'. This means that clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most use cases. Also see the config documentation for `ssl.protocol` to understand how it can impact the TLS version negotiation behavior.

    Type:list
    Default:TLSv1.2,TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for clients. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3', which should be fine for most use cases. A typical alternative to the default is 'TLSv1.2'. Allowed values for this config are dependent on the JVM. Clients using the defaults for this config and 'ssl.enabled.protocols' will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', however, clients will not use 'TLSv1.3' even if it is one of the values in `ssl.enabled.protocols` and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • enable.metrics.push

    Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
  • metadata.max.age.ms

    The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.rebootstrap.trigger.ms

    If a client configured to rebootstrap using metadata.recovery.strategy=rebootstrap is unable to obtain metadata from any of the brokers in the last known metadata for this interval, the client repeats the bootstrap process using the bootstrap.servers configuration.

    Type:long
    Default:300000 (5 minutes)
    Valid Values:[0,...]
    Importance:low
  • metadata.recovery.strategy

    Controls how the client recovers when none of the brokers known to it is available. If set to none, the client fails. If set to rebootstrap, the client repeats the bootstrap process using bootstrap.servers. Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in-progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. Rebootstrap is also triggered if connection cannot be established to any of the brokers for metadata.recovery.rebootstrap.trigger.ms milliseconds or if server requests rebootstrap.

    Type:string
    Default:rebootstrap
    Valid Values:(case insensitive) [REBOOTSTRAP, NONE]
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.

    Type:list
    Default:org.apache.kafka.common.metrics.JmxReporter
    Valid Values:
    Importance:low
  • metrics.num.samples

    The number of samples maintained to compute metrics.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:low
  • metrics.recording.level

    The highest recording level for metrics. It has three levels for recording metrics - info, debug, and trace.

    INFO level records only essential metrics necessary for monitoring system performance and health. It collects vital data without gathering too much detail, making it suitable for production environments where minimal overhead is desired.

    DEBUG level records most metrics, providing more detailed information about the system's operation. It's useful for development and testing environments where you need deeper insights to debug and fine-tune the application.

    TRACE level records all possible metrics, capturing every detail about the system's performance and operation. It's best for controlled environments where in-depth analysis is required, though it can introduce significant overhead.

    Type:string
    Default:INFO
    Valid Values:[INFO, DEBUG, TRACE]
    Importance:low
  • metrics.sample.window.ms

    The window of time a metrics sample is computed over.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.max.ms

    The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • reconnect.backoff.ms

    The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value.

    Type:long
    Default:50
    Valid Values:[0,...]
    Importance:low
  • retries

    Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is recommended to set the value to either zero or `MAX_VALUE` and use corresponding timeout parameters to control how long a client should retry a request.

    Type:int
    Default:2147483647
    Valid Values:[0,...,2147483647]
    Importance:low
  • retry.backoff.max.ms

    The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms, then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase

    Type:long
    Default:1000 (1 second)
    Valid Values:[0,...]
    Importance:low
  • retry.backoff.ms

    The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.header.urlencode

    The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in accordance with RFC 6749. The default value is set to 'false' for backward compatibility.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • security.providers

    A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low
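
The security-related settings above (security.protocol, sasl.mechanism, sasl.jaas.config and the ssl.truststore.* properties) are typically combined along the lines of the following sketch, which assumes a broker listener using SASL_SSL with the PLAIN mechanism; the host name, credentials and trust store path are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.config.SslConfigs;

    public class SecureAdminClientExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093"); // placeholder
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials; in a real deployment these come from a secret store.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"alice\" password=\"alice-secret\";");
            props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks"); // placeholder
            props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "truststore-password");     // placeholder

            try (Admin admin = Admin.create(props)) {
                // Fetch the cluster id to verify that authentication and TLS are set up correctly.
                System.out.println(admin.describeCluster().clusterId().get());
            }
        }
    }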

MirrorMaker Configs

Below is the configuration of the connectors that make up MirrorMaker 2.

MirrorMaker Common Configs

Below are the common configuration properties that apply to all three connectors.
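
To show how the common properties fit together, the fragment below sketches part of a configuration file for the dedicated MirrorMaker 2 driver (connect-mirror-maker.sh). The cluster aliases primary and backup, the broker addresses and the chosen flows are placeholders, not defaults.

    # Hypothetical cluster aliases and addresses; replace with your own.
    clusters = primary, backup
    primary.bootstrap.servers = primary-broker:9092
    backup.bootstrap.servers = backup-broker:9092

    # Enable one-way replication from primary to backup.
    primary->backup.enabled = true
    backup->primary.enabled = false

    # Remote topics on the backup cluster are then named primary.<topic>.
    primary->backup.replication.policy.separator = .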

  • source.cluster.alias

    Alias of source cluster

    Type:string
    Default:
    Valid Values:
    Importance:high
  • ssl.key.password

    The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.certificate.chain

    Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.key

    Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.location

    The location of the key store file. This is optional for clients and can be used for two-way client authentication.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.keystore.password

    The store password for the key store file. This is optional for clients and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.certificates

    Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.location

    The location of the trust store file.

    Type:string
    Default:null
    Valid Values:
    Importance:high
  • ssl.truststore.password

    The password for the trust store file. If a password is not set, the configured trust store file will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format.

    Type:password
    Default:null
    Valid Values:
    Importance:high
  • target.cluster.alias

    Alias of target cluster. Used in metrics reporting.

    Type:string
    Default:target
    Valid Values:
    Importance:high
  • sasl.client.callback.handler.class

    The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.jaas.config

    JAAS login context parameters for SASL connections, in the format used by JAAS configuration files (see the Java SE JAAS documentation for that file format). The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;. For brokers, the config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;

    Type:password
    Default:null
    Valid Values:
    Importance:medium
  • sasl.kerberos.service.name

    The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.callback.handler.class

    The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.login.class

    The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin

    Type:class
    Default:null
    Valid Values:
    Importance:medium
  • sasl.mechanism

    SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.

    Type:string
    Default:GSSAPI
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.jwks.endpoint.url

    The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • sasl.oauthbearer.token.endpoint.url

    The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • security.protocol

    Protocol used to communicate with brokers.

    Type:string
    Default:PLAINTEXT
    Valid Values:(case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT]
    Importance:medium
  • ssl.enabled.protocols

    The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3'. This means that clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most use cases. Also see the config documentation for `ssl.protocol` to understand how it can impact the TLS version negotiation behavior.

    Type:list
    Default:TLSv1.2,TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.keystore.type

    The file format of the key store file. This is optional for clients. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • ssl.protocol

    The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3', which should be fine for most use cases. A typical alternative to the default is 'TLSv1.2'. Allowed values for this config are dependent on the JVM. Clients using the defaults for this config and 'ssl.enabled.protocols' will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', however, clients will not use 'TLSv1.3' even if it is one of the values in `ssl.enabled.protocols` and the server only supports 'TLSv1.3'.

    Type:string
    Default:TLSv1.3
    Valid Values:
    Importance:medium
  • ssl.provider

    The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • ssl.truststore.type

    The file format of the trust store file. The values currently supported by the default `ssl.engine.factory.class` are [JKS, PKCS12, PEM].

    Type:string
    Default:JKS
    Valid Values:
    Importance:medium
  • admin.timeout.ms

    Timeout for administrative tasks, e.g. detecting new topics.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:low
  • enabled

    Whether to replicate source->target.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • forwarding.admin.class

    Class which extends ForwardingAdmin to define custom cluster resource management (topics, configs, etc.). The class must have a constructor with signature (Map&lt;String, Object&gt; config) that is used to configure a KafkaAdminClient and may also be used to configure clients for external systems if necessary.

    Type:class
    Default:org.apache.kafka.clients.admin.ForwardingAdmin
    Valid Values:
    Importance:low
  • metric.reporters

    A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation.

    Type:list
    Default:org.apache.kafka.common.metrics.JmxReporter
    Valid Values:
    Importance:low
  • replication.policy.class

    Class which defines the remote topic naming convention.

    Type:class
    Default:org.apache.kafka.connect.mirror.DefaultReplicationPolicy
    Valid Values:
    Importance:low
  • replication.policy.internal.topic.separator.enabled

    Whether to use replication.policy.separator to control the names of topics used for checkpoints and offset syncs. By default, custom separators are used in these topic names; however, if upgrading MirrorMaker 2 from older versions that did not allow for these topic names to be customized, it may be necessary to set this property to 'false' in order to continue using the same names for those topics.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • replication.policy.separator

    Separator used in remote topic naming convention.

    Type:string
    Default:.
    Valid Values:
    Importance:low
  • sasl.kerberos.kinit.cmd

    Kerberos kinit command path.

    Type:string
    Default:/usr/bin/kinit
    Valid Values:
    Importance:low
  • sasl.kerberos.min.time.before.relogin

    Login thread sleep time between refresh attempts.

    Type:long
    Default:60000
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.jitter

    Percentage of random jitter added to the renewal time.

    Type:double
    Default:0.05
    Valid Values:
    Importance:low
  • sasl.kerberos.ticket.renew.window.factor

    Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.

    Type:double
    Default:0.8
    Valid Values:
    Importance:low
  • sasl.login.connect.timeout.ms

    The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.read.timeout.ms

    The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.

    Type:int
    Default:null
    Valid Values:
    Importance:low
  • sasl.login.refresh.buffer.seconds

    The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:300
    Valid Values:[0,...,3600]
    Importance:low
  • sasl.login.refresh.min.period.seconds

    The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.

    Type:short
    Default:60
    Valid Values:[0,...,900]
    Importance:low
  • sasl.login.refresh.window.factor

    Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.8
    Valid Values:[0.5,...,1.0]
    Importance:low
  • sasl.login.refresh.window.jitter

    The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.

    Type:double
    Default:0.05
    Valid Values:[0.0,...,0.25]
    Importance:low
  • sasl.login.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.login.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.clock.skew.seconds

    The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.

    Type:int
    Default:30
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.audience

    The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.expected.issuer

    The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • sasl.oauthbearer.header.urlencode

    The (optional) setting to enable the OAuth client to URL-encode the client_id and client_secret in the authorization header in accordance with RFC 6749. The default value is set to 'false' for backward compatibility.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.refresh.ms

    The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT.

    Type:long
    Default:3600000 (1 hour)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms

    The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:10000 (10 seconds)
    Valid Values:
    Importance:low
  • sasl.oauthbearer.jwks.endpoint.retry.backoff.ms

    The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • sasl.oauthbearer.scope.claim.name

    The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:scope
    Valid Values:
    Importance:low
  • sasl.oauthbearer.sub.claim.name

    The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim.

    Type:string
    Default:sub
    Valid Values:
    Importance:low
  • ssl.cipher.suites

    A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.

    Type:list
    Default:null
    Valid Values:
    Importance:low
  • ssl.endpoint.identification.algorithm

    The endpoint identification algorithm to validate server hostname using server certificate.

    Type:string
    Default:https
    Valid Values:
    Importance:low
  • ssl.engine.factory.class

    The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one.

    Type:class
    Default:null
    Valid Values:
    Importance:low
  • ssl.keymanager.algorithm

    The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:SunX509
    Valid Values:
    Importance:low
  • ssl.secure.random.implementation

    The SecureRandom PRNG implementation to use for SSL cryptography operations.

    Type:string
    Default:null
    Valid Values:
    Importance:low
  • ssl.trustmanager.algorithm

    The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.

    Type:string
    Default:PKIX
    Valid Values:
    Importance:low
  • name

    Globally unique name to use for this connector.

    Type:string
    Default:
    Valid Values:non-empty string without ISO control characters
    Importance:high
  • connector.class

    Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter

    Type:string
    Default:
    Valid Values:
    Importance:high
  • tasks.max

    Maximum number of tasks to use for this connector.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:high
  • tasks.max.enforce

    (Deprecated) Whether to enforce that the tasks.max property is respected by the connector. By default, connectors that generate too many tasks will fail, and existing sets of tasks that exceed the tasks.max property will also be failed. If this property is set to false, then connectors will be allowed to generate more than the maximum number of tasks, and existing sets of tasks that exceed the tasks.max property will be allowed to run. This property is deprecated and will be removed in an upcoming major release.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • key.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor
    Importance:low
  • value.converter

    Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.Converter, A class with a public, no-argument constructor
    Importance:low
  • header.converter

    HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.

    Type:class
    Default:null
    Valid Values:A concrete subclass of org.apache.kafka.connect.storage.HeaderConverter, A class with a public, no-argument constructor
    Importance:low
  • config.action.reload

    The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.

    Type:string
    Default:restart
    Valid Values:[none, restart]
    Importance:low
  • transforms

    Aliases for the transformations to be applied to records.

    Type:list
    Default:""
    Valid Values:non-null string, unique transformation aliases
    Importance:low
  • predicates

    Aliases for the predicates used by transformations.

    Type:list
    Default:""
    Valid Values:non-null string, unique predicate aliases
    Importance:low
  • errors.retry.timeout

    The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.

    Type:long
    Default:0
    Valid Values:
    Importance:medium
  • errors.retry.delay.max.ms

    The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.

    Type:long
    Default:60000 (1 minute)
    Valid Values:
    Importance:medium
  • errors.tolerance

    Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.

    Type:string
    Default:none
    Valid Values:[none, all]
    Importance:medium
  • errors.log.enable

    If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • errors.log.include.messages

    Whether to include in the log the Connect record that resulted in a failure. For sink records, the topic, partition, offset, and timestamp will be logged. For source records, the key and value (and their schemas), all headers, and the timestamp, Kafka topic, Kafka partition, source partition, and source offset will be logged. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium

MirrorMaker Source Configs

Below is the configuration of the MirrorMaker 2 source connector for replicating topics; a combined example follows the list of properties.

  • config.properties.exclude

    Topic config properties that should not be replicated. Supports comma-separated property names and regexes.

    Type:list
    Default:follower\.replication\.throttled\.replicas,leader\.replication\.throttled\.replicas,message\.timestamp\.difference\.max\.ms,message\.timestamp\.type,unclean\.leader\.election\.enable,min\.insync\.replicas
    Valid Values:
    Importance:high
  • topics

    Topics to replicate. Supports comma-separated topic names and regexes.

    Type:list
    Default:.*
    Valid Values:
    Importance:high
  • topics.exclude

    Excluded topics. Supports comma-separated topic names and regexes. Excludes take precedence over includes.

    Type:list
    Default:mm2.*\.internal,.*\.replica,__.*
    Valid Values:
    Importance:high
  • config.property.filter.class

    ConfigPropertyFilter to use. Selects topic config properties to replicate.

    Type:class
    Default:org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter
    Valid Values:
    Importance:low
  • consumer.poll.timeout.ms

    Timeout when polling source cluster.

    Type:long
    Default:1000 (1 second)
    Valid Values:
    Importance:low
  • emit.offset-syncs.enabled

    Whether to store the new offsets of replicated records in the offset-syncs topic. MirrorCheckpointConnector will not be able to sync group offsets or emit checkpoints if emit.checkpoints.enabled and/or sync.group.offsets.enabled are enabled while emit.offset-syncs.enabled is disabled.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • heartbeats.replication.enabled

    Whether to replicate the heartbeats topics even when the topic filter does not include them. If set to true, heartbeats topics identified by the replication policy will always be replicated, regardless of the topic filter configuration. If set to false, heartbeats topics will only be replicated if the topic filter allows.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • offset-syncs.topic.location

    The location (source/target) of the offset-syncs topic.

    Type:string
    Default:source
    Valid Values:[source, target]
    Importance:low
  • offset-syncs.topic.replication.factor

    Replication factor for offset-syncs topic.

    Type:short
    Default:3
    Valid Values:
    Importance:low
  • offset.lag.max

    How out-of-sync a remote partition can be before it is resynced.

    Type:long
    Default:100
    Valid Values:
    Importance:low
  • refresh.topics.enabled

    Whether to periodically check for new topics and partitions.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • refresh.topics.interval.seconds

    Frequency of topic refresh.

    Type:long
    Default:600
    Valid Values:
    Importance:low
  • replication.factor

    Replication factor for newly created remote topics.

    Type:int
    Default:2
    Valid Values:
    Importance:low
  • sync.topic.acls.enabled

    Whether to periodically configure remote topic ACLs to match their corresponding upstream topics.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • sync.topic.acls.interval.seconds

    Frequency of topic ACL sync.

    Type:long
    Default:600
    Valid Values:
    Importance:low
  • sync.topic.configs.enabled

    Whether to periodically configure remote topics to match their corresponding upstream topics.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • sync.topic.configs.interval.seconds

    Frequency of topic config sync.

    Type:long
    Default:600
    Valid Values:
    Importance:low
  • topic.filter.class

    TopicFilter to use. Selects topics to replicate.

    Type:class
    Default:org.apache.kafka.connect.mirror.DefaultTopicFilter
    Valid Values:
    Importance:low
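
Putting several of the properties above together with the common connector properties, a minimal source connector configuration might look like the following sketch. The connector name, topic patterns, and replication factor are illustrative only, and depending on how MirrorMaker 2 is deployed, additional cluster connection properties (such as the source and target cluster definitions) are required and not shown here.

name=us-east-to-us-west-source
connector.class=org.apache.kafka.connect.mirror.MirrorSourceConnector
topics=orders.*,payments.*
topics.exclude=.*\.internal
replication.factor=3
refresh.topics.interval.seconds=300
sync.topic.configs.enabled=true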

MirrorMaker Checkpoint Configs

Below is the configuration of the MirrorMaker 2 checkpoint connector for emitting consumer offset checkpoints; a combined example follows the list of properties.

  • groups

    Consumer groups to replicate. Supports comma-separated group IDs and regexes.

    Type:list
    Default:.*
    Valid Values:
    Importance:high
  • groups.exclude

    Exclude groups. Supports comma-separated group IDs and regexes. Excludes take precedence over includes.

    Type:list
    Default:console-consumer-.*,connect-.*,__.*
    Valid Values:
    Importance:high
  • checkpoints.topic.replication.factor

    Replication factor for checkpoints topic.

    Type:short
    Default:3
    Valid Values:
    Importance:low
  • consumer.poll.timeout.ms

    Timeout when polling source cluster.

    Type:long
    Default:1000 (1 second)
    Valid Values:
    Importance:low
  • emit.checkpoints.enabled

    Whether to replicate consumer offsets to target cluster.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • emit.checkpoints.interval.seconds

    Frequency of checkpoints.

    Type:long
    Default:60
    Valid Values:
    Importance:low
  • group.filter.class

    GroupFilter to use. Selects consumer groups to replicate.

    Type:class
    Default:org.apache.kafka.connect.mirror.DefaultGroupFilter
    Valid Values:
    Importance:low
  • offset-syncs.topic.location

    The location (source/target) of the offset-syncs topic.

    Type:string
    Default:source
    Valid Values:[source, target]
    Importance:low
  • refresh.groups.enabled

    Whether to periodically check for new consumer groups.

    Type:boolean
    Default:true
    Valid Values:
    Importance:low
  • refresh.groups.interval.seconds

    Frequency of group refresh.

    Type:long
    Default:600
    Valid Values:
    Importance:low
  • sync.group.offsets.enabled

    Whether to periodically write the translated offsets to the __consumer_offsets topic in the target cluster, as long as no active consumers in that group are connected to the target cluster.

    Type:boolean
    Default:false
    Valid Values:
    Importance:low
  • sync.group.offsets.interval.seconds

    Frequency of consumer group offset sync.

    Type:long
    Default:60
    Valid Values:
    Importance:low
  • topic.filter.class

    TopicFilter to use. Selects topics to replicate.

    Type:class
    Default:org.apache.kafka.connect.mirror.DefaultTopicFilter
    Valid Values:
    Importance:low
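
As with the source connector, a minimal checkpoint connector configuration might look like the following sketch. The connector name and group patterns are illustrative, and cluster connection properties are again omitted.

name=us-east-to-us-west-checkpoint
connector.class=org.apache.kafka.connect.mirror.MirrorCheckpointConnector
groups=payments-.*
groups.exclude=console-consumer-.*,connect-.*,__.*
sync.group.offsets.enabled=true
emit.checkpoints.interval.seconds=30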

MirrorMaker HeartBeat Configs

Below is the configuration of the MirrorMaker 2 heartbeat connector for checking connectivity between connectors and clusters.

System Properties

Kafka supports some configuration that can be enabled through Java system properties. System properties are usually set by passing the -D flag to the Java virtual machine in which Kafka components are running. Below are the supported system properties.

  • org.apache.kafka.sasl.oauthbearer.allowed.urls

This system property is used to set the URLs that are allowed to be used as SASL OAUTHBEARER token or JWKS endpoints. It accepts a comma-separated list of URLs. By default the value is an empty list.

To allow specific URLs, explicitly set the system property as shown below.

    -Dorg.apache.kafka.sasl.oauthbearer.allowed.urls=https://www.example.com,file:///tmp/token
Since:4.0.0
Default Value:
  • org.apache.kafka.disallowed.login.modules

This system property is used to disable problematic login modules in SASL JAAS configuration. It accepts a comma-separated list of login module names. By default, the com.sun.security.auth.module.JndiLoginModule login module is disabled.

If users want to enable JndiLoginModule, they need to explicitly reset the system property as shown below. We advise users to validate configurations and only allow trusted JNDI configurations. For more details, see CVE-2023-25194.

    -Dorg.apache.kafka.disallowed.login.modules=

To disable additional login modules, update the system property with a comma-separated list of login module names. Make sure to explicitly include JndiLoginModule in the list, as shown below.

    -Dorg.apache.kafka.disallowed.login.modules=com.sun.security.auth.module.JndiLoginModule,com.ibm.security.auth.module.LdapLoginModule,com.ibm.security.auth.module.Krb5LoginModule
Since:3.4.0
Default Value:com.sun.security.auth.module.JndiLoginModule
  • org.apache.kafka.automatic.config.providers

This system property controls the automatic loading of ConfigProvider implementations in Apache Kafka. ConfigProviders are used to dynamically supply configuration values from sources such as files, directories, or environment variables. This property accepts a comma-separated list of ConfigProvider names. By default, all built-in ConfigProviders are enabled, including FileConfigProvider, DirectoryConfigProvider, and EnvVarConfigProvider.

If users want to disable all automatic ConfigProviders, they need to explicitly set the system property as shown below. Disabling automatic ConfigProviders is recommended in environments where configuration data comes from untrusted sources or where increased security is required. For more details, see CVE-2024-31141.

    -Dorg.apache.kafka.automatic.config.providers=none

To allow specific ConfigProviders, update the system property with a comma-separated list of fully qualified ConfigProvider class names. For example, to enable only the EnvVarConfigProvider, set the property as follows:

    -Dorg.apache.kafka.automatic.config.providers=org.apache.kafka.common.config.provider.EnvVarConfigProvider

To use multiple ConfigProviders, include their names in a comma-separated list as shown below:

    -Dorg.apache.kafka.automatic.config.providers=org.apache.kafka.common.config.provider.FileConfigProvider,org.apache.kafka.common.config.provider.EnvVarConfigProvider
Since:3.8.0
Default Value:All built-in ConfigProviders are enabled

Tiered Storage Configs

Below are the configuration properties for Tiered Storage; a broker-level example follows the list.

  • log.local.retention.bytes

    The maximum size that the local log segments of a partition can grow to before they become eligible for deletion. The default value is -2, which means the `log.retention.bytes` value is used. The effective value should always be less than or equal to the `log.retention.bytes` value.

    Type:long
    Default:-2
    Valid Values:[-2,...]
    Importance:medium
  • log.local.retention.ms

    The number of milliseconds to keep a local log segment before it becomes eligible for deletion. The default value is -2, which means the `log.retention.ms` value is used. The effective value should always be less than or equal to the `log.retention.ms` value.

    Type:long
    Default:-2
    Valid Values:[-2,...]
    Importance:medium
  • remote.fetch.max.wait.ms

    The maximum amount of time the server will wait before answering the remote fetch request.

    Type:int
    Default:500
    Valid Values:[1,...]
    Importance:medium
  • remote.list.offsets.request.timeout.ms

    The maximum amount of time the server will wait for the remote list offsets request to complete.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.copier.thread.pool.size

    Size of the thread pool used in scheduling tasks to copy segments.

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.copy.max.bytes.per.second

    The maximum number of bytes that can be copied from local storage to remote storage per second. This is a global limit for all the partitions that are being copied from local storage to remote storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be copied per second.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.copy.quota.window.num

    The number of samples to retain in memory for remote copy quota management. The default value is 11, which means there are 10 whole windows + 1 current window.

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.copy.quota.window.size.seconds

    The time span of each sample for remote copy quota management. The default value is 1 second.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.expiration.thread.pool.size

    Size of the thread pool used in scheduling tasks to clean up the expired remote log segments.

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.fetch.max.bytes.per.second

    The maximum number of bytes that can be fetched from remote storage to local storage per second. This is a global limit for all the partitions that are being fetched from remote storage to local storage. The default value is Long.MAX_VALUE, which means there is no limit on the number of bytes that can be fetched per second.

    Type:long
    Default:9223372036854775807
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.fetch.quota.window.num

    The number of samples to retain in memory for remote fetch quota management. The default value is 11, which means there are 10 whole windows + 1 current window.

    Type:int
    Default:11
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.fetch.quota.window.size.seconds

    The time span of each sample for remote fetch quota management. The default value is 1 second.

    Type:int
    Default:1
    Valid Values:[1,...]
    Importance:medium
  • remote.log.manager.thread.pool.size

    Size of the thread pool used in scheduling follower tasks to read the highest-uploaded remote-offset for follower partitions.

    Type:int
    Default:2
    Valid Values:[1,...]
    Importance:medium
  • remote.log.metadata.manager.class.name

    Fully qualified class name of `RemoteLogMetadataManager` implementation.

    Type:string
    Default:org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
    Valid Values:non-empty string
    Importance:medium
  • remote.log.metadata.manager.class.path

    Class path of the `RemoteLogMetadataManager` implementation. If specified, the RemoteLogMetadataManager implementation and its dependent libraries will be loaded by a dedicated classloader which searches this class path before the Kafka broker class path. The syntax of this parameter is the same as the standard Java class path string.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • remote.log.metadata.manager.impl.prefix

    Prefix used for properties to be passed to RemoteLogMetadataManager implementation. For example this value can be `rlmm.config.`.

    Type:string
    Default:rlmm.config.
    Valid Values:non-empty string
    Importance:medium
  • remote.log.metadata.manager.listener.name

    Listener name of the local broker to which it should get connected if needed by RemoteLogMetadataManager implementation.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:medium
  • remote.log.reader.max.pending.tasks

    Maximum remote log reader thread pool task queue size. If the task queue is full, fetch requests are served with an error.

    Type:int
    Default:100
    Valid Values:[1,...]
    Importance:medium
  • remote.log.reader.threads

    Size of the thread pool that is allocated for handling remote log reads.

    Type:int
    Default:10
    Valid Values:[1,...]
    Importance:medium
  • remote.log.storage.manager.class.name

    Fully qualified class name of `RemoteStorageManager` implementation.

    Type:string
    Default:null
    Valid Values:non-empty string
    Importance:medium
  • remote.log.storage.manager.class.path

    Class path of the `RemoteStorageManager` implementation. If specified, the RemoteStorageManager implementation and its dependent libraries will be loaded by a dedicated classloader which searches this class path before the Kafka broker class path. The syntax of this parameter is the same as the standard Java class path string.

    Type:string
    Default:null
    Valid Values:
    Importance:medium
  • remote.log.storage.manager.impl.prefix

    Prefix used for properties to be passed to RemoteStorageManager implementation. For example this value can be `rsm.config.`.

    Type:string
    Default:rsm.config.
    Valid Values:non-empty string
    Importance:medium
  • remote.log.storage.system.enable

    Whether to enable tiered storage functionality in a broker or not. When it is true, the broker starts all the services required for tiered storage.

    Type:boolean
    Default:false
    Valid Values:
    Importance:medium
  • remote.log.index.file.cache.total.size.bytes

    The total size of the space allocated to store index files fetched from remote storage in the local storage.

    Type:long
    Default:1073741824 (1 gibibyte)
    Valid Values:[1,...]
    Importance:low
  • remote.log.manager.task.interval.ms

    Interval at which remote log manager runs the scheduled tasks like copy segments, and clean up remote log segments.

    Type:long
    Default:30000 (30 seconds)
    Valid Values:[1,...]
    Importance:low
  • remote.log.metadata.custom.metadata.max.bytes

    The maximum size of custom metadata in bytes that the broker should accept from a remote storage plugin. If custom metadata exceeds this limit, the updated segment metadata will not be stored, an attempt will be made to delete the copied data, and the remote copying task for this topic-partition will stop with an error.

    Type:int
    Default:128
    Valid Values:[0,...]
    Importance:low
  • remote.log.metadata.consume.wait.ms

    The amount of time in milliseconds to wait for the local consumer to receive the published event.

    Type:long
    Default:120000 (2 minutes)
    Valid Values:[0,...]
    Importance:low
  • remote.log.metadata.initialization.retry.interval.ms

    The retry interval in milliseconds for retrying RemoteLogMetadataManager resource initialization.

    Type:long
    Default:100
    Valid Values:[0,...]
    Importance:low
  • remote.log.metadata.initialization.retry.max.timeout.ms

    The maximum amount of time in milliseconds for retrying RemoteLogMetadataManager resource initialization. When the total of the retry intervals reaches this timeout, initialization is considered failed and the broker starts shutting down.

    Type:long
    Default:120000 (2 minutes)
    Valid Values:[0,...]
    Importance:low
  • remote.log.metadata.topic.num.partitions

    The number of partitions for remote log metadata topic.

    Type:int
    Default:50
    Valid Values:[1,...]
    Importance:low
  • remote.log.metadata.topic.replication.factor

    Replication factor of remote log metadata topic.

    Type:short
    Default:3
    Valid Values:[1,...]
    Importance:low
  • remote.log.metadata.topic.retention.ms

    Retention of the remote log metadata topic in milliseconds. The default is -1, which means unlimited retention. Users can configure this value based on their use cases. To avoid any data loss, this value should be greater than the maximum retention period of any topic enabled with tiered storage in the cluster.

    Type:long
    Default:-1
    Valid Values:
    Importance:low
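
Putting some of the broker-level properties above together, enabling tiered storage might look like the following sketch. The RemoteStorageManager class name and the rsm.config.-prefixed property are placeholders for whatever remote storage plugin is used, and tiered storage typically also has to be enabled on individual topics via topic-level configuration, which is not shown here.

remote.log.storage.system.enable=true
remote.log.storage.manager.class.name=com.example.MyRemoteStorageManager
remote.log.storage.manager.impl.prefix=rsm.config.
rsm.config.bucket=my-remote-storage-bucket
remote.log.metadata.manager.class.name=org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
remote.log.metadata.manager.listener.name=PLAINTEXT
log.local.retention.ms=3600000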

Configuration Providers

Use configuration providers to load configuration data from external sources. This might include sensitive information, such as passwords, API keys, or other credentials.

You have the following options:

  • DirectoryConfigProvider
  • EnvVarConfigProvider
  • FileConfigProvider
  • A custom implementation of the ConfigProvider interface

To use a configuration provider, specify it in your configuration using the config.providers property.

Using Configuration Providers

Configuration providers allow you to pass parameters and retrieve configuration data from various sources.

To specify configuration providers, you use a comma-separated list of aliases and the fully-qualified class names that implement the configuration providers:

config.providers=provider1,provider2
config.providers.provider1.class=com.example.Provider1
config.providers.provider2.class=com.example.Provider2

Each provider can have its own set of parameters, which are passed in a specific format:

config.providers.<provider_alias>.param.<name>=<value>

The ConfigProvider interface serves as a base for all configuration providers. Custom implementations of this interface can be created to retrieve configuration data from various sources. You can package the implementation as a JAR file, add the JAR to your classpath, and reference the provider’s class in your configuration.
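
As an illustration, a minimal custom provider might look like the following sketch. The class name, the in-memory backing map, and the idea of populating it from the provider's param settings are hypothetical; only the ConfigProvider and ConfigData types come from Kafka.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

// Hypothetical provider that serves values from an in-memory map.
// A real implementation would read from a vault, database, or similar source.
public class InMemoryConfigProvider implements ConfigProvider {

    private final Map<String, String> values = new HashMap<>();

    @Override
    public void configure(Map<String, ?> configs) {
        // Receives the config.providers.<alias>.param.* settings.
        configs.forEach((key, value) -> values.put(key, String.valueOf(value)));
    }

    @Override
    public ConfigData get(String path) {
        // Return everything known to the provider; the path is ignored in this sketch.
        return new ConfigData(new HashMap<>(values));
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        Map<String, String> selected = new HashMap<>();
        for (String key : keys) {
            if (values.containsKey(key)) {
                selected.put(key, values.get(key));
            }
        }
        return new ConfigData(selected);
    }

    @Override
    public void close() {
        values.clear();
    }
}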

Example custom provider configuration

config.providers=customProvider
config.providers.customProvider.class=com.example.customProvider
config.providers.customProvider.param.param1=value1
config.providers.customProvider.param.param2=value2

DirectoryConfigProvider

The DirectoryConfigProvider retrieves configuration data from files stored in a specified directory.

Each file represents a key, and its content is the value. This provider is useful for loading multiple configuration files and for organizing configuration data into separate files.

To restrict the files that the DirectoryConfigProvider can access, use the allowed.paths parameter. This parameter accepts a comma-separated list of paths that the provider is allowed to access. If not set, all paths are allowed.

Example DirectoryConfigProvider configuration

config.providers=dirProvider
config.providers.dirProvider.class=org.apache.kafka.common.config.provider.DirectoryConfigProvider
config.providers.dirProvider.param.allowed.paths=/path/to/dir1,/path/to/dir2

To reference a value supplied by the DirectoryConfigProvider, use the correct placeholder syntax:

${dirProvider:<path_to_file>:<file_name>}

EnvVarConfigProvider

The EnvVarConfigProvider retrieves configuration data from environment variables.

No specific parameters are required, as it reads directly from the specified environment variables.

This provider is useful for configuring applications running in containers, for example, to load certificates or JAAS configuration from environment variables mapped from secrets.

To restrict which environment variables the EnvVarConfigProvider can access, use the allowlist.pattern parameter. This parameter accepts a regular expression that environment variable names must match to be used by the provider.

Example EnvVarConfigProvider configuration

config.providers=envVarProvider
config.providers.envVarProvider.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider
config.providers.envVarProvider.param.allowlist.pattern=^MY_ENVAR1_.*

To reference a value supplied by the EnvVarConfigProvider, use the correct placeholder syntax:

${envVarProvider:<enVar_name>}

FileConfigProvider

The FileConfigProvider retrieves configuration data from a single properties file.

This provider is useful for loading configuration data from mounted files.

To restrict the file paths that the FileConfigProvider can access, use the allowed.paths parameter. This parameter accepts a comma-separated list of paths that the provider is allowed to access. If not set, all paths are allowed.

Example FileConfigProvider configuration

config.providers=fileProvider
config.providers.fileProvider.class=org.apache.kafka.common.config.provider.FileConfigProvider
config.providers.fileProvider.param.allowed.paths=/path/to/config1,/path/to/config2

To reference a value supplied by the FileConfigProvider, use the correct placeholder syntax:

${fileProvider:<path_and_filename>:<property>}

Example: Referencing files

Here’s an example that uses a file configuration provider with Kafka Connect to provide authentication credentials to a database for a connector.

First, create a connector-credentials.properties configuration file with the following credentials:

dbUsername=my-username
dbPassword=my-password

Specify a FileConfigProvider in the Kafka Connect configuration:

Example Kafka Connect configuration with a FileConfigProvider

config.providers=fileProvider
config.providers.fileProvider.class=org.apache.kafka.common.config.provider.FileConfigProvider

Next, reference the properties from the file in the connector configuration.

Example connector configuration referencing file properties

database.user=${fileProvider:/path/to/connector-credentials.properties:dbUsername}
database.password=${fileProvider:/path/to/connector-credentials.properties:dbPassword}

At runtime, the configuration provider reads and extracts the values from the properties file.

4 - Design

4.1 - Design

Motivation

We designed Kafka to be able to act as a unified platform for handling all the real-time data feeds a large company might have. To do this we had to think through a fairly broad set of use cases.

It would have to have high-throughput to support high volume event streams such as real-time log aggregation.

It would need to deal gracefully with large data backlogs to be able to support periodic data loads from offline systems.

It also meant the system would have to provide low-latency delivery to handle more traditional messaging use cases.

We wanted to support partitioned, distributed, real-time processing of these feeds to create new, derived feeds. This motivated our partitioning and consumer model.

Finally in cases where the stream is fed into other data systems for serving, we knew the system would have to be able to guarantee fault-tolerance in the presence of machine failures.

Supporting these uses led us to a design with a number of unique elements, more akin to a database log than a traditional messaging system. We will outline some elements of the design in the following sections.

Persistence

Don’t fear the filesystem!

Kafka relies heavily on the filesystem for storing and caching messages. There is a general perception that “disks are slow” which makes people skeptical that a persistent structure can offer competitive performance. In fact disks are both much slower and much faster than people expect depending on how they are used; and a properly designed disk structure can often be as fast as the network.

The key fact about disk performance is that the throughput of hard drives has been diverging from the latency of a disk seek for the last decade. As a result, the performance of linear writes on a JBOD configuration with six 7200rpm SATA RAID-5 array is about 600MB/sec, but the performance of random writes is only about 100KB/sec, a difference of over 6000X. These linear reads and writes are the most predictable of all usage patterns, and are heavily optimized by the operating system. A modern operating system provides read-ahead and write-behind techniques that prefetch data in large block multiples and group smaller logical writes into large physical writes. A further discussion of this issue can be found in this ACM Queue article; they actually find that sequential disk access can in some cases be faster than random memory access!

To compensate for this performance divergence, modern operating systems have become increasingly aggressive in their use of main memory for disk caching. A modern OS will happily divert all free memory to disk caching with little performance penalty when the memory is reclaimed. All disk reads and writes will go through this unified cache. This feature cannot easily be turned off without using direct I/O, so even if a process maintains an in-process cache of the data, this data will likely be duplicated in OS pagecache, effectively storing everything twice.

Furthermore, we are building on top of the JVM, and anyone who has spent any time with Java memory usage knows two things:

  1. The memory overhead of objects is very high, often doubling the size of the data stored (or worse).
  2. Java garbage collection becomes increasingly fiddly and slow as the in-heap data increases.

As a result of these factors using the filesystem and relying on pagecache is superior to maintaining an in-memory cache or other structure–we at least double the available cache by having automatic access to all free memory, and likely double again by storing a compact byte structure rather than individual objects. Doing so will result in a cache of up to 28-30GB on a 32GB machine without GC penalties. Furthermore, this cache will stay warm even if the service is restarted, whereas the in-process cache will need to be rebuilt in memory (which for a 10GB cache may take 10 minutes) or else it will need to start with a completely cold cache (which likely means terrible initial performance). This also greatly simplifies the code as all logic for maintaining coherency between the cache and filesystem is now in the OS, which tends to do so more efficiently and more correctly than one-off in-process attempts. If your disk usage favors linear reads then read-ahead is effectively pre-populating this cache with useful data on each disk read.

This suggests a design which is very simple: rather than maintain as much as possible in-memory and flush it all out to the filesystem in a panic when we run out of space, we invert that. All data is immediately written to a persistent log on the filesystem without necessarily flushing to disk. In effect this just means that it is transferred into the kernel’s pagecache.

This style of pagecache-centric design is described in an article on the design of Varnish here (along with a healthy dose of arrogance).

Constant Time Suffices

The persistent data structures used in messaging systems are often per-consumer queues with an associated BTree or other general-purpose random access data structure to maintain metadata about messages. BTrees are the most versatile data structure available, and make it possible to support a wide variety of transactional and non-transactional semantics in the messaging system. They do come with a fairly high cost, though: BTree operations are O(log N). Normally O(log N) is considered essentially equivalent to constant time, but this is not true for disk operations. Disk seeks come at 10 ms a pop, and each disk can do only one seek at a time so parallelism is limited. Hence even a handful of disk seeks leads to very high overhead. Since storage systems mix very fast cached operations with very slow physical disk operations, the observed performance of tree structures is often superlinear as data increases with a fixed cache; i.e. doubling your data makes things much worse than twice as slow.

Intuitively a persistent queue could be built on simple reads and appends to files as is commonly the case with logging solutions. This structure has the advantage that all operations are O(1) and reads do not block writes or each other. This has obvious performance advantages since the performance is completely decoupled from the data size–one server can now take full advantage of a number of cheap, low-rotational speed 1+TB SATA drives. Though they have poor seek performance, these drives have acceptable performance for large reads and writes and come at 1/3 the price and 3x the capacity.

Having access to virtually unlimited disk space without any performance penalty means that we can provide some features not usually found in a messaging system. For example, in Kafka, instead of attempting to delete messages as soon as they are consumed, we can retain messages for a relatively long period (say a week). This leads to a great deal of flexibility for consumers, as we will describe.

Efficiency

We have put significant effort into efficiency. One of our primary use cases is handling web activity data, which is very high volume: each page view may generate dozens of writes. Furthermore, we assume each message published is read by at least one consumer (often many), hence we strive to make consumption as cheap as possible.

We have also found, from experience building and running a number of similar systems, that efficiency is a key to effective multi-tenant operations. If the downstream infrastructure service can easily become a bottleneck due to a small bump in usage by the application, such small changes will often create problems. By being very fast we help ensure that the application will tip over under load before the infrastructure. This is particularly important when trying to run a centralized service that supports dozens or hundreds of applications on a centralized cluster as changes in usage patterns are a near-daily occurrence.

We discussed disk efficiency in the previous section. Once poor disk access patterns have been eliminated, there are two common causes of inefficiency in this type of system: too many small I/O operations, and excessive byte copying.

The small I/O problem happens both between the client and the server and in the server’s own persistent operations.

To avoid this, our protocol is built around a “message set” abstraction that naturally groups messages together. This allows network requests to group messages together and amortize the overhead of the network roundtrip rather than sending a single message at a time. The server in turn appends chunks of messages to its log in one go, and the consumer fetches large linear chunks at a time.

This simple optimization produces an orders-of-magnitude speedup. Batching leads to larger network packets, larger sequential disk operations, contiguous memory blocks, and so on, all of which allow Kafka to turn a bursty stream of random message writes into linear writes that flow to the consumers.

The other inefficiency is in byte copying. At low message rates this is not an issue, but under load the impact is significant. To avoid this we employ a standardized binary message format that is shared by the producer, the broker, and the consumer (so data chunks can be transferred without modification between them).

The message log maintained by the broker is itself just a directory of files, each populated by a sequence of message sets that have been written to disk in the same format used by the producer and consumer. Maintaining this common format allows optimization of the most important operation: network transfer of persistent log chunks. Modern unix operating systems offer a highly optimized code path for transferring data out of pagecache to a socket; in Linux this is done with the sendfile system call.

To understand the impact of sendfile, it is important to understand the common data path for transfer of data from file to socket:

  1. The operating system reads data from the disk into pagecache in kernel space
  2. The application reads the data from kernel space into a user-space buffer
  3. The application writes the data back into kernel space into a socket buffer
  4. The operating system copies the data from the socket buffer to the NIC buffer where it is sent over the network

This is clearly inefficient; there are four copies and two system calls. Using sendfile, this re-copying is avoided by allowing the OS to send the data from pagecache to the network directly. So in this optimized path, only the final copy to the NIC buffer is needed.

We expect a common use case to be multiple consumers on a topic. Using the zero-copy optimization above, data is copied into pagecache exactly once and reused on each consumption instead of being stored in memory and copied out to user-space every time it is read. This allows messages to be consumed at a rate that approaches the limit of the network connection.

This combination of pagecache and sendfile means that on a Kafka cluster where the consumers are mostly caught up you will see no read activity on the disks whatsoever as they will be serving data entirely from cache.

TLS/SSL libraries operate in user space (in-kernel SSL_sendfile is currently not supported by Kafka). Due to this restriction, sendfile is not used when SSL is enabled. For enabling SSL, refer to security.protocol and security.inter.broker.protocol.

For more background on the sendfile and zero-copy support in Java, see this article.
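
As a rough illustration of this idea (not Kafka's actual transfer code), Java's FileChannel.transferTo hands the copy from the pagecache to the socket over to the operating system, which can use sendfile on platforms that support it. The file path, host, and port below are placeholders.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySketch {
    public static void main(String[] args) throws IOException {
        Path segment = Path.of("/tmp/example-log-segment.log");   // placeholder file
        try (FileChannel file = FileChannel.open(segment, StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9999))) {
            long position = 0;
            long remaining = file.size();
            // transferTo lets the OS move bytes from the pagecache to the socket
            // without copying them through a user-space buffer.
            while (remaining > 0) {
                long transferred = file.transferTo(position, remaining, socket);
                position += transferred;
                remaining -= transferred;
            }
        }
    }
}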

End-to-end Batch Compression

In some cases the bottleneck is actually not CPU or disk but network bandwidth. This is particularly true for a data pipeline that needs to send messages between data centers over a wide-area network. Of course, users can always compress their messages one at a time without any support needed from Kafka, but this can lead to very poor compression ratios as much of the redundancy is due to repetition between messages of the same type (e.g. field names in JSON or user agents in web logs or common string values). Efficient compression requires compressing multiple messages together rather than compressing each message individually.

Kafka supports this with an efficient batching format. A batch of messages can be grouped together, compressed, and sent to the server in this form. The broker decompresses the batch in order to validate it. For example, it validates that the number of records in the batch is the same as what the batch header states. This batch of messages is then written to disk in compressed form. The batch will remain compressed in the log and it will also be transmitted to the consumer in compressed form. The consumer decompresses any compressed data that it receives.

Kafka supports GZIP, Snappy, LZ4 and ZStandard compression protocols. More details on compression can be found here.
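
On the producer, batch compression is controlled by the compression.type setting; the value below is illustrative. Larger batches generally compress better, so compression is often combined with the batching settings described under "Asynchronous send" below. Broker- and topic-level compression.type settings also exist.

compression.type=zstd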

The Producer

Load balancing

The producer sends data directly to the broker that is the leader for the partition without any intervening routing tier. To help the producer do this, all Kafka nodes can answer a request for metadata about which servers are alive and where the leaders for the partitions of a topic are at any given time, allowing the producer to direct its requests appropriately.

The client controls which partition it publishes messages to. This can be done at random, implementing a kind of random load balancing, or it can be done by some semantic partitioning function. We expose the interface for semantic partitioning by allowing the user to specify a key to partition by and using this to hash to a partition (there is also an option to override the partition function if need be). For example if the key chosen was a user id then all data for a given user would be sent to the same partition. This in turn will allow consumers to make locality assumptions about their consumption. This style of partitioning is explicitly designed to allow locality-sensitive processing in consumers.
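
For example, with the Java producer, choosing a record key selects the partition. The bootstrap server, topic name, key, and values below are placeholders; this is a minimal sketch, not a complete application.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedSendSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key ("user-42") hash to the same partition,
            // so a consumer of that partition sees all of this user's events in order.
            producer.send(new ProducerRecord<>("user-activity", "user-42", "page_view:/home"));
            producer.send(new ProducerRecord<>("user-activity", "user-42", "page_view:/checkout"));
        }
    }
}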

Asynchronous send

Batching is one of the big drivers of efficiency, and to enable batching the Kafka producer will attempt to accumulate data in memory and to send out larger batches in a single request. The batching can be configured to accumulate no more than a fixed number of messages and to wait no longer than some fixed latency bound (say 64k or 10 ms). This allows the accumulation of more bytes to send, and fewer, larger I/O operations on the servers. This buffering is configurable and gives a mechanism to trade off a small amount of additional latency for better throughput.
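
In the Java producer, these knobs are the batch.size and linger.ms settings; a sketch matching the figures above (values illustrative):

batch.size=65536
linger.ms=10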

Details on configuration and the api for the producer can be found elsewhere in the documentation.

The Consumer

The Kafka consumer works by issuing “fetch” requests to the brokers leading the partitions it wants to consume. The consumer specifies its offset in the log with each request and receives back a chunk of log beginning from that position. The consumer thus has significant control over this position and can rewind it to re-consume data if need be.

Push vs. pull

An initial question we considered is whether consumers should pull data from brokers or brokers should push data to the consumer. In this respect Kafka follows a more traditional design, shared by most messaging systems, where data is pushed to the broker from the producer and pulled from the broker by the consumer. Some logging-centric systems, such as Scribe and Apache Flume, follow a very different push-based path where data is pushed downstream. There are pros and cons to both approaches. However, a push-based system has difficulty dealing with diverse consumers as the broker controls the rate at which data is transferred. The goal is generally for the consumer to be able to consume at the maximum possible rate; unfortunately, in a push system this means the consumer tends to be overwhelmed when its rate of consumption falls below the rate of production (a denial of service attack, in essence). A pull-based system has the nicer property that the consumer simply falls behind and catches up when it can. This can be mitigated with some kind of backoff protocol by which the consumer can indicate it is overwhelmed, but getting the rate of transfer to fully utilize (but never over-utilize) the consumer is trickier than it seems. Previous attempts at building systems in this fashion led us to go with a more traditional pull model.

Another advantage of a pull-based system is that it lends itself to aggressive batching of data sent to the consumer. A push-based system must choose to either send a request immediately or accumulate more data and then send it later without knowledge of whether the downstream consumer will be able to immediately process it. If tuned for low latency, this will result in sending a single message at a time only for the transfer to end up being buffered anyway, which is wasteful. A pull-based design fixes this as the consumer always pulls all available messages after its current position in the log (or up to some configurable max size). So one gets optimal batching without introducing unnecessary latency.

The deficiency of a naive pull-based system is that if the broker has no data the consumer may end up polling in a tight loop, effectively busy-waiting for data to arrive. To avoid this we have parameters in our pull request that allow the consumer request to block in a “long poll” waiting until data arrives (and optionally waiting until a given number of bytes is available to ensure large transfer sizes).
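
On the consumer, this long-poll behavior is controlled by the fetch.max.wait.ms and fetch.min.bytes settings; a sketch with illustrative values:

fetch.min.bytes=65536
fetch.max.wait.ms=500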

You could imagine other possible designs which would be only pull, end-to-end. The producer would write to a local log, and brokers would pull from that with consumers pulling from them. A similar type of “store-and-forward” producer is often proposed. This is intriguing but we felt not very suitable for our target use cases which have thousands of producers. Our experience running persistent data systems at scale led us to feel that involving thousands of disks in the system across many applications would not actually make things more reliable and would be a nightmare to operate. And in practice we have found that we can run a pipeline with strong SLAs at large scale without a need for producer persistence.

Consumer Position

Keeping track of what has been consumed is, surprisingly, one of the key performance points of a messaging system.

Most messaging systems keep metadata about what messages have been consumed on the broker. That is, as a message is handed out to a consumer, the broker either records that fact locally immediately or it may wait for acknowledgement from the consumer. This is a fairly intuitive choice, and indeed for a single machine server it is not clear where else this state could go. Since the data structures used for storage in many messaging systems scale poorly, this is also a pragmatic choice–since the broker knows what is consumed it can immediately delete it, keeping the data size small.

What is perhaps not obvious is that getting the broker and consumer to come into agreement about what has been consumed is not a trivial problem. If the broker records a message as consumed immediately every time it is handed out over the network, then if the consumer fails to process the message (say because it crashes or the request times out or whatever) that message will be lost. To solve this problem, many messaging systems add an acknowledgement feature which means that messages are only marked as sent, not consumed, when they are sent; the broker waits for a specific acknowledgement from the consumer to record the message as consumed. This strategy fixes the problem of losing messages, but creates new problems. First of all, if the consumer processes the message but fails before it can send an acknowledgement then the message will be consumed twice. The second problem is performance: now the broker must keep multiple states about every single message (first to lock it so it is not given out a second time, and then to mark it as permanently consumed so that it can be removed). Tricky problems must be dealt with, like what to do with messages that are sent but never acknowledged.

Kafka handles this differently. Our topic is divided into a set of totally ordered partitions, each of which is consumed by exactly one consumer within each subscribing consumer group at any given time. This means that the position of a consumer in each partition is just a single integer, the offset of the next message to consume. This makes the state about what has been consumed very small, just one number for each partition. This state can be periodically checkpointed. This makes the equivalent of message acknowledgements very cheap.

There is a side benefit of this decision. A consumer can deliberately rewind back to an old offset and re-consume data. This violates the common contract of a queue, but turns out to be an essential feature for many consumers. For example, if the consumer code has a bug and is discovered after some messages are consumed, the consumer can re-consume those messages once the bug is fixed.

Offline Data Load

Scalable persistence allows for the possibility of consumers that only periodically consume such as batch data loads that periodically bulk-load data into an offline system such as Hadoop or a relational data warehouse.

In the case of Hadoop we parallelize the data load by splitting the load over individual map tasks, one for each node/topic/partition combination, allowing full parallelism in the loading. Hadoop provides the task management, and tasks which fail can restart without danger of duplicate data–they simply restart from their original position.

Static Membership

Static membership aims to improve the availability of stream applications, consumer groups and other applications built on top of the group rebalance protocol. The rebalance protocol relies on the group coordinator to allocate entity ids to group members. These generated ids are ephemeral and will change when members restart and rejoin. For consumer-based apps, this “dynamic membership” can cause a large percentage of tasks to be re-assigned to different instances during administrative operations such as code deploys, configuration updates and periodic restarts. For applications with large local state, shuffled tasks need a long time to recover their state before processing, which causes applications to be partially or entirely unavailable. Motivated by this observation, Kafka’s group management protocol allows group members to provide persistent entity ids. Group membership remains unchanged based on those ids, so no rebalance will be triggered.

If you want to use static membership,

  • Upgrade both the broker cluster and client apps to 2.3 or beyond, and make sure the upgraded brokers are using an inter.broker.protocol.version of 2.3 or beyond as well.
  • Set the config ConsumerConfig#GROUP_INSTANCE_ID_CONFIG to a unique value for each consumer instance under one group.
  • For Kafka Streams applications, it is sufficient to set a unique ConsumerConfig#GROUP_INSTANCE_ID_CONFIG per KafkaStreams instance, independent of the number of used threads for an instance.

If your broker is on a version older than 2.3, but you choose to set ConsumerConfig#GROUP_INSTANCE_ID_CONFIG on the client side, the application will detect the broker version and then throw an UnsupportedException. If you accidentally configure duplicate ids for different instances, a fencing mechanism on the broker side will inform your duplicate client to shut down immediately by triggering an org.apache.kafka.common.errors.FencedInstanceIdException. For more details, see KIP-345.
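
In property-file terms, ConsumerConfig#GROUP_INSTANCE_ID_CONFIG is the group.instance.id consumer setting; a sketch for one instance, with illustrative names:

group.id=my-stream-app
group.instance.id=my-stream-app-instance-1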

Message Delivery Semantics

Now that we understand a little about how producers and consumers work, let’s discuss the semantic guarantees Kafka provides between producer and consumer. Clearly there are multiple possible message delivery guarantees that could be provided:

  • At most once –Messages may be lost but are never redelivered.
  • At least once –Messages are never lost but may be redelivered.
  • Exactly once –Each message is processed once and only once.

It’s worth noting that this breaks down into two problems: the durability guarantees for publishing a message and the guarantees when consuming a message.

Many systems claim to provide “exactly-once” delivery semantics, but it is important to read the fine print, because sometimes these claims are misleading (i.e. they don’t translate to the case where consumers or producers can fail, cases where there are multiple consumer processes, or cases where data written to disk can be lost).

Kafka’s semantics are straightforward. When publishing a message we have a notion of the message being “committed” to the log. Once a published message is committed, it will not be lost as long as one broker that replicates the partition to which this message was written remains “alive”. The definition of committed message and alive partition as well as a description of which types of failures we attempt to handle will be described in more detail in the next section. For now let’s assume a perfect, lossless broker and try to understand the guarantees to the producer and consumer. If a producer attempts to publish a message and experiences a network error, it cannot be sure if this error happened before or after the message was committed. This is similar to the semantics of inserting into a database table with an autogenerated key.

Prior to 0.11.0.0, if a producer failed to receive a response indicating that a message was committed, it had little choice but to resend the message. This provides at-least-once delivery semantics since the message may be written to the log again during resending if the original request had in fact succeeded. Since 0.11.0.0, the Kafka producer also supports an idempotent delivery option which guarantees that resending will not result in duplicate entries in the log. To achieve this, the broker assigns each producer an ID and deduplicates messages using a sequence number that is sent by the producer along with every message. Also beginning with 0.11.0.0, the producer supports the ability to send messages atomically to multiple topic partitions using transactions, so that either all messages are successfully written or none of them are.

Not all use cases require such strong guarantees. For use cases which are latency-sensitive, we allow the producer to specify the durability level it desires. If the producer specifies that it wants to wait on the message being committed, this can take on the order of 10 ms. However the producer can also specify that it wants to perform the send completely asynchronously or that it wants to wait only until the leader (but not necessarily the followers) have the message.
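
These durability levels correspond to producer settings such as acks and enable.idempotence: acks=all waits for the message to be committed by the in-sync replicas, acks=1 waits only for the leader, and acks=0 does not wait at all. A sketch of the strongest option:

acks=all
enable.idempotence=true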

Now let’s describe the semantics from the point of view of the consumer. All replicas have the exact same log with the same offsets. The consumer controls its position in this log. If the consumer never crashed it could just store this position in memory, but if the consumer fails and we want this topic partition to be taken over by another process, the new process will need to choose an appropriate position from which to start processing. Let’s say the consumer reads some messages – it has several options for processing the messages and updating its position.

  1. It can read the messages, then save its position in the log, and finally process the messages. In this case there is a possibility that the consumer process crashes after saving its position but before saving the output of its message processing. In this case the process that took over processing would start at the saved position even though a few messages prior to that position had not been processed. This corresponds to “at-most-once” semantics as in the case of a consumer failure messages may not be processed.
  2. It can read the messages, process the messages, and finally save its position. In this case there is a possibility that the consumer process crashes after processing messages but before saving its position. In this case when the new process takes over the first few messages it receives will already have been processed. This corresponds to the “at-least-once” semantics in the case of consumer failure. In many cases messages have a primary key and so the updates are idempotent (receiving the same message twice just overwrites a record with another copy of itself).

So what about exactly-once semantics? When consuming from a Kafka topic and producing to another topic (as in a Kafka Streams application), we can leverage the new transactional producer capabilities in 0.11.0.0 that were mentioned above. The consumer’s position is stored as a message in an internal topic, so we can write the offset to Kafka in the same transaction as the output topics receiving the processed data. If the transaction is aborted, the consumer’s stored position will revert to its old value (although the consumer has to refetch the committed offset because it does not automatically rewind) and the produced data on the output topics will not be visible to other consumers, depending on their “isolation level”. In the default “read_uncommitted” isolation level, all messages are visible to consumers even if they were part of an aborted transaction, but in “read_committed” isolation level, the consumer will only return messages from transactions which were committed (and any messages which were not part of a transaction).

When writing to an external system, the limitation is in the need to coordinate the consumer’s position with what is actually stored as output. The classic way of achieving this would be to introduce a two-phase commit between the storage of the consumer position and the storage of the consumers output. This can be handled more simply and generally by letting the consumer store its offset in the same place as its output. This is better because many of the output systems a consumer might want to write to will not support a two-phase commit. As an example of this, consider a Kafka Connect connector which populates data in HDFS along with the offsets of the data it reads so that it is guaranteed that either data and offsets are both updated or neither is. We follow similar patterns for many other data systems which require these stronger semantics and for which the messages do not have a primary key to allow for deduplication.

As a result, Kafka supports exactly-once delivery in Kafka Streams, and the transactional producer and the consumer using read-committed isolation level can be used generally to provide exactly-once delivery when reading, processing and writing data on Kafka topics. Exactly-once delivery for other destination systems generally requires cooperation with such systems, but Kafka provides the primitives which make implementing this feasible (see also Kafka Connect). Otherwise, Kafka guarantees at-least-once delivery by default, and allows the user to implement at-most-once delivery by disabling retries on the producer and committing offsets in the consumer prior to processing a batch of messages.

Using Transactions

As mentioned above, the simplest way to get exactly-once semantics from Kafka is to use Kafka Streams. However, it is also possible to achieve the same transactional guarantees using the Kafka producer and consumer directly by using them in the same way as Kafka Streams does.

Kafka transactions are a bit different from transactions in other messaging systems. In Kafka, the consumer and producer are separate, and it is only the producer which is transactional. It is however able to make transactional updates to the consumer’s position (confusingly called the “committed offset”), and it is this which gives the overall exactly-once behavior.

There are three key aspects to exactly-once processing using the producer and consumer, which match how Kafka Streams works.

  1. The consumer uses partition assignment to ensure that it is the only consumer in the consumer group currently processing each partition.
  2. The producer uses transactions so that all the records it produces, and any offsets it updates on behalf of the consumer, are performed atomically.
  3. In order to handle transactions properly in combination with rebalancing, it is advisable to use one producer instance for each consumer instance. More efficient schemes are possible, but at the cost of greater complexity.

In addition, it is generally considered a good practice to use the read-committed isolation level if trying to achieve exactly-once processing. Strictly speaking, the consumer doesn’t have to use read-committed isolation level, but if it does not, it will see records from aborted transactions and also open transactions which have not yet completed.

The consumer configuration must include isolation.level=read_committed and enable.auto.commit=false. The producer configuration must set transactional.id to the transactional ID to be used, which configures the producer for transactional delivery and also makes sure that a restarted application causes any in-flight transaction from the previous instance to abort. Only the producer has the transactional.id configuration.
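
As a rough sketch of this configuration in use (the topic names, group id, transactional id, and broker address are hypothetical placeholders, and error handling is reduced to aborting the transaction), a consume-transform-produce loop looks roughly like this:

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

public class TransactionalCopySketch {
    public static void main(String[] args) throws Exception {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "copy-group");                         // hypothetical group id
        consumerProps.put("isolation.level", "read_committed");
        consumerProps.put("enable.auto.commit", "false");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("transactional.id", "copy-app-1");                 // hypothetical, stable per application instance
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("input-topic"));                      // hypothetical topic
            producer.initTransactions();                                     // aborts in-flight transactions of earlier instances
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> record : records) {
                        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                        offsets.put(new TopicPartition(record.topic(), record.partition()),
                                    new OffsetAndMetadata(record.offset() + 1));
                    }
                    // Commit the consumed offsets in the same transaction as the produced records.
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (Exception e) {
                    // A real application must treat fatal errors differently and must rewind the
                    // consumer to the last committed offsets before reprocessing, as described below.
                    producer.abortTransaction();
                    throw e;
                }
            }
        }
    }
}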

Here’s an example of a transactional message copier which uses these principles. It uses a KafkaConsumer to consume records from one topic and a KafkaProducer to produce records to another topic. It uses transactions to ensure that there is no duplication or loss of records as they are copied, provided that the --use-group-metadata option is set.

It is important to handle exceptions and aborted transactions correctly. Any records written by the transactional producer will be marked as being part of the transaction, and then when the transaction commits or aborts, transaction marker records are written to indicate the outcome of the transaction. This is how a read-committed consumer avoids seeing records from aborted transactions. However, in the event of a transaction abort, the application’s state and in particular the current position of the consumer must be reset explicitly so that it can reprocess the records processed by the aborted transaction.

A simple policy for handling exceptions and aborted transactions is to discard and recreate the Kafka producer and consumer objects and start afresh. As part of recreating the consumer, the consumer group will rebalance and fetch the last committed offset, which has the effect of rewinding back to the state before the transaction aborted. Alternatively, a more sophisticated application (such as the transactional message copier) can keep the existing objects and instead use KafkaConsumer.committed to retrieve the last committed offset from Kafka, and then KafkaConsumer.seek to rewind the current position.

Replication

Kafka replicates the log for each topic’s partitions across a configurable number of servers (you can set this replication factor on a topic-by-topic basis). This allows automatic failover to these replicas when a server in the cluster fails so messages remain available in the presence of failures.

Other messaging systems provide some replication-related features, but, in our (totally biased) opinion, this appears to be a tacked-on thing, not heavily used, and with large downsides: replicas are inactive, throughput is heavily impacted, it requires fiddly manual configuration, etc. Kafka is meant to be used with replication by default–in fact we implement un-replicated topics as replicated topics where the replication factor is one.

The unit of replication is the topic partition. Under non-failure conditions, each partition in Kafka has a single leader and zero or more followers. The total number of replicas including the leader constitute the replication factor. All writes go to the leader of the partition, and reads can go to the leader or the followers of the partition. Typically, there are many more partitions than brokers and the leaders are evenly distributed among brokers. The logs on the followers are identical to the leader’s log–all have the same offsets and messages in the same order (though, of course, at any given time the leader may have a few as-yet unreplicated messages at the end of its log).

Followers consume messages from the leader just as a normal Kafka consumer would and apply them to their own log. Having the followers pull from the leader has the nice property of allowing the follower to naturally batch together log entries they are applying to their log.

As with most distributed systems, automatically handling failures requires a precise definition of what it means for a node to be “alive.” In Kafka, a special node known as the “controller” is responsible for managing the registration of brokers in the cluster. Broker liveness has two conditions:

  1. Brokers must maintain an active session with the controller in order to receive regular metadata updates.
  2. Brokers acting as followers must replicate the writes from the leader and not fall “too far” behind.

What is meant by an “active session” depends on the cluster configuration. For KRaft clusters, an active session is maintained by sending periodic heartbeats to the controller. If the controller fails to receive a heartbeat before the timeout configured by broker.session.timeout.ms expires, then the node is considered offline.

We refer to nodes satisfying these two conditions as being “in sync” to avoid the vagueness of “alive” or “failed”. The leader keeps track of the set of “in sync” replicas, which is known as the ISR. If either of these conditions fail to be satisfied, then the broker will be removed from the ISR. For example, if a follower dies, then the controller will notice the failure through the loss of its session, and will remove the broker from the ISR. On the other hand, if the follower lags too far behind the leader but still has an active session, then the leader can also remove it from the ISR. The determination of lagging replicas is controlled through the replica.lag.time.max.ms configuration. Replicas that cannot catch up to the end of the log on the leader within the max time set by this configuration are removed from the ISR.

In distributed systems terminology we only attempt to handle a “fail/recover” model of failures where nodes suddenly cease working and then later recover (perhaps without knowing that they have died). Kafka does not handle so-called “Byzantine” failures in which nodes produce arbitrary or malicious responses (perhaps due to bugs or foul play).

We can now more precisely define that a message is considered committed when all replicas in the ISR for that partition have applied it to their log. Only committed messages are ever given out to the consumer. This means that the consumer need not worry about potentially seeing a message that could be lost if the leader fails. Producers, on the other hand, have the option of either waiting for the message to be committed or not, depending on their preference for the tradeoff between latency and durability. This preference is controlled by the acks setting that the producer uses. Note that topics have a setting for the minimum number of in-sync replicas (min.insync.replicas) that is checked when the producer requests acknowledgment that a message has been written to the full set of in-sync replicas. If a less stringent acknowledgment is requested by the producer, then the message is committed asynchronously across the set of in-sync replicas if acks=0, or synchronously only on the leader if acks=1. Regardless of the acks setting, the messages will not be visible to the consumers until all the following conditions are met:

  1. The messages are replicated to all the in-sync replicas.
  2. The number of the in-sync replicas is no less than the min.insync.replicas setting.

The guarantee that Kafka offers is that a committed message will not be lost, as long as there is at least one in-sync replica alive at all times.
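
For illustration, here is a minimal sketch of a producer that opts for the strongest acknowledgement via the acks setting described above; the broker address and topic name are hypothetical placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DurableProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");   // wait until the message is committed to all in-sync replicas
        // acks=1 would wait only for the leader; acks=0 would not wait for any acknowledgement.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));  // hypothetical topic
            producer.flush();
        }
    }
}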

Kafka will remain available in the presence of node failures after a short fail-over period, but may not remain available in the presence of network partitions.

Replicated Logs: Quorums, ISRs, and State Machines (Oh my!)

At its heart a Kafka partition is a replicated log. The replicated log is one of the most basic primitives in distributed data systems, and there are many approaches for implementing one. A replicated log can be used by other systems as a primitive for implementing other distributed systems in the state-machine style.

A replicated log models the process of coming into consensus on the order of a series of values (generally numbering the log entries 0, 1, 2, …). There are many ways to implement this, but the simplest and fastest is with a leader who chooses the ordering of values provided to it. As long as the leader remains alive, all followers need to only copy the values and ordering the leader chooses.

Of course if leaders didn’t fail we wouldn’t need followers! When the leader does die we need to choose a new leader from among the followers. But followers themselves may fall behind or crash so we must ensure we choose an up-to-date follower. The fundamental guarantee a log replication algorithm must provide is that if we tell the client a message is committed, and the leader fails, the new leader we elect must also have that message. This yields a tradeoff: if the leader waits for more followers to acknowledge a message before declaring it committed then there will be more potentially electable leaders.

If you choose the number of acknowledgements required and the number of logs that must be compared to elect a leader such that there is guaranteed to be an overlap, then this is called a Quorum.

A common approach to this tradeoff is to use a majority vote for both the commit decision and the leader election. This is not what Kafka does, but let’s explore it anyway to understand the tradeoffs. Let’s say we have 2 f +1 replicas. If f +1 replicas must receive a message prior to a commit being declared by the leader, and if we elect a new leader by electing the follower with the most complete log from at least f +1 replicas, then, with no more than f failures, the leader is guaranteed to have all committed messages. This is because among any f +1 replicas, there must be at least one replica that contains all committed messages. That replica’s log will be the most complete and therefore will be selected as the new leader. There are many remaining details that each algorithm must handle (such as precisely defining what makes a log more complete, ensuring log consistency during leader failure or changing the set of servers in the replica set) but we will ignore these for now.

This majority vote approach has a very nice property: the latency is dependent on only the fastest servers. That is, if the replication factor is three, the latency is determined by the faster follower, not the slower one.

There are a rich variety of algorithms in this family including ZooKeeper’s Zab, Raft, and Viewstamped Replication. The most similar academic publication we are aware of to Kafka’s actual implementation is PacificA from Microsoft.

The downside of majority vote is that it doesn’t take many failures to leave you with no electable leaders. To tolerate one failure requires three copies of the data, and to tolerate two failures requires five copies of the data. In our experience having only enough redundancy to tolerate a single failure is not enough for a practical system, but doing every write five times, with 5x the disk space requirements and 1/5th the throughput, is not very practical for large volume data problems. This is likely why quorum algorithms more commonly appear for shared cluster configuration such as ZooKeeper but are less common for primary data storage. For example in HDFS the namenode’s high-availability feature is built on a majority-vote-based journal, but this more expensive approach is not used for the data itself.

Kafka takes a slightly different approach to choosing its quorum set. Instead of majority vote, Kafka dynamically maintains a set of in-sync replicas (ISR) that are caught-up to the leader. Only members of this set are eligible for election as leader. A write to a Kafka partition is not considered committed until all in-sync replicas have received the write. This ISR set is persisted in the cluster metadata whenever it changes. Because of this, any replica in the ISR is eligible to be elected leader. This is an important factor for Kafka’s usage model where there are many partitions and ensuring leadership balance is important. With this ISR model and f+1 replicas, a Kafka topic can tolerate f failures without losing committed messages.

For most use cases we hope to handle, we think this tradeoff is a reasonable one. In practice, to tolerate f failures, both the majority vote and the ISR approach will wait for the same number of replicas to acknowledge before committing a message (e.g. to survive one failure a majority quorum needs three replicas and one acknowledgement and the ISR approach requires two replicas and one acknowledgement). The ability to commit without the slowest servers is an advantage of the majority vote approach. However, we think it is ameliorated by allowing the client to choose whether they block on the message commit or not, and the additional throughput and disk space due to the lower required replication factor is worth it.

Another important design distinction is that Kafka does not require that crashed nodes recover with all their data intact. It is not uncommon for replication algorithms in this space to depend on the existence of “stable storage” that cannot be lost in any failure-recovery scenario without potential consistency violations. There are two primary problems with this assumption. First, disk errors are the most common problem we observe in real operation of persistent data systems and they often do not leave data intact. Secondly, even if this were not a problem, we do not want to require the use of fsync on every write for our consistency guarantees as this can reduce performance by two to three orders of magnitude. Our protocol for allowing a replica to rejoin the ISR ensures that before rejoining, it must fully re-sync again even if it lost unflushed data in its crash.

Unclean leader election: What if they all die?

Note that Kafka’s guarantee with respect to data loss is predicated on at least one replica remaining in sync. If all the nodes replicating a partition die, this guarantee no longer holds.

However a practical system needs to do something reasonable when all the replicas die. If you are unlucky enough to have this occur, it is important to consider what will happen. There are two behaviors that could be implemented:

  1. Wait for a replica in the ISR to come back to life and choose this replica as the leader (hopefully it still has all its data).
  2. Choose the first replica (not necessarily in the ISR) that comes back to life as the leader.

This is a simple tradeoff between availability and consistency. If we wait for replicas in the ISR, then we will remain unavailable as long as those replicas are down. If such replicas were destroyed or their data was lost, then we are permanently down. If, on the other hand, a non-in-sync replica comes back to life and we allow it to become leader, then its log becomes the source of truth even though it is not guaranteed to have every committed message. By default from version 0.11.0.0, Kafka chooses the first strategy and favors waiting for a consistent replica. This behavior can be changed using the configuration property unclean.leader.election.enable, to support use cases where uptime is preferable to consistency.

This dilemma is not specific to Kafka. It exists in any quorum-based scheme. For example in a majority voting scheme, if a majority of servers suffer a permanent failure, then you must either choose to lose 100% of your data or violate consistency by taking what remains on an existing server as your new source of truth.

Availability and Durability Guarantees

When writing to Kafka, producers can choose whether they wait for the message to be acknowledged by 0, 1, or all (-1) replicas. Note that “acknowledgement by all replicas” does not guarantee that the full set of assigned replicas have received the message. By default, when acks=all, acknowledgement happens as soon as all the current in-sync replicas have received the message. For example, if a topic is configured with only two replicas and one fails (i.e., only one in-sync replica remains), then writes that specify acks=all will succeed. However, these writes could be lost if the remaining replica also fails. Although this ensures maximum availability of the partition, this behavior may be undesirable to some users who prefer durability over availability. Therefore, we provide two topic-level configurations that can be used to prefer message durability over availability (a configuration sketch follows this list):

  1. Disable unclean leader election - if all replicas become unavailable, then the partition will remain unavailable until the most recent leader becomes available again. This effectively prefers unavailability over the risk of message loss. See the previous section on Unclean Leader Election for clarification.
  2. Specify a minimum ISR size - the partition will only accept writes if the size of the ISR is above a certain minimum, in order to prevent the loss of messages that were written to just a single replica, which subsequently becomes unavailable. This setting only takes effect if the producer uses acks=all and guarantees that the message will be acknowledged by at least this many in-sync replicas. This setting offers a trade-off between consistency and availability. A higher setting for minimum ISR size guarantees better consistency since the message is guaranteed to be written to more replicas which reduces the probability that it will be lost. However, it reduces availability since the partition will be unavailable for writes if the number of in-sync replicas drops below the minimum threshold.
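
As an illustrative sketch only (the topic name and broker address are hypothetical), both preferences can be applied to an existing topic with the Admin client:

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DurabilityConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "example-topic");
            Collection<AlterConfigOp> ops = List.of(
                // Require at least two in-sync replicas for acks=all writes to succeed.
                new AlterConfigOp(new ConfigEntry("min.insync.replicas", "2"), AlterConfigOp.OpType.SET),
                // Disable unclean leader election for this topic.
                new AlterConfigOp(new ConfigEntry("unclean.leader.election.enable", "false"), AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}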

Replica Management

The above discussion on replicated logs really covers only a single log, i.e. one topic partition. However a Kafka cluster will manage hundreds or thousands of these partitions. We attempt to balance partitions within a cluster in a round-robin fashion to avoid clustering all partitions for high-volume topics on a small number of nodes. Likewise we try to balance leadership so that each node is the leader for a proportional share of its partitions.

It is also important to optimize the leadership election process as that is the critical window of unavailability. A naive implementation of leader election would end up running an election per partition for all partitions a node hosted when that node failed. As discussed above in the section on replication, Kafka clusters have a special role known as the “controller” which is responsible for managing the registration of brokers. If the controller detects the failure of a broker, it is responsible for electing one of the remaining members of the ISR to serve as the new leader. The result is that we are able to batch together many of the required leadership change notifications which makes the election process far cheaper and faster for a large number of partitions. If the controller itself fails, then another controller will be elected.

Log Compaction

Log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition. It addresses use cases and scenarios such as restoring state after application crashes or system failure, or reloading caches after application restarts during operational maintenance. Let’s dive into these use cases in more detail and then describe how compaction works.

So far we have described only the simpler approach to data retention where old log data is discarded after a fixed period of time or when the log reaches some predetermined size. This works well for temporal event data such as logging where each record stands alone. However an important class of data streams is the log of changes to keyed, mutable data (for example, the changes to a database table).

Let’s discuss a concrete example of such a stream. Say we have a topic containing user email addresses; every time a user updates their email address we send a message to this topic using their user id as the primary key. Now say we send the following messages over some time period for a user with id 123, each message corresponding to a change in email address (messages for other ids are omitted):

123 => bill@microsoft.com
        .
        .
        .
123 => bill@gatesfoundation.org
        .
        .
        .
123 => bill@gmail.com

Log compaction gives us a more granular retention mechanism so that we are guaranteed to retain at least the last update for each primary key (e.g. bill@gmail.com). By doing this we guarantee that the log contains a full snapshot of the final value for every key not just keys that changed recently. This means downstream consumers can restore their own state off this topic without us having to retain a complete log of all changes.

Let’s start by looking at a few use cases where this is useful, then we’ll see how it can be used.

  1. Database change subscription. It is often necessary to have a data set in multiple data systems, and often one of these systems is a database of some kind (either an RDBMS or perhaps a new-fangled key-value store). For example you might have a database, a cache, a search cluster, and a Hadoop cluster. Each change to the database will need to be reflected in the cache, the search cluster, and eventually in Hadoop. If one is only handling the real-time updates, only the recent log is needed. But if you want to be able to reload the cache or restore a failed search node you may need a complete data set.
  2. Event sourcing. This is a style of application design which co-locates query processing with application design and uses a log of changes as the primary store for the application.
  3. Journaling for high-availability. A process that does local computation can be made fault-tolerant by logging out changes that it makes to its local state so another process can reload these changes and carry on if it should fail. A concrete example of this is handling counts, aggregations, and other “group by”-like processing in a stream query system. Samza, a real-time stream-processing framework, uses this feature for exactly this purpose.

In each of these cases one needs primarily to handle the real-time feed of changes, but occasionally, when a machine crashes or data needs to be re-loaded or re-processed, one needs to do a full load. Log compaction allows feeding both of these use cases off the same backing topic. This style of usage of a log is described in more detail in this blog post.

The general idea is quite simple. If we had infinite log retention, and we logged each change in the above cases, then we would have captured the state of the system at each time from when it first began. Using this complete log, we could restore to any point in time by replaying the first N records in the log. This hypothetical complete log is not very practical for systems that update a single record many times as the log will grow without bound even for a stable dataset. The simple log retention mechanism which throws away old updates will bound space but the log is no longer a way to restore the current state–now restoring from the beginning of the log no longer recreates the current state as old updates may not be captured at all.

Log compaction is a mechanism to give finer-grained per-record retention, rather than the coarser-grained time-based retention. The idea is to selectively remove records where we have a more recent update with the same primary key. This way the log is guaranteed to have at least the last state for each key.

This retention policy can be set per-topic, so a single cluster can have some topics where retention is enforced by size or time and other topics where retention is enforced by compaction.

This functionality is inspired by one of LinkedIn’s oldest and most successful pieces of infrastructure–a database changelog caching service called Databus. Unlike most log-structured storage systems Kafka is built for subscription and organizes data for fast linear reads and writes. Unlike Databus, Kafka acts as a source-of-truth store so it is useful even in situations where the upstream data source would not otherwise be replayable.

Log Compaction Basics

Here is a high-level picture that shows the logical structure of a Kafka log with the offset for each message.

The head of the log is identical to a traditional Kafka log. It has dense, sequential offsets and retains all messages. Log compaction adds an option for handling the tail of the log. The picture above shows a log with a compacted tail. Note that the messages in the tail of the log retain the original offset assigned when they were first written–that never changes. Note also that all offsets remain valid positions in the log, even if the message with that offset has been compacted away; in this case this position is indistinguishable from the next highest offset that does appear in the log. For example, in the picture above the offsets 36, 37, and 38 are all equivalent positions and a read beginning at any of these offsets would return a message set beginning with 38.

Compaction also allows for deletes. A message with a key and a null payload will be treated as a delete from the log. Such a record is sometimes referred to as a tombstone. This delete marker will cause any prior message with that key to be removed (as would any new message with that key), but delete markers are special in that they will themselves be cleaned out of the log after a period of time to free up space. The point in time at which deletes are no longer retained is marked as the “delete retention point” in the above diagram.
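
Continuing the email-address example above, a keyed update and a tombstone might be produced as in the following sketch; the topic name and broker address are hypothetical, and the topic is assumed to be configured for compaction.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompactedTopicSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Latest update for key 123; compaction will eventually drop older values for this key.
            producer.send(new ProducerRecord<>("user-emails", "123", "bill@gmail.com"));
            // Tombstone: a null value marks key 123 for deletion; the marker itself is cleaned out later.
            producer.send(new ProducerRecord<>("user-emails", "123", null));
            producer.flush();
        }
    }
}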

The compaction is done in the background by periodically recopying log segments. Cleaning does not block reads and can be throttled to use no more than a configurable amount of I/O throughput to avoid impacting producers and consumers. The actual process of compacting a log segment looks something like this:

What guarantees does log compaction provide?

Log compaction guarantees the following:

  1. Any consumer that stays caught-up to within the head of the log will see every message that is written; these messages will have sequential offsets. The topic’s min.compaction.lag.ms can be used to guarantee the minimum length of time that must pass after a message is written before it can be compacted. I.e. it provides a lower bound on how long each message will remain in the (uncompacted) head. The topic’s max.compaction.lag.ms can be used to guarantee the maximum delay between the time a message is written and the time the message becomes eligible for compaction.
  2. Ordering of messages is always maintained. Compaction will never re-order messages, just remove some.
  3. The offset for a message never changes. It is the permanent identifier for a position in the log.
  4. Any consumer progressing from the start of the log will see at least the final state of all records in the order they were written. Additionally, all delete markers for deleted records will be seen, provided the consumer reaches the head of the log in a time period less than the topic’s delete.retention.ms setting (the default is 24 hours). In other words: since the removal of delete markers happens concurrently with reads, it is possible for a consumer to miss delete markers if it lags by more than delete.retention.ms.

Log Compaction Details

Log compaction is handled by the log cleaner, a pool of background threads that recopy log segment files, removing records whose key appears in the head of the log. Each compactor thread works as follows:

  1. It chooses the log that has the highest ratio of log head to log tail
  2. It creates a succinct summary of the last offset for each key in the head of the log
  3. It recopies the log from beginning to end removing keys which have a later occurrence in the log. New, clean segments are swapped into the log immediately so the additional disk space required is just one additional log segment (not a full copy of the log).
  4. The summary of the log head is essentially just a space-compact hash table. It uses exactly 24 bytes per entry. As a result, with 8GB of cleaner buffer, one cleaner iteration can clean around 366GB of log head (assuming 1k messages).

Configuring The Log Cleaner

The log cleaner is enabled by default. This will start the pool of cleaner threads. To enable log cleaning on a particular topic, add the log-specific property

log.cleanup.policy=compact

The log.cleanup.policy property is a broker configuration setting defined in the broker’s server.properties file; it affects all of the topics in the cluster that do not have a configuration override in place as documented here. The log cleaner can be configured to retain a minimum amount of the uncompacted “head” of the log. This is enabled by setting the compaction time lag.

log.cleaner.min.compaction.lag.ms

This can be used to prevent messages newer than a minimum message age from being subject to compaction. If not set, all log segments are eligible for compaction except for the last segment, i.e. the one currently being written to. The active segment will not be compacted even if all of its messages are older than the minimum compaction time lag. The log cleaner can be configured to ensure a maximum delay after which the uncompacted “head” of the log becomes eligible for log compaction.

log.cleaner.max.compaction.lag.ms

This can be used to prevent logs with a low produce rate from remaining ineligible for compaction for an unbounded duration. If not set, logs that do not exceed min.cleanable.dirty.ratio are not compacted. Note that this compaction deadline is not a hard guarantee since it is still subject to the availability of log cleaner threads and the actual compaction time. You will want to monitor the uncleanable-partitions-count, max-clean-time-secs and max-compaction-delay-secs metrics.

Further cleaner configurations are described here.
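
For reference, a sketch of creating a topic configured for compaction with the Admin client follows. The topic name, partition and replica counts, and lag values are illustrative assumptions; cleanup.policy, min.compaction.lag.ms, and max.compaction.lag.ms are the topic-level counterparts of the broker settings above.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactedTopicCreationSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("user-emails", 3, (short) 3)   // hypothetical name, partitions, replication
                .configs(Map.of(
                    "cleanup.policy", "compact",
                    "min.compaction.lag.ms", "60000",       // keep the head uncompacted for at least one minute
                    "max.compaction.lag.ms", "86400000"));  // make messages eligible for compaction within a day
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}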

Quotas

A Kafka cluster has the ability to enforce quotas on requests to control the broker resources used by clients. Two types of client quotas can be enforced by Kafka brokers for each group of clients sharing a quota:

  1. Network bandwidth quotas define byte-rate thresholds (since 0.9)
  2. Request rate quotas define CPU utilization thresholds as a percentage of network and I/O threads (since 0.11)

Why are quotas necessary?

It is possible for producers and consumers to produce/consume very high volumes of data or generate requests at a very high rate and thus monopolize broker resources, cause network saturation and generally DOS other clients and the brokers themselves. Having quotas protects against these issues and is all the more important in large multi-tenant clusters where a small set of badly behaved clients can degrade user experience for the well behaved ones. In fact, when running Kafka as a service this even makes it possible to enforce API limits according to an agreed upon contract.

Client groups

The identity of Kafka clients is the user principal which represents an authenticated user in a secure cluster. In a cluster that supports unauthenticated clients, user principal is a grouping of unauthenticated users chosen by the broker using a configurable PrincipalBuilder. Client-id is a logical grouping of clients with a meaningful name chosen by the client application. The tuple (user, client-id) defines a secure logical group of clients that share both user principal and client-id.

Quotas can be applied to (user, client-id), user or client-id groups. For a given connection, the most specific quota matching the connection is applied. All connections of a quota group share the quota configured for the group. For example, if (user=“test-user”, client-id=“test-client”) has a produce quota of 10MB/sec, this is shared across all producer instances of user “test-user” with the client-id “test-client”.

Quota Configuration

Quota configuration may be defined for (user, client-id), user and client-id groups. It is possible to override the default quota at any of the quota levels that need a higher (or even lower) quota. The mechanism is similar to the per-topic log config overrides. User and (user, client-id) quota overrides are written to the metadata log. These overrides are read by all brokers and are effective immediately. This lets us change quotas without having to do a rolling restart of the entire cluster. See here for details. Default quotas for each group may also be updated dynamically using the same mechanism.
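
As an illustrative sketch (the user, client-id, and broker address are hypothetical), a produce quota for a (user, client-id) group can be set dynamically with the Admin client:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class QuotaConfigSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Quota group (user="test-user", client-id="test-client"), as in the example above.
            ClientQuotaEntity entity = new ClientQuotaEntity(Map.of(
                ClientQuotaEntity.USER, "test-user",
                ClientQuotaEntity.CLIENT_ID, "test-client"));
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(entity, List.of(
                new ClientQuotaAlteration.Op("producer_byte_rate", 10.0 * 1024 * 1024)));  // 10 MB/sec per broker
            admin.alterClientQuotas(List.of(alteration)).all().get();
        }
    }
}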

The order of precedence for quota configuration is:

  1. matching user and client-id quotas
  2. matching user and default client-id quotas
  3. matching user quota
  4. default user and matching client-id quotas
  5. default user and default client-id quotas
  6. default user quota
  7. matching client-id quota
  8. default client-id quota

Network Bandwidth Quotas

Network bandwidth quotas are defined as the byte rate threshold for each group of clients sharing a quota. By default, each unique client group receives a fixed quota in bytes/sec as configured by the cluster. This quota is defined on a per-broker basis. Each group of clients can publish/fetch a maximum of X bytes/sec per broker before clients are throttled.

Request Rate Quotas

Request rate quotas are defined as the percentage of time a client can utilize on request handler I/O threads and network threads of each broker within a quota window. A quota of n% represents n% of one thread, so the quota is out of a total capacity of ((num.io.threads + num.network.threads) * 100)%. Each group of clients may use a total percentage of up to n% across all I/O and network threads in a quota window before being throttled. Since the number of threads allocated for I/O and network threads is typically based on the number of cores available on the broker host, request rate quotas represent the total percentage of CPU that may be used by each group of clients sharing the quota.

Enforcement

By default, each unique client group receives a fixed quota as configured by the cluster. This quota is defined on a per-broker basis. Each client can utilize this quota per broker before it gets throttled. We decided that defining these quotas per broker is much better than having a fixed cluster wide bandwidth per client because that would require a mechanism to share client quota usage among all the brokers. This can be harder to get right than the quota implementation itself!

How does a broker react when it detects a quota violation? In our solution, the broker first computes the amount of delay needed to bring the violating client under its quota and returns a response with the delay immediately. In case of a fetch request, the response will not contain any data. Then, the broker mutes the channel to the client so that it does not process any further requests from the client until the delay is over. Upon receiving a response with a non-zero delay duration, the Kafka client will also refrain from sending further requests to the broker during the delay. Therefore, requests from a throttled client are effectively blocked from both sides. Even with older client implementations that do not respect the delay response from the broker, the back pressure applied by the broker via muting its socket channel can still handle the throttling of badly behaving clients. Clients that send further requests on the throttled channel will receive responses only after the delay is over.

Byte-rate and thread utilization are measured over multiple small windows (e.g. 30 windows of 1 second each) in order to detect and correct quota violations quickly. Typically, having large measurement windows (e.g. 10 windows of 30 seconds each) leads to large bursts of traffic followed by long delays, which is not great in terms of user experience.

4.2 - Protocol

Kafka protocol guide

This document covers the wire protocol implemented in Kafka. It is meant to give a readable guide to the protocol that covers the available requests, their binary format, and the proper way to make use of them to implement a client. This document assumes you understand the basic design and terminology described here.

  • Preliminaries
    • Network
    • Partitioning and bootstrapping
    • Partitioning Strategies
    • Batching
    • Versioning and Compatibility
    • Retrieving Supported API versions
    • SASL Authentication Sequence
  • The Protocol
    • Protocol Primitive Types
    • Notes on reading the request format grammars
    • Common Request and Response Structure
    • Request and Response Headers
    • Record Batch
  • Constants
    • Error Codes
    • Api Keys
  • The Messages
  • Some Common Philosophical Questions

Preliminaries

Network

Kafka uses a binary protocol over TCP. The protocol defines all APIs as request-response message pairs. All messages are size-delimited and are made up of the primitive types described below.

The client initiates a socket connection and then writes a sequence of request messages and reads back the corresponding response message. No handshake is required on connection or disconnection. TCP is happier if you maintain persistent connections used for many requests to amortize the cost of the TCP handshake, but beyond this penalty connecting is pretty cheap.

The client will likely need to maintain a connection to multiple brokers, as data is partitioned and the clients will need to talk to the server that has their data. However it should not generally be necessary to maintain multiple connections to a single broker from a single client instance (i.e. connection pooling).

The server guarantees that on a single TCP connection, requests will be processed in the order they are sent and responses will return in that order as well. The broker’s request processing allows only a single in-flight request per connection in order to guarantee this ordering. Note that clients can (and ideally should) use non-blocking IO to implement request pipelining and achieve higher throughput. i.e., clients can send requests even while awaiting responses for preceding requests since the outstanding requests will be buffered in the underlying OS socket buffer. All requests are initiated by the client, and result in a corresponding response message from the server except where noted.

The server has a configurable maximum limit on request size and any request that exceeds this limit will result in the socket being disconnected.

Partitioning and bootstrapping

Kafka is a partitioned system so not all servers have the complete data set. Instead recall that topics are split into a pre-defined number of partitions, P, and each partition is replicated with some replication factor, N. Topic partitions themselves are just ordered “commit logs” numbered 0, 1, …, P-1.

All systems of this nature have the question of how a particular piece of data is assigned to a particular partition. Kafka clients directly control this assignment; the brokers themselves enforce no particular semantics of which messages should be published to a particular partition. Rather, to publish messages the client directly addresses messages to a particular partition, and when fetching messages, fetches from a particular partition. If two clients want to use the same partitioning scheme they must use the same method to compute the mapping of key to partition.

These requests to publish or fetch data must be sent to the broker that is currently acting as the leader for a given partition. This condition is enforced by the broker, so a request for a particular partition sent to the wrong broker will result in the NotLeaderForPartition error code (described below).

How can the client find out which topics exist, what partitions they have, and which brokers currently host those partitions so that it can direct its requests to the right hosts? This information is dynamic, so you can’t just configure each client with some static mapping file. Instead all Kafka brokers can answer a metadata request that describes the current state of the cluster: what topics there are, which partitions those topics have, which broker is the leader for those partitions, and the host and port information for these brokers.

In other words, the client needs to somehow find one broker and that broker will tell the client about all the other brokers that exist and what partitions they host. This first broker may itself go down so the best practice for a client implementation is to take a list of two or three URLs to bootstrap from. The user can then choose to use a load balancer or just statically configure two or three of their Kafka hosts in the clients.

The client does not need to keep polling to see if the cluster has changed; it can fetch metadata once when it is instantiated and cache that metadata until it receives an error indicating that the metadata is out of date. This error can come in two forms: (1) a socket error indicating the client cannot communicate with a particular broker, (2) an error code in the response to a request indicating that this broker no longer hosts the partition for which data was requested.

  1. Cycle through a list of “bootstrap” Kafka URLs until we find one we can connect to. Fetch cluster metadata.
  2. Process fetch or produce requests, directing them to the appropriate broker based on the topic/partitions they send to or fetch from.
  3. If we get an appropriate error, refresh the metadata and try again.
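
The high-level Java clients implement this bootstrap-and-refresh cycle internally. Purely for illustration, the same cluster metadata can be inspected explicitly with the Admin client; the broker addresses and topic name below are hypothetical.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;

public class MetadataSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // A small list of bootstrap brokers; any one of them can serve the initial metadata.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        try (Admin admin = Admin.create(props)) {
            TopicDescription description =
                admin.describeTopics(List.of("example-topic")).allTopicNames().get().get("example-topic");
            description.partitions().forEach(p ->
                System.out.printf("partition %d leader %s replicas %s%n",
                    p.partition(), p.leader(), p.replicas()));
        }
    }
}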

Partitioning Strategies

As mentioned above the assignment of messages to partitions is something the producing client controls. That said, how should this functionality be exposed to the end-user?

Partitioning really serves two purposes in Kafka:

  1. It balances data and request load over brokers
  2. It serves as a way to divvy up processing among consumer processes while allowing local state and preserving order within the partition. We call this semantic partitioning.

For a given use case you may care about only one of these or both.

To accomplish simple load balancing a simple approach would be for the client to just round robin requests over all brokers. Another alternative, in an environment where there are many more producers than brokers, would be to have each client choose a single partition at random and publish to that. This latter strategy will result in far fewer TCP connections.

Semantic partitioning means using some key in the message to assign messages to partitions. For example if you were processing a click message stream you might want to partition the stream by the user id so that all data for a particular user would go to a single consumer. To accomplish this the client can take a key associated with the message and use some hash of this key to choose the partition to which to deliver the message.
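
For illustration, a minimal sketch of such key-based partition selection is shown below. It is not the Java producer’s built-in algorithm (which hashes the serialized key bytes with murmur2), but it shows the idea that clients must agree on the same key-to-partition mapping.

// A minimal sketch of key-based (semantic) partitioning: the same key always maps to the same partition.
public final class KeyPartitioner {
    private KeyPartitioner() {}

    public static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the hash is non-negative, then take the modulus.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // All events for user 123 land in the same partition, preserving per-user ordering.
        System.out.println(partitionFor("user-123", 6));
    }
}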

Batching

Our APIs encourage batching small things together for efficiency. We have found this is a very significant performance win. Both our API to send messages and our API to fetch messages always work with a sequence of messages not a single message to encourage this. A clever client can make use of this and support an “asynchronous” mode in which it batches together messages sent individually and sends them in larger clumps. We go even further with this and allow the batching across multiple topics and partitions, so a produce request may contain data to append to many partitions and a fetch request may pull data from many partitions all at once.

The client implementer can choose to ignore this and send everything one at a time if they like.

Versioning and Compatibility

Kafka has a “bidirectional” client compatibility policy. In other words, new clients can talk to old servers, and old clients can talk to new servers. This allows users to upgrade either clients or servers without experiencing any downtime.

Since the Kafka protocol has changed over time, clients and servers need to agree on the schema of the message that they are sending over the wire. This is done through API versioning.

Before each request is sent, the client sends the API key and the API version. These two 16-bit numbers, when taken together, uniquely identify the schema of the message to follow.

The intention is that clients will support a range of API versions. When communicating with a particular broker, a given client should use the highest API version supported by both and indicate this version in its requests.

The server will reject requests with a version it does not support, and will always respond to the client with exactly the protocol format it expects based on the version it included in its request. The intended upgrade path is that new features would first be rolled out on the server (with the older clients not making use of them) and then as newer clients are deployed these new features would gradually be taken advantage of. Note there is an exceptional case while retrieving supported API versions where the server can respond with a different version.

Note that KIP-482 tagged fields can be added to a request without incrementing the version number. This offers an additional way of evolving the message schema without breaking compatibility. Tagged fields do not take up any space when the field is not set. Therefore, if a field is rarely used, it is more efficient to make it a tagged field than to put it in the mandatory schema. However, tagged fields are ignored by recipients that don’t know about them, which could pose a challenge if this is not the behavior that the sender wants. In such cases, a version bump may be more appropriate.

Retrieving Supported API versions

In order to work against multiple broker versions, clients need to know what versions of various APIs a broker supports. The broker exposes this information since 0.10.0.0 as described in KIP-35. Clients should use the supported API versions information to choose the highest API version supported by both client and broker. If no such version exists, an error should be reported to the user.

The following sequence may be used by a client to obtain supported API versions from a broker.

  1. Client sends ApiVersionsRequest to a broker after connection has been established with the broker. If SSL is enabled, this happens after SSL connection has been established.
  2. On receiving ApiVersionsRequest, a broker returns its full list of supported ApiKeys and versions regardless of current authentication state (e.g. before SASL authentication on a SASL listener; note that no Kafka protocol requests may take place on an SSL listener before the SSL handshake is finished). If this is considered to leak information about the broker version, a workaround is to use SSL with client authentication, which is performed at an earlier stage of the connection where the ApiVersionsRequest is not available. Also note that broker versions older than 0.10.0.0 do not support this API and will either ignore the request or close the connection in response to it. Also note that if the client’s ApiVersionsRequest version is unsupported by the broker (client is ahead), and the broker version is 2.4.0 or greater, then the broker will respond with a version 0 ApiVersionsResponse with the error code set to UNSUPPORTED_VERSION and the api_versions field populated with the supported version of the ApiVersionsRequest. It is then up to the client to retry, making another ApiVersionsRequest using the highest version supported by the client and broker. See KIP-511: Collect and Expose Client’s Name and Version in the Brokers.
  3. If multiple versions of an API are supported by broker and client, clients are recommended to use the latest version supported by the broker and itself.
  4. Deprecation of a protocol version is done by marking an API version as deprecated in the protocol documentation.
  5. Supported API versions obtained from a broker are only valid for the connection on which that information is obtained. In the event of disconnection, the client should obtain the information from the broker again, as the broker might have been upgraded/downgraded in the meantime.

SASL Authentication Sequence

The following sequence is used for SASL authentication:

  1. Kafka ApiVersionsRequest may be sent by the client to obtain the version ranges of requests supported by the broker. This is optional.
  2. Kafka SaslHandshakeRequest containing the SASL mechanism for authentication is sent by the client. If the requested mechanism is not enabled in the server, the server responds with the list of supported mechanisms and closes the client connection. If the mechanism is enabled in the server, the server sends a successful response and continues with SASL authentication.
  3. The actual SASL authentication is now performed. If SaslHandshakeRequest version is v0, a series of SASL client and server tokens corresponding to the mechanism are sent as opaque packets without wrapping the messages with Kafka protocol headers. If SaslHandshakeRequest version is v1, the SaslAuthenticate request/response are used, where the actual SASL tokens are wrapped in the Kafka protocol. The error code in the final message from the broker will indicate if authentication succeeded or failed.
  4. If authentication succeeds, subsequent packets are handled as Kafka API requests. Otherwise, the client connection is closed.

For interoperability with 0.9.0.x clients, the first packet received by the server is handled as a SASL/GSSAPI client token if it is not a valid Kafka request. SASL/GSSAPI authentication is performed starting with this packet, skipping the first two steps above.

The Protocol

Protocol Primitive Types

The protocol is built out of the following primitive types.

Type | Description
BOOLEAN | Represents a boolean value in a byte. Values 0 and 1 are used to represent false and true respectively. When reading a boolean value, any non-zero value is considered true.
INT8 | Represents an integer between -2^7 and 2^7-1 inclusive.
INT16 | Represents an integer between -2^15 and 2^15-1 inclusive. The values are encoded using two bytes in network byte order (big-endian).
INT32 | Represents an integer between -2^31 and 2^31-1 inclusive. The values are encoded using four bytes in network byte order (big-endian).
INT64 | Represents an integer between -2^63 and 2^63-1 inclusive. The values are encoded using eight bytes in network byte order (big-endian).
UINT16 | Represents an integer between 0 and 65535 inclusive. The values are encoded using two bytes in network byte order (big-endian).
UINT32 | Represents an integer between 0 and 2^32-1 inclusive. The values are encoded using four bytes in network byte order (big-endian).
VARINT | Represents an integer between -2^31 and 2^31-1 inclusive. Encoding follows the variable-length zig-zag encoding from Google Protocol Buffers.
VARLONG | Represents an integer between -2^63 and 2^63-1 inclusive. Encoding follows the variable-length zig-zag encoding from Google Protocol Buffers.
UUID | Represents a type 4 immutable universally unique identifier (Uuid). The values are encoded using sixteen bytes in network byte order (big-endian).
FLOAT64 | Represents a double-precision 64-bit format IEEE 754 value. The values are encoded using eight bytes in network byte order (big-endian).
STRING | Represents a sequence of characters. First the length N is given as an INT16. Then N bytes follow which are the UTF-8 encoding of the character sequence. Length must not be negative.
COMPACT_STRING | Represents a sequence of characters. First the length N + 1 is given as an UNSIGNED_VARINT. Then N bytes follow which are the UTF-8 encoding of the character sequence.
NULLABLE_STRING | Represents a sequence of characters or null. For non-null strings, first the length N is given as an INT16. Then N bytes follow which are the UTF-8 encoding of the character sequence. A null value is encoded with length of -1 and there are no following bytes.
COMPACT_NULLABLE_STRING | Represents a sequence of characters. First the length N + 1 is given as an UNSIGNED_VARINT. Then N bytes follow which are the UTF-8 encoding of the character sequence. A null string is represented with a length of 0.
BYTES | Represents a raw sequence of bytes. First the length N is given as an INT32. Then N bytes follow.
COMPACT_BYTES | Represents a raw sequence of bytes. First the length N+1 is given as an UNSIGNED_VARINT. Then N bytes follow.
NULLABLE_BYTES | Represents a raw sequence of bytes or null. For non-null values, first the length N is given as an INT32. Then N bytes follow. A null value is encoded with length of -1 and there are no following bytes.
COMPACT_NULLABLE_BYTES | Represents a raw sequence of bytes. First the length N+1 is given as an UNSIGNED_VARINT. Then N bytes follow. A null object is represented with a length of 0.
RECORDS | Represents a sequence of Kafka records as NULLABLE_BYTES. For a detailed description of records see Message Sets.
COMPACT_RECORDS | Represents a sequence of Kafka records as COMPACT_NULLABLE_BYTES. For a detailed description of records see Message Sets.
ARRAY | Represents a sequence of objects of a given type T. Type T can be either a primitive type (e.g. STRING) or a structure. First, the length N is given as an INT32. Then N instances of type T follow. A null array is represented with a length of -1. In protocol documentation an array of T instances is referred to as [T].
COMPACT_ARRAY | Represents a sequence of objects of a given type T. Type T can be either a primitive type (e.g. STRING) or a structure. First, the length N + 1 is given as an UNSIGNED_VARINT. Then N instances of type T follow. A null array is represented with a length of 0. In protocol documentation an array of T instances is referred to as [T].

Notes on reading the request format grammars

The BNFs below give an exact context free grammar for the request and response binary format. The BNF is intentionally not compact in order to give human-readable names. As always in a BNF a sequence of productions indicates concatenation. When there are multiple possible productions these are separated with ‘|’ and may be enclosed in parentheses for grouping. The top-level definition is always given first and subsequent sub-parts are indented.

Common Request and Response Structure

All requests and responses originate from the following grammar which will be incrementally described through the rest of this document:

RequestOrResponse => Size (RequestMessage | ResponseMessage)
  Size => int32
Field | Description
message_size | The message_size field gives the size of the subsequent request or response message in bytes. The client can read requests by first reading this 4 byte size as an integer N, and then reading and parsing the subsequent N bytes of the request.

Request and Response Headers

Different request and response versions require different versions of the corresponding headers. These header versions are specified below together with API message descriptions.

Record Batch

A description of the record batch format can be found in the record batch section of the message format documentation.

Constants

Error Codes

We use numeric codes to indicate what problem occurred on the server. These can be translated by the client into exceptions, or into whatever error-handling mechanism is appropriate in the client language. Here is a table of the error codes currently in use; a minimal sketch of client-side handling follows the table.

Error | Code | Retriable | Description
UNKNOWN_SERVER_ERROR | -1 | False | The server experienced an unexpected error when processing the request.
NONE | 0 | False |
OFFSET_OUT_OF_RANGE | 1 | False | The requested offset is not within the range of offsets maintained by the server.
CORRUPT_MESSAGE | 2 | True | This message has failed its CRC checksum, exceeds the valid size, has a null key for a compacted topic, or is otherwise corrupt.
UNKNOWN_TOPIC_OR_PARTITION | 3 | True | This server does not host this topic-partition.
INVALID_FETCH_SIZE | 4 | False | The requested fetch size is invalid.
LEADER_NOT_AVAILABLE | 5 | True | There is no leader for this topic-partition as we are in the middle of a leadership election.
NOT_LEADER_OR_FOLLOWER | 6 | True | For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.
REQUEST_TIMED_OUT | 7 | True | The request timed out.
BROKER_NOT_AVAILABLE | 8 | False | The broker is not available.
REPLICA_NOT_AVAILABLE | 9 | True | The replica is not available for the requested topic-partition. Produce/Fetch requests and other requests intended only for the leader or follower return NOT_LEADER_OR_FOLLOWER if the broker is not a replica of the topic-partition.
MESSAGE_TOO_LARGE | 10 | False | The request included a message larger than the max message size the server will accept.
STALE_CONTROLLER_EPOCH | 11 | False | The controller moved to another broker.
OFFSET_METADATA_TOO_LARGE | 12 | False | The metadata field of the offset request was too large.
NETWORK_EXCEPTION | 13 | True | The server disconnected before a response was received.
COORDINATOR_LOAD_IN_PROGRESS | 14 | True | The coordinator is loading and hence can't process requests.
COORDINATOR_NOT_AVAILABLE | 15 | True | The coordinator is not available.
NOT_COORDINATOR | 16 | True | This is not the correct coordinator.
INVALID_TOPIC_EXCEPTION | 17 | False | The request attempted to perform an operation on an invalid topic.
RECORD_LIST_TOO_LARGE | 18 | False | The request included a message batch larger than the configured segment size on the server.
NOT_ENOUGH_REPLICAS | 19 | True | Messages are rejected since there are fewer in-sync replicas than required.
NOT_ENOUGH_REPLICAS_AFTER_APPEND | 20 | True | Messages are written to the log, but to fewer in-sync replicas than required.
INVALID_REQUIRED_ACKS | 21 | False | Produce request specified an invalid value for required acks.
ILLEGAL_GENERATION | 22 | False | Specified group generation id is not valid.
INCONSISTENT_GROUP_PROTOCOL | 23 | False | The group member's supported protocols are incompatible with those of existing members or first group member tried to join with empty protocol type or empty protocol list.
INVALID_GROUP_ID | 24 | False | The configured groupId is invalid.
UNKNOWN_MEMBER_ID | 25 | False | The coordinator is not aware of this member.
INVALID_SESSION_TIMEOUT | 26 | False | The session timeout is not within the range allowed by the broker (as configured by group.min.session.timeout.ms and group.max.session.timeout.ms).
REBALANCE_IN_PROGRESS | 27 | False | The group is rebalancing, so a rejoin is needed.
INVALID_COMMIT_OFFSET_SIZE | 28 | False | The committing offset data size is not valid.
TOPIC_AUTHORIZATION_FAILED | 29 | False | Topic authorization failed.
GROUP_AUTHORIZATION_FAILED | 30 | False | Group authorization failed.
CLUSTER_AUTHORIZATION_FAILED | 31 | False | Cluster authorization failed.
INVALID_TIMESTAMP | 32 | False | The timestamp of the message is out of acceptable range.
UNSUPPORTED_SASL_MECHANISM | 33 | False | The broker does not support the requested SASL mechanism.
ILLEGAL_SASL_STATE | 34 | False | Request is not valid given the current SASL state.
UNSUPPORTED_VERSION | 35 | False | The version of API is not supported.
TOPIC_ALREADY_EXISTS | 36 | False | Topic with this name already exists.
INVALID_PARTITIONS | 37 | False | Number of partitions is below 1.
INVALID_REPLICATION_FACTOR | 38 | False | Replication factor is below 1 or larger than the number of available brokers.
INVALID_REPLICA_ASSIGNMENT | 39 | False | Replica assignment is invalid.
INVALID_CONFIG | 40 | False | Configuration is invalid.
NOT_CONTROLLER | 41 | True | This is not the correct controller for this cluster.
INVALID_REQUEST | 42 | False | This most likely occurs because of a request being malformed by the client library or the message was sent to an incompatible broker. See the broker logs for more details.
UNSUPPORTED_FOR_MESSAGE_FORMAT | 43 | False | The message format version on the broker does not support the request.
POLICY_VIOLATION | 44 | False | Request parameters do not satisfy the configured policy.
OUT_OF_ORDER_SEQUENCE_NUMBER | 45 | False | The broker received an out of order sequence number.
DUPLICATE_SEQUENCE_NUMBER | 46 | False | The broker received a duplicate sequence number.
INVALID_PRODUCER_EPOCH | 47 | False | Producer attempted to produce with an old epoch.
INVALID_TXN_STATE | 48 | False | The producer attempted a transactional operation in an invalid state.
INVALID_PRODUCER_ID_MAPPING | 49 | False | The producer attempted to use a producer id which is not currently assigned to its transactional id.
INVALID_TRANSACTION_TIMEOUT | 50 | False | The transaction timeout is larger than the maximum value allowed by the broker (as configured by transaction.max.timeout.ms).
CONCURRENT_TRANSACTIONS | 51 | True | The producer attempted to update a transaction while another concurrent operation on the same transaction was ongoing.
TRANSACTION_COORDINATOR_FENCED | 52 | False | Indicates that the transaction coordinator sending a WriteTxnMarker is no longer the current coordinator for a given producer.
TRANSACTIONAL_ID_AUTHORIZATION_FAILED | 53 | False | Transactional Id authorization failed.
SECURITY_DISABLED | 54 | False | Security features are disabled.
OPERATION_NOT_ATTEMPTED | 55 | False | The broker did not attempt to execute this operation. This may happen for batched RPCs where some operations in the batch failed, causing the broker to respond without trying the rest.
KAFKA_STORAGE_ERROR | 56 | True | Disk error when trying to access log file on the disk.
LOG_DIR_NOT_FOUND | 57 | False | The user-specified log directory is not found in the broker config.
SASL_AUTHENTICATION_FAILED | 58 | False | SASL Authentication failed.
UNKNOWN_PRODUCER_ID | 59 | False | This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producerId are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
REASSIGNMENT_IN_PROGRESS | 60 | False | A partition reassignment is in progress.
DELEGATION_TOKEN_AUTH_DISABLED | 61 | False | Delegation Token feature is not enabled.
DELEGATION_TOKEN_NOT_FOUND | 62 | False | Delegation Token is not found on server.
DELEGATION_TOKEN_OWNER_MISMATCH | 63 | False | Specified Principal is not valid Owner/Renewer.
DELEGATION_TOKEN_REQUEST_NOT_ALLOWED | 64 | False | Delegation Token requests are not allowed on PLAINTEXT/1-way SSL channels and on delegation token authenticated channels.
DELEGATION_TOKEN_AUTHORIZATION_FAILED | 65 | False | Delegation Token authorization failed.
DELEGATION_TOKEN_EXPIRED | 66 | False | Delegation Token is expired.
INVALID_PRINCIPAL_TYPE | 67 | False | Supplied principalType is not supported.
NON_EMPTY_GROUP | 68 | False | The group is not empty.
GROUP_ID_NOT_FOUND | 69 | False | The group id does not exist.
FETCH_SESSION_ID_NOT_FOUND | 70 | True | The fetch session ID was not found.
INVALID_FETCH_SESSION_EPOCH | 71 | True | The fetch session epoch is invalid.
LISTENER_NOT_FOUND | 72 | True | There is no listener on the leader broker that matches the listener on which metadata request was processed.
TOPIC_DELETION_DISABLED | 73 | False | Topic deletion is disabled.
FENCED_LEADER_EPOCH | 74 | True | The leader epoch in the request is older than the epoch on the broker.
UNKNOWN_LEADER_EPOCH | 75 | True | The leader epoch in the request is newer than the epoch on the broker.
UNSUPPORTED_COMPRESSION_TYPE | 76 | False | The requesting client does not support the compression type of given partition.
STALE_BROKER_EPOCH | 77 | False | Broker epoch has changed.
OFFSET_NOT_AVAILABLE | 78 | True | The leader high watermark has not caught up from a recent leader election so the offsets cannot be guaranteed to be monotonically increasing.
MEMBER_ID_REQUIRED | 79 | False | The group member needs to have a valid member id before actually entering a consumer group.
PREFERRED_LEADER_NOT_AVAILABLE | 80 | True | The preferred leader was not available.
GROUP_MAX_SIZE_REACHED | 81 | False | The consumer group has reached its max size.
FENCED_INSTANCE_ID | 82 | False | The broker rejected this static consumer since another consumer with the same group.instance.id has registered with a different member.id.
ELIGIBLE_LEADERS_NOT_AVAILABLE | 83 | True | Eligible topic partition leaders are not available.
ELECTION_NOT_NEEDED | 84 | True | Leader election not needed for topic partition.
NO_REASSIGNMENT_IN_PROGRESS | 85 | False | No partition reassignment is in progress.
GROUP_SUBSCRIBED_TO_TOPIC | 86 | False | Deleting offsets of a topic is forbidden while the consumer group is actively subscribed to it.
INVALID_RECORD | 87 | False | This record has failed the validation on broker and hence will be rejected.
UNSTABLE_OFFSET_COMMIT | 88 | True | There are unstable offsets that need to be cleared.
THROTTLING_QUOTA_EXCEEDED | 89 | True | The throttling quota has been exceeded.
PRODUCER_FENCED | 90 | False | There is a newer producer with the same transactionalId which fences the current one.
RESOURCE_NOT_FOUND | 91 | False | A request illegally referred to a resource that does not exist.
DUPLICATE_RESOURCE | 92 | False | A request illegally referred to the same resource twice.
UNACCEPTABLE_CREDENTIAL | 93 | False | Requested credential would not meet criteria for acceptability.
INCONSISTENT_VOTER_SET | 94 | False | Indicates that either the sender or recipient of a voter-only request is not one of the expected voters.
INVALID_UPDATE_VERSION | 95 | False | The given update version was invalid.
FEATURE_UPDATE_FAILED | 96 | False | Unable to update finalized features due to an unexpected server error.
PRINCIPAL_DESERIALIZATION_FAILURE | 97 | False | Request principal deserialization failed during forwarding. This indicates an internal error on the broker cluster security setup.
SNAPSHOT_NOT_FOUND | 98 | False | Requested snapshot was not found.
POSITION_OUT_OF_RANGE | 99 | False | Requested position is not greater than or equal to zero, and less than the size of the snapshot.
UNKNOWN_TOPIC_ID | 100 | True | This server does not host this topic ID.
DUPLICATE_BROKER_REGISTRATION | 101 | False | This broker ID is already in use.
BROKER_ID_NOT_REGISTERED | 102 | False | The given broker ID was not registered.
INCONSISTENT_TOPIC_ID | 103 | True | The log's topic ID did not match the topic ID in the request.
INCONSISTENT_CLUSTER_ID | 104 | False | The clusterId in the request does not match that found on the server.
TRANSACTIONAL_ID_NOT_FOUND | 105 | False | The transactionalId could not be found.
FETCH_SESSION_TOPIC_ID_ERROR | 106 | True | The fetch session encountered inconsistent topic ID usage.
INELIGIBLE_REPLICA | 107 | False | The new ISR contains at least one ineligible replica.
NEW_LEADER_ELECTED | 108 | False | The AlterPartition request successfully updated the partition state but the leader has changed.
OFFSET_MOVED_TO_TIERED_STORAGE | 109 | False | The requested offset is moved to tiered storage.
FENCED_MEMBER_EPOCH | 110 | False | The member epoch is fenced by the group coordinator. The member must abandon all its partitions and rejoin.
UNRELEASED_INSTANCE_ID | 111 | False | The instance ID is still used by another member in the consumer group. That member must leave first.
UNSUPPORTED_ASSIGNOR | 112 | False | The assignor or its version range is not supported by the consumer group.
STALE_MEMBER_EPOCH | 113 | False | The member epoch is stale. The member must retry after receiving its updated member epoch via the ConsumerGroupHeartbeat API.
MISMATCHED_ENDPOINT_TYPE | 114 | False | The request was sent to an endpoint of the wrong type.
UNSUPPORTED_ENDPOINT_TYPE | 115 | False | This endpoint type is not supported yet.
UNKNOWN_CONTROLLER_ID | 116 | False | This controller ID is not known.
UNKNOWN_SUBSCRIPTION_ID | 117 | False | Client sent a push telemetry request with an invalid or outdated subscription ID.
TELEMETRY_TOO_LARGE | 118 | False | Client sent a push telemetry request larger than the maximum size the broker will accept.
INVALID_REGISTRATION | 119 | False | The controller has considered the broker registration to be invalid.
TRANSACTION_ABORTABLE | 120 | False | The server encountered an error with the transaction. The client can abort the transaction to continue using this transactional ID.
INVALID_RECORD_STATE | 121 | False | The record state is invalid. The acknowledgement of delivery could not be completed.
SHARE_SESSION_NOT_FOUND | 122 | True | The share session was not found.
INVALID_SHARE_SESSION_EPOCH | 123 | True | The share session epoch is invalid.
FENCED_STATE_EPOCH | 124 | False | The share coordinator rejected the request because the share-group state epoch did not match.
INVALID_VOTER_KEY | 125 | False | The voter key doesn't match the receiving replica's key.
DUPLICATE_VOTER | 126 | False | The voter is already part of the set of voters.
VOTER_NOT_FOUND | 127 | False | The voter is not part of the set of voters.
INVALID_REGULAR_EXPRESSION | 128 | False | The regular expression is not valid.
REBOOTSTRAP_REQUIRED | 129 | False | Client metadata is stale, client should rebootstrap to obtain new metadata.
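
For illustration, a minimal sketch of how a client might act on the Retriable column above. The class and method names, and the particular subset of codes handled, are chosen only for this example.

public final class ErrorHandling {
    // Decide whether a request may be retried, based on the error_code field
    // of a response and the Retriable column of the table above.
    static boolean isRetriable(short errorCode) {
        switch (errorCode) {
            case 6:   // NOT_LEADER_OR_FOLLOWER (retry after refreshing metadata)
            case 7:   // REQUEST_TIMED_OUT
            case 13:  // NETWORK_EXCEPTION
            case 14:  // COORDINATOR_LOAD_IN_PROGRESS
                return true;
            case 29:  // TOPIC_AUTHORIZATION_FAILED: retrying will not help
            case 35:  // UNSUPPORTED_VERSION: retrying will not help
                return false;
            default:
                return false;   // conservative default for codes not listed here
        }
    }
}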

Api Keys

The following are the numeric codes that the ApiKey in the request can take for each of the stable request types listed below.

Name | Key
Produce | 0
Fetch | 1
ListOffsets | 2
Metadata | 3
OffsetCommit | 8
OffsetFetch | 9
FindCoordinator | 10
JoinGroup | 11
Heartbeat | 12
LeaveGroup | 13
SyncGroup | 14
DescribeGroups | 15
ListGroups | 16
SaslHandshake | 17
ApiVersions | 18
CreateTopics | 19
DeleteTopics | 20
DeleteRecords | 21
InitProducerId | 22
OffsetForLeaderEpoch | 23
AddPartitionsToTxn | 24
AddOffsetsToTxn | 25
EndTxn | 26
WriteTxnMarkers | 27
TxnOffsetCommit | 28
DescribeAcls | 29
CreateAcls | 30
DeleteAcls | 31
DescribeConfigs | 32
AlterConfigs | 33
AlterReplicaLogDirs | 34
DescribeLogDirs | 35
SaslAuthenticate | 36
CreatePartitions | 37
CreateDelegationToken | 38
RenewDelegationToken | 39
ExpireDelegationToken | 40
DescribeDelegationToken | 41
DeleteGroups | 42
ElectLeaders | 43
IncrementalAlterConfigs | 44
AlterPartitionReassignments | 45
ListPartitionReassignments | 46
OffsetDelete | 47
DescribeClientQuotas | 48
AlterClientQuotas | 49
DescribeUserScramCredentials | 50
AlterUserScramCredentials | 51
DescribeQuorum | 55
UpdateFeatures | 57
DescribeCluster | 60
DescribeProducers | 61
UnregisterBroker | 64
DescribeTransactions | 65
ListTransactions | 66
ConsumerGroupHeartbeat | 68
ConsumerGroupDescribe | 69
GetTelemetrySubscriptions | 71
PushTelemetry | 72
ListClientMetricsResources | 74
DescribeTopicPartitions | 75
AddRaftVoter | 80
RemoveRaftVoter | 81

The Messages

This section gives details on each of the individual API Messages, their usage, their binary format, and the meaning of their fields.

The message consists of the header and body:

Message => RequestOrResponseHeader Body

RequestOrResponseHeader is the versioned request or response header. Body is the message-specific body.

Headers:
Request Header v1 => request_api_key request_api_version correlation_id client_id 
  request_api_key => INT16
  request_api_version => INT16
  correlation_id => INT32
  client_id => NULLABLE_STRING
FieldDescription
request_api_keyThe API key of this request.
request_api_versionThe API version of this request.
correlation_idThe correlation ID of this request.
client_idThe client ID string.
Request Header v2 => request_api_key request_api_version correlation_id client_id _tagged_fields 
  request_api_key => INT16
  request_api_version => INT16
  correlation_id => INT32
  client_id => NULLABLE_STRING
FieldDescription
request_api_keyThe API key of this request.
request_api_versionThe API version of this request.
correlation_idThe correlation ID of this request.
client_idThe client ID string.
_tagged_fieldsThe tagged fields
Response Header v0 => correlation_id 
  correlation_id => INT32
FieldDescription
correlation_idThe correlation ID of this response.
Response Header v1 => correlation_id _tagged_fields 
  correlation_id => INT32
FieldDescription
correlation_idThe correlation ID of this response.
_tagged_fieldsThe tagged fields
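
For illustration, a minimal sketch of serializing a v2 request header with an empty tagged-field section. The class and method names are invented for this example, and the resulting bytes would still need to be wrapped in the message_size frame described earlier.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public final class RequestHeaderV2 {
    // Serialize a v2 request header. Note that client_id is still a
    // (non-compact) NULLABLE_STRING in v2; only the tagged-field section is new.
    static ByteBuffer encode(short apiKey, short apiVersion, int correlationId, String clientId) {
        byte[] client = clientId == null ? null : clientId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + 4 + 2 + (client == null ? 0 : client.length) + 1);
        buf.putShort(apiKey);        // request_api_key     => INT16
        buf.putShort(apiVersion);    // request_api_version => INT16
        buf.putInt(correlationId);   // correlation_id      => INT32
        if (client == null) {
            buf.putShort((short) -1);            // null NULLABLE_STRING
        } else {
            buf.putShort((short) client.length); // length N, then N UTF-8 bytes
            buf.put(client);
        }
        buf.put((byte) 0);           // _tagged_fields: empty section (unsigned varint 0)
        buf.flip();
        return buf;
    }
}
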
Produce API (Key: 0):
Requests:
Produce Request (Version: 3) => transactional_id acks timeout_ms [topic_data] 
  transactional_id => NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] 
    name => STRING
    partition_data => index records 
      index => INT32
      records => RECORDS

Request header version: 1

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
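
The acks field above is what the acks setting of the official Java producer controls. For illustration, a minimal producer configuration; the bootstrap address and topic name are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public final class AcksExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // "all" is sent on the wire as acks = -1 (wait for the full ISR);
        // "1" waits for the leader only, "0" requests no acknowledgment.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
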
Produce Request (Version: 4) => transactional_id acks timeout_ms [topic_data] 
  transactional_id => NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] 
    name => STRING
    partition_data => index records 
      index => INT32
      records => RECORDS

Request header version: 1

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
Produce Request (Version: 5) => transactional_id acks timeout_ms [topic_data] 
  transactional_id => NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] 
    name => STRING
    partition_data => index records 
      index => INT32
      records => RECORDS

Request header version: 1

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
Produce Request (Version: 6) => transactional_id acks timeout_ms [topic_data] 
  transactional_id => NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] 
    name => STRING
    partition_data => index records 
      index => INT32
      records => RECORDS

Request header version: 1

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
Produce Request (Version: 7) => transactional_id acks timeout_ms [topic_data] 
  transactional_id => NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] 
    name => STRING
    partition_data => index records 
      index => INT32
      records => RECORDS

Request header version: 1

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
Produce Request (Version: 8) => transactional_id acks timeout_ms [topic_data] 
  transactional_id => NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] 
    name => STRING
    partition_data => index records 
      index => INT32
      records => RECORDS

Request header version: 1

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
Produce Request (Version: 9) => transactional_id acks timeout_ms [topic_data] _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] _tagged_fields 
    name => COMPACT_STRING
    partition_data => index records _tagged_fields 
      index => INT32
      records => COMPACT_RECORDS

Request header version: 2

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Produce Request (Version: 10) => transactional_id acks timeout_ms [topic_data] _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] _tagged_fields 
    name => COMPACT_STRING
    partition_data => index records _tagged_fields 
      index => INT32
      records => COMPACT_RECORDS

Request header version: 2

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Produce Request (Version: 11) => transactional_id acks timeout_ms [topic_data] _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] _tagged_fields 
    name => COMPACT_STRING
    partition_data => index records _tagged_fields 
      index => INT32
      records => COMPACT_RECORDS

Request header version: 2

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Produce Request (Version: 12) => transactional_id acks timeout_ms [topic_data] _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  acks => INT16
  timeout_ms => INT32
  topic_data => name [partition_data] _tagged_fields 
    name => COMPACT_STRING
    partition_data => index records _tagged_fields 
      index => INT32
      records => COMPACT_RECORDS

Request header version: 2

FieldDescription
transactional_idThe transactional ID, or null if the producer is not transactional.
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. Allowed values: 0 for no acknowledgments, 1 for only the leader and -1 for the full ISR.
timeout_msThe timeout to await a response in milliseconds.
topic_dataEach topic to produce to.
nameThe topic name.
partition_dataEach partition to produce to.
indexThe partition index.
recordsThe record data to be produced.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
Produce Response (Version: 3) => [responses] throttle_time_ms 
  responses => name [partition_responses] 
    name => STRING
    partition_responses => index error_code base_offset log_append_time_ms 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
Produce Response (Version: 4) => [responses] throttle_time_ms 
  responses => name [partition_responses] 
    name => STRING
    partition_responses => index error_code base_offset log_append_time_ms 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
Produce Response (Version: 5) => [responses] throttle_time_ms 
  responses => name [partition_responses] 
    name => STRING
    partition_responses => index error_code base_offset log_append_time_ms log_start_offset 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
      log_start_offset => INT64
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
log_start_offsetThe log start offset.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
Produce Response (Version: 6) => [responses] throttle_time_ms 
  responses => name [partition_responses] 
    name => STRING
    partition_responses => index error_code base_offset log_append_time_ms log_start_offset 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
      log_start_offset => INT64
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
log_start_offsetThe log start offset.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
Produce Response (Version: 7) => [responses] throttle_time_ms 
  responses => name [partition_responses] 
    name => STRING
    partition_responses => index error_code base_offset log_append_time_ms log_start_offset 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
      log_start_offset => INT64
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
log_start_offsetThe log start offset.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
Produce Response (Version: 8) => [responses] throttle_time_ms 
  responses => name [partition_responses] 
    name => STRING
    partition_responses => index error_code base_offset log_append_time_ms log_start_offset [record_errors] error_message 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
      log_start_offset => INT64
      record_errors => batch_index batch_index_error_message 
        batch_index => INT32
        batch_index_error_message => NULLABLE_STRING
      error_message => NULLABLE_STRING
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
log_start_offsetThe log start offset.
record_errorsThe batch indices of records that caused the batch to be dropped.
batch_indexThe batch index of the record that caused the batch to be dropped.
batch_index_error_messageThe error message of the record that caused the batch to be dropped.
error_messageThe global error message summarizing the common root cause of the records that caused the batch to be dropped.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
Produce Response (Version: 9) => [responses] throttle_time_ms _tagged_fields 
  responses => name [partition_responses] _tagged_fields 
    name => COMPACT_STRING
    partition_responses => index error_code base_offset log_append_time_ms log_start_offset [record_errors] error_message _tagged_fields 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
      log_start_offset => INT64
      record_errors => batch_index batch_index_error_message _tagged_fields 
        batch_index => INT32
        batch_index_error_message => COMPACT_NULLABLE_STRING
      error_message => COMPACT_NULLABLE_STRING
  throttle_time_ms => INT32

Response header version: 1

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
log_start_offsetThe log start offset.
record_errorsThe batch indices of records that caused the batch to be dropped.
batch_indexThe batch index of the record that caused the batch to be dropped.
batch_index_error_messageThe error message of the record that caused the batch to be dropped.
_tagged_fieldsThe tagged fields
error_messageThe global error message summarizing the common root cause of the records that caused the batch to be dropped.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
_tagged_fieldsThe tagged fields
Produce Response (Version: 10) => [responses] throttle_time_ms _tagged_fields 
  responses => name [partition_responses] _tagged_fields 
    name => COMPACT_STRING
    partition_responses => index error_code base_offset log_append_time_ms log_start_offset [record_errors] error_message _tagged_fields 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
      log_start_offset => INT64
      record_errors => batch_index batch_index_error_message _tagged_fields 
        batch_index => INT32
        batch_index_error_message => COMPACT_NULLABLE_STRING
      error_message => COMPACT_NULLABLE_STRING
  throttle_time_ms => INT32

Response header version: 1

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
log_start_offsetThe log start offset.
record_errorsThe batch indices of records that caused the batch to be dropped.
batch_indexThe batch index of the record that caused the batch to be dropped.
batch_index_error_messageThe error message of the record that caused the batch to be dropped.
_tagged_fieldsThe tagged fields
error_messageThe global error message summarizing the common root cause of the records that caused the batch to be dropped.
_tagged_fields
TagTagged fieldDescription
0current_leaderThe leader broker that the producer should use for future requests.
FieldDescription
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
_tagged_fields
TagTagged fieldDescription
0node_endpointsEndpoints for all current-leaders enumerated in PartitionProduceResponses, with errors NOT_LEADER_OR_FOLLOWER.
FieldDescription
node_idThe ID of the associated node.
hostThe node's hostname.
portThe node's port.
rackThe rack of the node, or null if it has not been assigned to a rack.
_tagged_fieldsThe tagged fields
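
For illustration, a hypothetical sketch of the bookkeeping a client could perform with the current_leader and node_endpoints tagged fields above when a partition returns NOT_LEADER_OR_FOLLOWER; all class, field, and method names are invented for this example.

import java.util.HashMap;
import java.util.Map;

public final class LeaderCache {
    record Endpoint(int nodeId, String host, int port) {}
    record Leader(int leaderId, int leaderEpoch) {}

    private final Map<String, Leader> leaderByPartition = new HashMap<>();
    private final Map<Integer, Endpoint> endpointByNodeId = new HashMap<>();

    // Only accept the new leader if its epoch is not older than what we already know.
    void onCurrentLeader(String topicPartition, int leaderId, int leaderEpoch) {
        leaderByPartition.merge(topicPartition, new Leader(leaderId, leaderEpoch),
                (old, fresh) -> fresh.leaderEpoch() >= old.leaderEpoch() ? fresh : old);
    }

    // Remember the endpoint advertised for a node so the next request can go there.
    void onNodeEndpoint(int nodeId, String host, int port) {
        endpointByNodeId.put(nodeId, new Endpoint(nodeId, host, port));
    }
}
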
Produce Response (Version: 11) => [responses] throttle_time_ms _tagged_fields 
  responses => name [partition_responses] _tagged_fields 
    name => COMPACT_STRING
    partition_responses => index error_code base_offset log_append_time_ms log_start_offset [record_errors] error_message _tagged_fields 
      index => INT32
      error_code => INT16
      base_offset => INT64
      log_append_time_ms => INT64
      log_start_offset => INT64
      record_errors => batch_index batch_index_error_message _tagged_fields 
        batch_index => INT32
        batch_index_error_message => COMPACT_NULLABLE_STRING
      error_message => COMPACT_NULLABLE_STRING
  throttle_time_ms => INT32

Response header version: 1

FieldDescription
responsesEach produce response.
nameThe topic name.
partition_responsesEach partition that we produced to within the topic.
indexThe partition index.
error_codeThe error code, or 0 if there was no error.
base_offsetThe base offset.
log_append_time_msThe timestamp returned by broker after appending the messages. If CreateTime is used for the topic, the timestamp will be -1. If LogAppendTime is used for the topic, the timestamp will be the broker local time when the messages are appended.
log_start_offsetThe log start offset.
record_errorsThe batch indices of records that caused the batch to be dropped.
batch_indexThe batch index of the record that caused the batch to be dropped.
batch_index_error_messageThe error message of the record that caused the batch to be dropped.
_tagged_fieldsThe tagged fields
error_messageThe global error message summarizing the common root cause of the records that caused the batch to be dropped.
_tagged_fields
TagTagged fieldDescription
0current_leaderThe leader broker that the producer should use for future requests.
FieldDescription
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
_tagged_fields
TagTagged fieldDescription
0node_endpointsEndpoints for all current-leaders enumerated in PartitionProduceResponses, with errors NOT_LEADER_OR_FOLLOWER.
FieldDescription
node_idThe ID of the associated node.
hostThe node's hostname.
portThe node's port.
rackThe rack of the node, or null if it has not been assigned to a rack.
_tagged_fieldsThe tagged fields
Fetch API (Key: 1):
Requests:
Fetch Request (Version: 4) => replica_id max_wait_ms min_bytes max_bytes isolation_level [topics] 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  topics => topic [partitions] 
    topic => STRING
    partitions => partition fetch_offset partition_max_bytes 
      partition => INT32
      fetch_offset => INT64
      partition_max_bytes => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
fetch_offsetThe message offset.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
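
The isolation_level field above corresponds to the isolation.level setting of the official Java consumer. For illustration, a minimal consumer configuration that requests READ_COMMITTED fetches; the bootstrap address, group id, and topic name are placeholders.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public final class ReadCommittedExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Maps to isolation_level = 1 in the Fetch request: only offsets below the
        // last stable offset (LSO) are returned, and aborted transactional records
        // are discarded by the client.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}
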
Fetch Request (Version: 5) => replica_id max_wait_ms min_bytes max_bytes isolation_level [topics] 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  topics => topic [partitions] 
    topic => STRING
    partitions => partition fetch_offset log_start_offset partition_max_bytes 
      partition => INT32
      fetch_offset => INT64
      log_start_offset => INT64
      partition_max_bytes => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
fetch_offsetThe message offset.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
Fetch Request (Version: 6) => replica_id max_wait_ms min_bytes max_bytes isolation_level [topics] 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  topics => topic [partitions] 
    topic => STRING
    partitions => partition fetch_offset log_start_offset partition_max_bytes 
      partition => INT32
      fetch_offset => INT64
      log_start_offset => INT64
      partition_max_bytes => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
fetch_offsetThe message offset.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
Fetch Request (Version: 7) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => partition fetch_offset log_start_offset partition_max_bytes 
      partition => INT32
      fetch_offset => INT64
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic [partitions] 
    topic => STRING
    partitions => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
fetch_offsetThe message offset.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topicThe topic name.
partitionsThe partition indexes to forget.
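
The session_id, session_epoch, and forgotten_topics_data fields above implement incremental fetch sessions (KIP-227). For illustration, a simplified and hypothetical sketch of the per-session state a client might track; all names are invented, and real clients handle epoch wrap-around and session closure as well.

public final class FetchSessionState {
    private int sessionId = 0;   // 0 means "no session established yet"
    private int epoch = 0;       // epoch 0 asks the broker to create a session;
                                 // -1 would disable sessions entirely

    // Return the epoch to send with the next Fetch request and advance it,
    // since each request in a session must carry the next epoch in order.
    int nextEpoch() {
        int toSend = epoch;
        epoch = epoch + 1;
        return toSend;
    }

    // The broker assigns the session_id in its response to the first request.
    void onResponse(int sessionIdFromBroker) {
        sessionId = sessionIdFromBroker;
    }

    int sessionId() {
        return sessionId;
    }
}
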
Fetch Request (Version: 8) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => partition fetch_offset log_start_offset partition_max_bytes 
      partition => INT32
      fetch_offset => INT64
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic [partitions] 
    topic => STRING
    partitions => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
fetch_offsetThe message offset.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topicThe topic name.
partitionsThe partition indexes to forget.
Fetch Request (Version: 9) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => partition current_leader_epoch fetch_offset log_start_offset partition_max_bytes 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic [partitions] 
    topic => STRING
    partitions => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topicThe topic name.
partitionsThe partition indexes to forget.
Fetch Request (Version: 10) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => partition current_leader_epoch fetch_offset log_start_offset partition_max_bytes 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic [partitions] 
    topic => STRING
    partitions => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topicThe topic name.
partitionsThe partition indexes to forget.
Fetch Request (Version: 11) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => partition current_leader_epoch fetch_offset log_start_offset partition_max_bytes 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic [partitions] 
    topic => STRING
    partitions => INT32
  rack_id => STRING

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topicThe topic name.
partitionsThe partition indexes to forget.
rack_idRack ID of the consumer making this request.
Fetch Request (Version: 12) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id _tagged_fields 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => partition current_leader_epoch fetch_offset last_fetched_epoch log_start_offset partition_max_bytes _tagged_fields 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      last_fetched_epoch => INT32
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => INT32
  rack_id => COMPACT_STRING

Request header version: 2

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topicThe name of the topic to fetch.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
last_fetched_epochThe epoch of the last fetched record or -1 if there is none.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topicThe topic name.
partitionsThe partition indexes to forget.
_tagged_fieldsThe tagged fields
rack_idRack ID of the consumer making this request.
_tagged_fields
TagTagged fieldDescription
0cluster_idThe clusterId if known. This is used to validate metadata fetches prior to broker registration.
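Version 12 is the first flexible version of this request: strings become COMPACT_STRING, arrays become compact arrays, and each structure gains a _tagged_fields section. The sketch below illustrates the two building blocks, an unsigned varint and a compact string; it is a standalone illustration, not the client's serializer:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompactEncodingSketch {

    // Unsigned varint: 7 data bits per byte, high bit set on all but the last byte.
    static void writeUnsignedVarint(int value, ByteBuffer out) {
        while ((value & 0xFFFFFF80) != 0) {
            out.put((byte) ((value & 0x7F) | 0x80));
            value >>>= 7;
        }
        out.put((byte) value);
    }

    // COMPACT_STRING: (length + 1) as an unsigned varint, then the UTF-8 bytes.
    static void writeCompactString(String s, ByteBuffer out) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        writeUnsignedVarint(bytes.length + 1, out);
        out.put(bytes);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        writeCompactString("demo-topic", buf); // e.g. the topic field of a v12 Fetch request
        writeUnsignedVarint(0, buf);           // an empty _tagged_fields section (zero tagged fields)
        buf.flip();
        System.out.println("Encoded " + buf.remaining() + " bytes");
    }
}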
Fetch Request (Version: 13) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id _tagged_fields 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition current_leader_epoch fetch_offset last_fetched_epoch log_start_offset partition_max_bytes _tagged_fields 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      last_fetched_epoch => INT32
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32
  rack_id => COMPACT_STRING

Request header version: 2

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topic_idThe unique topic ID.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
last_fetched_epochThe epoch of the last fetched record or -1 if there is none.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topic_idThe unique topic ID.
partitionsThe partition indexes to forget.
_tagged_fieldsThe tagged fields
rack_idRack ID of the consumer making this request.
_tagged_fields
TagTagged fieldDescription
0cluster_idThe clusterId if known. This is used to validate metadata fetches prior to broker registration.
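The session_id, session_epoch, and forgotten_topics_data fields implement incremental fetch sessions (KIP-227): a client starts with session_id 0 and epoch 0 (a full fetch), adopts the session_id the broker returns, and then sends only changed partitions while bumping the epoch, listing removed partitions under forgotten_topics_data. A minimal bookkeeping sketch of that life cycle (illustrative only; the real logic lives in the client's fetcher):

// A minimal bookkeeping sketch for incremental fetch sessions (KIP-227).
// Illustrative only; this is not the Kafka client's session handling code.
public class FetchSessionSketch {
    private int sessionId = 0;   // 0 means "no session yet": the next request is a full fetch
    private int sessionEpoch = 0;

    /** Values to place in the next Fetch request. */
    public int nextSessionId() { return sessionId; }
    public int nextSessionEpoch() { return sessionEpoch; }

    /** Call after a successful response: adopt the broker-assigned session and bump the epoch. */
    public void onResponse(int responseSessionId) {
        this.sessionId = responseSessionId;
        this.sessionEpoch++;        // subsequent requests are incremental
    }

    /** Call to abandon the session and force the next request to be a full fetch. */
    public void reset() {
        this.sessionId = 0;
        this.sessionEpoch = 0;
    }
}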
Fetch Request (Version: 14) => replica_id max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id _tagged_fields 
  replica_id => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition current_leader_epoch fetch_offset last_fetched_epoch log_start_offset partition_max_bytes _tagged_fields 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      last_fetched_epoch => INT32
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32
  rack_id => COMPACT_STRING

Request header version: 2

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topic_idThe unique topic ID.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
last_fetched_epochThe epoch of the last fetched record or -1 if there is none.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topic_idThe unique topic ID.
partitionsThe partition indexes to forget.
_tagged_fieldsThe tagged fields
rack_idRack ID of the consumer making this request.
_tagged_fields
TagTagged fieldDescription
0cluster_idThe clusterId if known. This is used to validate metadata fetches prior to broker registration.
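On the consumer, the isolation_level field is controlled by the isolation.level configuration. A minimal sketch, assuming a placeholder broker address:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ReadCommittedConsumerExample {
    public static KafkaConsumer<String, String> build() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "read-committed-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // "read_committed" causes the consumer to send isolation_level = 1 in its Fetch
        // requests, so only records below the last stable offset (LSO) are returned and
        // aborted transactional records are filtered out.
        props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");

        return new KafkaConsumer<>(props);
    }
}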
Fetch Request (Version: 15) => max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id _tagged_fields 
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition current_leader_epoch fetch_offset last_fetched_epoch log_start_offset partition_max_bytes _tagged_fields 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      last_fetched_epoch => INT32
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32
  rack_id => COMPACT_STRING

Request header version: 2

FieldDescription
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topic_idThe unique topic ID.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
last_fetched_epochThe epoch of the last fetched record or -1 if there is none.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topic_idThe unique topic ID.
partitionsThe partition indexes to forget.
_tagged_fieldsThe tagged fields
rack_idRack ID of the consumer making this request.
_tagged_fields
TagTagged fieldDescription
0cluster_idThe clusterId if known. This is used to validate metadata fetches prior to broker registration.
1replica_stateThe state of the replica in the follower.
FieldDescription
replica_idThe replica ID of the follower, or -1 if this request is from a consumer.
replica_epochThe epoch of this follower, or -1 if not available.
_tagged_fieldsThe tagged fields
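The "Request header version" noted under each schema refers to the common header that precedes every request on the wire. The sketch below frames a version 1 header (api_key, api_version, correlation_id, client_id) as used by the non-flexible request versions; flexible versions use header version 2, which additionally appends a tagged-field section. This is an illustration, not the client's serializer:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RequestHeaderSketch {

    // Writes a v1-style request header: api_key, api_version, correlation_id, client_id.
    static ByteBuffer fetchHeaderV1(short apiVersion, int correlationId, String clientId) {
        byte[] id = clientId.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(2 + 2 + 4 + 2 + id.length);
        buf.putShort((short) 1);         // api_key: 1 = Fetch
        buf.putShort(apiVersion);        // e.g. 11 for a Fetch v11 request
        buf.putInt(correlationId);       // echoed back in the response header
        buf.putShort((short) id.length); // NULLABLE_STRING length (-1 would mean null)
        buf.put(id);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer header = fetchHeaderV1((short) 11, 42, "demo-client");
        System.out.println("Header is " + header.remaining() + " bytes");
    }
}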
Fetch Request (Version: 16) => max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id _tagged_fields 
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition current_leader_epoch fetch_offset last_fetched_epoch log_start_offset partition_max_bytes _tagged_fields 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      last_fetched_epoch => INT32
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32
  rack_id => COMPACT_STRING

Request header version: 2

FieldDescription
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topic_idThe unique topic ID.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
last_fetched_epochThe epoch of the last fetched record or -1 if there is none.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topic_idThe unique topic ID.
partitionsThe partition indexes to forget.
_tagged_fieldsThe tagged fields
rack_idRack ID of the consumer making this request.
_tagged_fields
TagTagged fieldDescription
0cluster_idThe clusterId if known. This is used to validate metadata fetches prior to broker registration.
1replica_stateThe state of the replica in the follower.
FieldDescription
replica_idThe replica ID of the follower, or -1 if this request is from a consumer.
replica_epochThe epoch of this follower, or -1 if not available.
_tagged_fieldsThe tagged fields
Fetch Request (Version: 17) => max_wait_ms min_bytes max_bytes isolation_level session_id session_epoch [topics] [forgotten_topics_data] rack_id _tagged_fields 
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  isolation_level => INT8
  session_id => INT32
  session_epoch => INT32
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition current_leader_epoch fetch_offset last_fetched_epoch log_start_offset partition_max_bytes _tagged_fields 
      partition => INT32
      current_leader_epoch => INT32
      fetch_offset => INT64
      last_fetched_epoch => INT32
      log_start_offset => INT64
      partition_max_bytes => INT32
  forgotten_topics_data => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32
  rack_id => COMPACT_STRING

Request header version: 2

FieldDescription
max_wait_msThe maximum time in milliseconds to wait for the response.
min_bytesThe minimum bytes to accumulate in the response.
max_bytesThe maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
session_idThe fetch session ID.
session_epochThe fetch session epoch, which is used for ordering requests in a session.
topicsThe topics to fetch.
topic_idThe unique topic ID.
partitionsThe partitions to fetch.
partitionThe partition index.
current_leader_epochThe current leader epoch of the partition.
fetch_offsetThe message offset.
last_fetched_epochThe epoch of the last fetched record or -1 if there is none.
log_start_offsetThe earliest available offset of the follower replica. The field is only used when the request is sent by the follower.
partition_max_bytesThe maximum bytes to fetch from this partition. See KIP-74 for cases where this limit may not be honored.
_tagged_fields
TagTagged fieldDescription
0replica_directory_idThe directory id of the follower fetching.
_tagged_fieldsThe tagged fields
forgotten_topics_dataIn an incremental fetch request, the partitions to remove.
topic_idThe unique topic ID.
partitionsThe partition indexes to forget.
_tagged_fieldsThe tagged fields
rack_idRack ID of the consumer making this request.
_tagged_fields
TagTagged fieldDescription
0cluster_idThe clusterId if known. This is used to validate metadata fetches prior to broker registration.
1replica_stateThe state of the replica in the follower.
FieldDescription
replica_idThe replica ID of the follower, or -1 if this request is from a consumer.
replica_epochThe epoch of this follower, or -1 if not available.
_tagged_fieldsThe tagged fields
Responses:
Fetch Response (Version: 4) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset [aborted_transactions] records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
recordsThe record data.
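A READ_COMMITTED consumer combines last_stable_offset with the aborted_transactions list to hide aborted records. The sketch below is purely conceptual and simplified (the real client also tracks the abort control markers that end each aborted range); the types are illustrative placeholders, not client internals:

import java.util.List;

// Conceptual sketch only: combining last_stable_offset and aborted_transactions to decide
// record visibility under READ_COMMITTED. Not the Kafka client's internal logic.
public class AbortedTxnFilterSketch {

    record AbortedTxn(long producerId, long firstOffset) {}
    record FetchedRecord(long offset, long producerId, boolean isTransactional) {}

    static boolean isVisible(FetchedRecord r, long lastStableOffset, List<AbortedTxn> aborted) {
        if (r.offset() >= lastStableOffset) {
            return false; // still undecided: at or beyond the LSO
        }
        if (!r.isTransactional()) {
            return true;  // non-transactional records are always visible
        }
        // Simplified: hide the record if it falls inside an aborted range from the same
        // producer. The real client bounds each range with the abort control marker.
        return aborted.stream().noneMatch(
                a -> a.producerId() == r.producerId() && r.offset() >= a.firstOffset());
    }
}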
Fetch Response (Version: 5) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
recordsThe record data.
Fetch Response (Version: 6) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
recordsThe record data.
Fetch Response (Version: 7) => throttle_time_ms error_code session_id [responses] 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
recordsThe record data.
Fetch Response (Version: 8) => throttle_time_ms error_code session_id [responses] 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
recordsThe record data.
Fetch Response (Version: 9) => throttle_time_ms error_code session_id [responses] 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
recordsThe record data.
Fetch Response (Version: 10) => throttle_time_ms error_code session_id [responses] 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
recordsThe record data.
Fetch Response (Version: 11) => throttle_time_ms error_code session_id [responses] 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic [partitions] 
    topic => STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] preferred_read_replica records 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset 
        producer_id => INT64
        first_offset => INT64
      preferred_read_replica => INT32
      records => RECORDS

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
preferred_read_replicaThe preferred read replica for the consumer to use on its next fetch request.
recordsThe record data.
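preferred_read_replica, added in version 11 alongside the request's rack_id field, lets the leader steer the consumer toward a closer replica. A conceptual sketch of how a fetcher might act on it (the real consumer handles this internally; the names are illustrative):

// Conceptual sketch: acting on preferred_read_replica from a Fetch response.
public class PreferredReadReplicaSketch {

    /** Returns the broker id the next fetch for this partition should go to. */
    static int chooseFetchTarget(int leaderId, int preferredReadReplica) {
        // -1 means "no preference": keep fetching from the leader.
        return preferredReadReplica >= 0 ? preferredReadReplica : leaderId;
    }

    public static void main(String[] args) {
        System.out.println(chooseFetchTarget(1, -1)); // 1: stay on the leader
        System.out.println(chooseFetchTarget(1, 3));  // 3: move the next fetch to broker 3
    }
}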
Fetch Response (Version: 12) => throttle_time_ms error_code session_id [responses] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] preferred_read_replica records _tagged_fields 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset _tagged_fields 
        producer_id => INT64
        first_offset => INT64
      preferred_read_replica => INT32
      records => COMPACT_RECORDS

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topicThe topic name.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
_tagged_fieldsThe tagged fields
preferred_read_replicaThe preferred read replica for the consumer to use on its next fetch request.
recordsThe record data.
_tagged_fields
TagTagged fieldDescription
0diverging_epochIn case divergence is detected based on the `LastFetchedEpoch` and `FetchOffset` in the request, this field indicates the largest epoch and its end offset such that subsequent records are known to diverge.
FieldDescription
epochThe largest epoch.
end_offsetThe end offset of the epoch.
_tagged_fieldsThe tagged fields
1current_leaderThe current leader of the partition.
FieldDescription
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
_tagged_fieldsThe tagged fields
2snapshot_idIn the case of fetching an offset less than the LogStartOffset, this is the end offset and epoch that should be used in the FetchSnapshot request.
FieldDescription
end_offsetThe end offset of the epoch.
epochThe largest epoch.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
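The diverging_epoch tagged field lets a fetcher detect log divergence directly on the fetch path: when the response reports a diverging epoch, the follower truncates its log to the reported end offset before fetching again. A conceptual sketch under that assumption (illustrative placeholders, not the broker's replica fetcher):

// Conceptual sketch: reacting to the diverging_epoch tagged field in a Fetch response.
public class DivergingEpochSketch {

    record DivergingEpoch(int epoch, long endOffset) {}

    /** Returns the offset the local log should be truncated to, or -1 if no truncation is needed. */
    static long truncationOffset(long localLogEndOffset, DivergingEpoch diverging) {
        if (diverging == null) {
            return -1L; // no divergence reported
        }
        // The leader's log for that epoch ends at endOffset; anything held beyond it diverges.
        return Math.min(localLogEndOffset, diverging.endOffset());
    }
}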
Fetch Response (Version: 13) => throttle_time_ms error_code session_id [responses] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] preferred_read_replica records _tagged_fields 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset _tagged_fields 
        producer_id => INT64
        first_offset => INT64
      preferred_read_replica => INT32
      records => COMPACT_RECORDS

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topic_idThe unique topic ID.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
_tagged_fieldsThe tagged fields
preferred_read_replicaThe preferred read replica for the consumer to use on its next fetch request.
recordsThe record data.
_tagged_fields
TagTagged fieldDescription
0diverging_epochIn case divergence is detected based on the `LastFetchedEpoch` and `FetchOffset` in the request, this field indicates the largest epoch and its end offset such that subsequent records are known to diverge.
FieldDescription
epochThe largest epoch.
end_offsetThe end offset of the epoch.
_tagged_fieldsThe tagged fields
1current_leaderThe current leader of the partition.
FieldDescription
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
_tagged_fieldsThe tagged fields
2snapshot_idIn the case of fetching an offset less than the LogStartOffset, this is the end offset and epoch that should be used in the FetchSnapshot request.
FieldDescription
end_offsetThe end offset of the epoch.
epochThe largest epoch.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Fetch Response (Version: 14) => throttle_time_ms error_code session_id [responses] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] preferred_read_replica records _tagged_fields 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset _tagged_fields 
        producer_id => INT64
        first_offset => INT64
      preferred_read_replica => INT32
      records => COMPACT_RECORDS

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topic_idThe unique topic ID.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
_tagged_fieldsThe tagged fields
preferred_read_replicaThe preferred read replica for the consumer to use on its next fetch request.
recordsThe record data.
_tagged_fields
TagTagged fieldDescription
0diverging_epochIn case divergence is detected based on the `LastFetchedEpoch` and `FetchOffset` in the request, this field indicates the largest epoch and its end offset such that subsequent records are known to diverge.
FieldDescription
epochThe largest epoch.
end_offsetThe end offset of the epoch.
_tagged_fieldsThe tagged fields
1current_leaderThe current leader of the partition.
FieldDescription
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
_tagged_fieldsThe tagged fields
2snapshot_idIn the case of fetching an offset less than the LogStartOffset, this is the end offset and epoch that should be used in the FetchSnapshot request.
FieldDescription
end_offsetThe end offset of the epoch.
epochThe largest epoch.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Fetch Response (Version: 15) => throttle_time_ms error_code session_id [responses] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] preferred_read_replica records _tagged_fields 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset _tagged_fields 
        producer_id => INT64
        first_offset => INT64
      preferred_read_replica => INT32
      records => COMPACT_RECORDS

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topic_idThe unique topic ID.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
_tagged_fieldsThe tagged fields
preferred_read_replicaThe preferred read replica for the consumer to use on its next fetch request.
recordsThe record data.
_tagged_fields
TagTagged fieldDescription
0diverging_epochIn case divergence is detected based on the `LastFetchedEpoch` and `FetchOffset` in the request, this field indicates the largest epoch and its end offset such that subsequent records are known to diverge.
FieldDescription
epochThe largest epoch.
end_offsetThe end offset of the epoch.
_tagged_fieldsThe tagged fields
1current_leaderThe current leader of the partition.
FieldDescription
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
_tagged_fieldsThe tagged fields
2snapshot_idIn the case of fetching an offset less than the LogStartOffset, this is the end offset and epoch that should be used in the FetchSnapshot request.
FieldDescription
end_offsetThe end offset of the epoch.
epochThe largest epoch.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Fetch Response (Version: 16) => throttle_time_ms error_code session_id [responses] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  session_id => INT32
  responses => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition_index error_code high_watermark last_stable_offset log_start_offset [aborted_transactions] preferred_read_replica records _tagged_fields 
      partition_index => INT32
      error_code => INT16
      high_watermark => INT64
      last_stable_offset => INT64
      log_start_offset => INT64
      aborted_transactions => producer_id first_offset _tagged_fields 
        producer_id => INT64
        first_offset => INT64
      preferred_read_replica => INT32
      records => COMPACT_RECORDS

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
session_idThe fetch session ID, or 0 if this is not part of a fetch session.
responsesThe response topics.
topic_idThe unique topic ID.
partitionsThe topic partitions.
partition_indexThe partition index.
error_codeThe error code, or 0 if there was no fetch error.
high_watermarkThe current high water mark.
last_stable_offsetThe last stable offset (or LSO) of the partition. This is the last offset such that the state of all transactional records prior to this offset has been decided (ABORTED or COMMITTED).
log_start_offsetThe current log start offset.
aborted_transactionsThe aborted transactions.
producer_idThe producer id associated with the aborted transaction.
first_offsetThe first offset in the aborted transaction.
_tagged_fieldsThe tagged fields
preferred_read_replicaThe preferred read replica for the consumer to use on its next fetch request.
recordsThe record data.
_tagged_fields
TagTagged fieldDescription
0diverging_epochIn case divergence is detected based on the `LastFetchedEpoch` and `FetchOffset` in the request, this field indicates the largest epoch and its end offset such that subsequent records are known to diverge.
FieldDescription
epochThe largest epoch.
end_offsetThe end offset of the epoch.
_tagged_fieldsThe tagged fields
1current_leaderThe current leader of the partition.
FieldDescription
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
_tagged_fieldsThe tagged fields
2snapshot_idIn the case of fetching an offset less than the LogStartOffset, this is the end offset and epoch that should be used in the FetchSnapshot request.
FieldDescription
end_offsetThe end offset of the epoch.
epochThe largest epoch.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fields
TagTagged fieldDescription
0node_endpointsEndpoints for all current leaders enumerated in PartitionData, returned for partitions whose error is NOT_LEADER_OR_FOLLOWER or FENCED_LEADER_EPOCH.
FieldDescription
node_idThe ID of the associated node.
hostThe node's hostname.
portThe node's port.
rackThe rack of the node, or null if it has not been assigned to a rack.
_tagged_fieldsThe tagged fields
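Version 16 adds the node_endpoints tagged field together with the per-partition current_leader, so a client that sees NOT_LEADER_OR_FOLLOWER or FENCED_LEADER_EPOCH can redirect its next fetch without waiting for a separate Metadata round trip. A conceptual sketch with placeholder types:

import java.util.Map;
import java.util.Optional;

// Conceptual sketch: using current_leader and node_endpoints to refresh leadership after a
// partition-level leadership error. Types are illustrative placeholders, not client classes.
public class LeaderRedirectSketch {

    record Endpoint(int nodeId, String host, int port, String rack) {}
    record CurrentLeader(int leaderId, int leaderEpoch) {}

    static Optional<Endpoint> newFetchTarget(CurrentLeader leader, Map<Integer, Endpoint> nodeEndpoints) {
        if (leader == null || leader.leaderId() < 0) {
            return Optional.empty(); // leader unknown: fall back to a normal metadata refresh
        }
        return Optional.ofNullable(nodeEndpoints.get(leader.leaderId()));
    }
}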
ListOffsets API (Key: 2):
Requests:
ListOffsets Request (Version: 1) => replica_id [topics] 
  replica_id => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index timestamp 
      partition_index => INT32
      timestamp => INT64

Request header version: 1

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
timestampThe current timestamp.
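The timestamp field is a target timestamp; by convention the sentinel values -1 and -2 request the latest and earliest offset respectively. Client applications normally reach this API through the consumer, as in the sketch below (broker address and topic are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class ListOffsetsConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("demo-topic", 0); // assumed topic/partition

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Earliest and latest offsets (ListOffsets with the earliest/latest sentinels).
            Map<TopicPartition, Long> earliest = consumer.beginningOffsets(List.of(tp));
            Map<TopicPartition, Long> latest = consumer.endOffsets(List.of(tp));

            // Offset of the first record at or after a wall-clock timestamp.
            long oneHourAgo = System.currentTimeMillis() - Duration.ofHours(1).toMillis();
            Map<TopicPartition, OffsetAndTimestamp> byTime =
                    consumer.offsetsForTimes(Map.of(tp, oneHourAgo));

            System.out.println("earliest=" + earliest + " latest=" + latest + " byTime=" + byTime);
        }
    }
}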
ListOffsets Request (Version: 2) => replica_id isolation_level [topics] 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] 
    name => STRING
    partitions => partition_index timestamp 
      partition_index => INT32
      timestamp => INT64

Request header version: 1

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
timestampThe current timestamp.
ListOffsets Request (Version: 3) => replica_id isolation_level [topics] 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] 
    name => STRING
    partitions => partition_index timestamp 
      partition_index => INT32
      timestamp => INT64

Request header version: 1

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
timestampThe current timestamp.
ListOffsets Request (Version: 4) => replica_id isolation_level [topics] 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] 
    name => STRING
    partitions => partition_index current_leader_epoch timestamp 
      partition_index => INT32
      current_leader_epoch => INT32
      timestamp => INT64

Request header version: 1

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
current_leader_epochThe current leader epoch.
timestampThe current timestamp.
ListOffsets Request (Version: 5) => replica_id isolation_level [topics] 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] 
    name => STRING
    partitions => partition_index current_leader_epoch timestamp 
      partition_index => INT32
      current_leader_epoch => INT32
      timestamp => INT64

Request header version: 1

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
current_leader_epochThe current leader epoch.
timestampThe current timestamp.
ListOffsets Request (Version: 6) => replica_id isolation_level [topics] _tagged_fields 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index current_leader_epoch timestamp _tagged_fields 
      partition_index => INT32
      current_leader_epoch => INT32
      timestamp => INT64

Request header version: 2

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
current_leader_epochThe current leader epoch.
timestampThe current timestamp.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
ListOffsets Request (Version: 7) => replica_id isolation_level [topics] _tagged_fields 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index current_leader_epoch timestamp _tagged_fields 
      partition_index => INT32
      current_leader_epoch => INT32
      timestamp => INT64

Request header version: 2

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
current_leader_epochThe current leader epoch.
timestampThe current timestamp.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
ListOffsets Request (Version: 8) => replica_id isolation_level [topics] _tagged_fields 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index current_leader_epoch timestamp _tagged_fields 
      partition_index => INT32
      current_leader_epoch => INT32
      timestamp => INT64

Request header version: 2

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
current_leader_epochThe current leader epoch.
timestampThe current timestamp.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
ListOffsets Request (Version: 9) => replica_id isolation_level [topics] _tagged_fields 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index current_leader_epoch timestamp _tagged_fields 
      partition_index => INT32
      current_leader_epoch => INT32
      timestamp => INT64

Request header version: 2

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
current_leader_epochThe current leader epoch.
timestampThe current timestamp.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
ListOffsets Request (Version: 10) => replica_id isolation_level [topics] timeout_ms _tagged_fields 
  replica_id => INT32
  isolation_level => INT8
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index current_leader_epoch timestamp _tagged_fields 
      partition_index => INT32
      current_leader_epoch => INT32
      timestamp => INT64
  timeout_ms => INT32

Request header version: 2

FieldDescription
replica_idThe broker ID of the requester, or -1 if this request is being made by a normal consumer.
isolation_levelThis setting controls the visibility of transactional records. Using READ_UNCOMMITTED (isolation_level = 0) makes all records visible. With READ_COMMITTED (isolation_level = 1), non-transactional and COMMITTED transactional records are visible. To be more concrete, READ_COMMITTED returns all data from offsets smaller than the current LSO (last stable offset), and enables the inclusion of the list of aborted transactions in the result, which allows consumers to discard ABORTED transactional records.
topicsEach topic in the request.
nameThe topic name.
partitionsEach partition in the request.
partition_indexThe partition index.
current_leader_epochThe current leader epoch.
timestampThe current timestamp.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
timeout_msThe timeout to await a response in milliseconds for requests that require reading from remote storage for topics enabled with tiered storage.
_tagged_fieldsThe tagged fields
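The Admin client issues the same ListOffsets requests through OffsetSpec; the timeout_ms field added in version 10 bounds lookups that must read from remote storage on tiered-storage topics. A minimal sketch with placeholder broker address and topic:

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class ListOffsetsAdminExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        TopicPartition tp = new TopicPartition("demo-topic", 0); // assumed topic/partition

        try (Admin admin = Admin.create(props)) {
            // Earliest, latest, and timestamp-based lookups all map onto ListOffsets requests.
            ListOffsetsResult earliest = admin.listOffsets(Map.of(tp, OffsetSpec.earliest()));
            ListOffsetsResult latest = admin.listOffsets(Map.of(tp, OffsetSpec.latest()));
            ListOffsetsResult byTime = admin.listOffsets(
                    Map.of(tp, OffsetSpec.forTimestamp(System.currentTimeMillis() - 60_000)));

            System.out.println("earliest=" + earliest.partitionResult(tp).get().offset());
            System.out.println("latest=" + latest.partitionResult(tp).get().offset());
            System.out.println("byTime=" + byTime.partitionResult(tp).get().offset());
        }
    }
}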
Responses:
ListOffsets Response (Version: 1) => [topics] 
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code timestamp offset 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64

Response header version: 0

FieldDescription
topicsEach topic in the response.
nameThe topic name.
partitionsEach partition in the response.
partition_indexThe partition index.
error_codeThe partition error code, or 0 if there was no error.
timestampThe timestamp associated with the returned offset.
offsetThe returned offset.
ListOffsets Response (Version: 2) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code timestamp offset 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
ListOffsets Response (Version: 3) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code timestamp offset 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
ListOffsets Response (Version: 4) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code timestamp offset leader_epoch 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64
      leader_epoch => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
leader_epoch | The leader epoch associated with the returned offset.
ListOffsets Response (Version: 5) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code timestamp offset leader_epoch 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64
      leader_epoch => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
leader_epoch | The leader epoch associated with the returned offset.
ListOffsets Response (Version: 6) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index error_code timestamp offset leader_epoch _tagged_fields 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64
      leader_epoch => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
leader_epoch | The leader epoch associated with the returned offset.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
ListOffsets Response (Version: 7) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index error_code timestamp offset leader_epoch _tagged_fields 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64
      leader_epoch => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
leader_epoch | The leader epoch associated with the returned offset.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
ListOffsets Response (Version: 8) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index error_code timestamp offset leader_epoch _tagged_fields 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64
      leader_epoch => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
leader_epoch | The leader epoch associated with the returned offset.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
ListOffsets Response (Version: 9) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index error_code timestamp offset leader_epoch _tagged_fields 
      partition_index => INT32
      error_code => INT16
      timestamp => INT64
      offset => INT64
      leader_epoch => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | Each topic in the response.
name | The topic name.
partitions | Each partition in the response.
partition_index | The partition index.
error_code | The partition error code, or 0 if there was no error.
timestamp | The timestamp associated with the returned offset.
offset | The returned offset.
leader_epoch | The leader epoch associated with the returned offset.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
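Client applications rarely construct ListOffsets requests themselves; they reach this API through higher-level client calls. A hedged sketch using the Java Admin client (the broker address and topic name below are placeholders): Admin#listOffsets is served by ListOffsets requests, and the result surfaces the offset, timestamp, and leader_epoch fields from the responses above.

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class ListOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("example-topic", 0); // hypothetical topic
            // OffsetSpec.latest()/earliest()/forTimestamp(...) correspond to the special and
            // explicit values carried in the request's `timestamp` field.
            ListOffsetsResult result = admin.listOffsets(Map.of(tp, OffsetSpec.latest()));
            ListOffsetsResult.ListOffsetsResultInfo info = result.partitionResult(tp).get();
            System.out.printf("offset=%d timestamp=%d leaderEpoch=%s%n",
                    info.offset(), info.timestamp(), info.leaderEpoch());
        }
    }
}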
Metadata API (Key: 3):
Requests:
Metadata Request (Version: 0) => [topics] 
  topics => name 
    name => STRING

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
Metadata Request (Version: 1) => [topics] 
  topics => name 
    name => STRING

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
Metadata Request (Version: 2) => [topics] 
  topics => name 
    name => STRING

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
Metadata Request (Version: 3) => [topics] 
  topics => name 
    name => STRING

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
Metadata Request (Version: 4) => [topics] allow_auto_topic_creation 
  topics => name 
    name => STRING
  allow_auto_topic_creation => BOOLEAN

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
Metadata Request (Version: 5) => [topics] allow_auto_topic_creation 
  topics => name 
    name => STRING
  allow_auto_topic_creation => BOOLEAN

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
Metadata Request (Version: 6) => [topics] allow_auto_topic_creation 
  topics => name 
    name => STRING
  allow_auto_topic_creation => BOOLEAN

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
Metadata Request (Version: 7) => [topics] allow_auto_topic_creation 
  topics => name 
    name => STRING
  allow_auto_topic_creation => BOOLEAN

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
Metadata Request (Version: 8) => [topics] allow_auto_topic_creation include_cluster_authorized_operations include_topic_authorized_operations 
  topics => name 
    name => STRING
  allow_auto_topic_creation => BOOLEAN
  include_cluster_authorized_operations => BOOLEAN
  include_topic_authorized_operations => BOOLEAN

Request header version: 1

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
include_cluster_authorized_operations | Whether to include cluster authorized operations.
include_topic_authorized_operations | Whether to include topic authorized operations.
Metadata Request (Version: 9) => [topics] allow_auto_topic_creation include_cluster_authorized_operations include_topic_authorized_operations _tagged_fields 
  topics => name _tagged_fields 
    name => COMPACT_STRING
  allow_auto_topic_creation => BOOLEAN
  include_cluster_authorized_operations => BOOLEAN
  include_topic_authorized_operations => BOOLEAN

Request header version: 2

Field | Description
topics | The topics to fetch metadata for.
name | The topic name.
_tagged_fields | The tagged fields
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
include_cluster_authorized_operations | Whether to include cluster authorized operations.
include_topic_authorized_operations | Whether to include topic authorized operations.
_tagged_fields | The tagged fields
Metadata Request (Version: 10) => [topics] allow_auto_topic_creation include_cluster_authorized_operations include_topic_authorized_operations _tagged_fields 
  topics => topic_id name _tagged_fields 
    topic_id => UUID
    name => COMPACT_NULLABLE_STRING
  allow_auto_topic_creation => BOOLEAN
  include_cluster_authorized_operations => BOOLEAN
  include_topic_authorized_operations => BOOLEAN

Request header version: 2

Field | Description
topics | The topics to fetch metadata for.
topic_id | The topic id.
name | The topic name.
_tagged_fields | The tagged fields
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
include_cluster_authorized_operations | Whether to include cluster authorized operations.
include_topic_authorized_operations | Whether to include topic authorized operations.
_tagged_fields | The tagged fields
Metadata Request (Version: 11) => [topics] allow_auto_topic_creation include_topic_authorized_operations _tagged_fields 
  topics => topic_id name _tagged_fields 
    topic_id => UUID
    name => COMPACT_NULLABLE_STRING
  allow_auto_topic_creation => BOOLEAN
  include_topic_authorized_operations => BOOLEAN

Request header version: 2

Field | Description
topics | The topics to fetch metadata for.
topic_id | The topic id.
name | The topic name.
_tagged_fields | The tagged fields
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
include_topic_authorized_operations | Whether to include topic authorized operations.
_tagged_fields | The tagged fields
Metadata Request (Version: 12) => [topics] allow_auto_topic_creation include_topic_authorized_operations _tagged_fields 
  topics => topic_id name _tagged_fields 
    topic_id => UUID
    name => COMPACT_NULLABLE_STRING
  allow_auto_topic_creation => BOOLEAN
  include_topic_authorized_operations => BOOLEAN

Request header version: 2

Field | Description
topics | The topics to fetch metadata for.
topic_id | The topic id.
name | The topic name.
_tagged_fields | The tagged fields
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
include_topic_authorized_operations | Whether to include topic authorized operations.
_tagged_fields | The tagged fields
Metadata Request (Version: 13) => [topics] allow_auto_topic_creation include_topic_authorized_operations _tagged_fields 
  topics => topic_id name _tagged_fields 
    topic_id => UUID
    name => COMPACT_NULLABLE_STRING
  allow_auto_topic_creation => BOOLEAN
  include_topic_authorized_operations => BOOLEAN

Request header version: 2

Field | Description
topics | The topics to fetch metadata for.
topic_id | The topic id.
name | The topic name.
_tagged_fields | The tagged fields
allow_auto_topic_creation | If this is true, the broker may auto-create topics that we requested which do not already exist, if it is configured to do so.
include_topic_authorized_operations | Whether to include topic authorized operations.
_tagged_fields | The tagged fields
Responses:
Metadata Response (Version: 0) => [brokers] [topics] 
  brokers => node_id host port 
    node_id => INT32
    host => STRING
    port => INT32
  topics => error_code name [partitions] 
    error_code => INT16
    name => STRING
    partitions => error_code partition_index leader_id [replica_nodes] [isr_nodes] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      replica_nodes => INT32
      isr_nodes => INT32

Response header version: 0

Field | Description
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
Metadata Response (Version: 1) => [brokers] controller_id [topics] 
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id [replica_nodes] [isr_nodes] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      replica_nodes => INT32
      isr_nodes => INT32

Response header version: 0

Field | Description
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
Metadata Response (Version: 2) => [brokers] cluster_id controller_id [topics] 
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  cluster_id => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id [replica_nodes] [isr_nodes] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      replica_nodes => INT32
      isr_nodes => INT32

Response header version: 0

Field | Description
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
Metadata Response (Version: 3) => throttle_time_ms [brokers] cluster_id controller_id [topics] 
  throttle_time_ms => INT32
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  cluster_id => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id [replica_nodes] [isr_nodes] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      replica_nodes => INT32
      isr_nodes => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
Metadata Response (Version: 4) => throttle_time_ms [brokers] cluster_id controller_id [topics] 
  throttle_time_ms => INT32
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  cluster_id => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id [replica_nodes] [isr_nodes] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      replica_nodes => INT32
      isr_nodes => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
Metadata Response (Version: 5) => throttle_time_ms [brokers] cluster_id controller_id [topics] 
  throttle_time_ms => INT32
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  cluster_id => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id [replica_nodes] [isr_nodes] [offline_replicas] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
Metadata Response (Version: 6) => throttle_time_ms [brokers] cluster_id controller_id [topics] 
  throttle_time_ms => INT32
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  cluster_id => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id [replica_nodes] [isr_nodes] [offline_replicas] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
Metadata Response (Version: 7) => throttle_time_ms [brokers] cluster_id controller_id [topics] 
  throttle_time_ms => INT32
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  cluster_id => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id leader_epoch [replica_nodes] [isr_nodes] [offline_replicas] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      leader_epoch => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
leader_epoch | The leader epoch of this partition.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
Metadata Response (Version: 8) => throttle_time_ms [brokers] cluster_id controller_id [topics] cluster_authorized_operations 
  throttle_time_ms => INT32
  brokers => node_id host port rack 
    node_id => INT32
    host => STRING
    port => INT32
    rack => NULLABLE_STRING
  cluster_id => NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] topic_authorized_operations 
    error_code => INT16
    name => STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id leader_epoch [replica_nodes] [isr_nodes] [offline_replicas] 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      leader_epoch => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32
    topic_authorized_operations => INT32
  cluster_authorized_operations => INT32

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
leader_epoch | The leader epoch of this partition.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
topic_authorized_operations | 32-bit bitfield to represent authorized operations for this topic.
cluster_authorized_operations | 32-bit bitfield to represent authorized operations for this cluster.
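The topic_authorized_operations and cluster_authorized_operations fields added in version 8 are bitfields in which, as the Java client interprets them, each set bit position corresponds to an AclOperation code. A hedged sketch of decoding such a bitfield under that assumption (the class name and sample value are made up):

import java.util.EnumSet;
import java.util.Set;
import org.apache.kafka.common.acl.AclOperation;

public class AuthorizedOperationsDecoder {
    // Decode a topic_authorized_operations / cluster_authorized_operations bitfield,
    // assuming bit i is set when the operation with AclOperation code i is authorized.
    static Set<AclOperation> decode(int bitfield) {
        Set<AclOperation> ops = EnumSet.noneOf(AclOperation.class);
        for (AclOperation op : AclOperation.values()) {
            if (op != AclOperation.UNKNOWN && op != AclOperation.ANY
                    && (bitfield & (1 << op.code())) != 0) {
                ops.add(op);
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        // Made-up example value with the READ and DESCRIBE bits set.
        int example = (1 << AclOperation.READ.code()) | (1 << AclOperation.DESCRIBE.code());
        System.out.println(decode(example));
    }
}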
Metadata Response (Version: 9) => throttle_time_ms [brokers] cluster_id controller_id [topics] cluster_authorized_operations _tagged_fields 
  throttle_time_ms => INT32
  brokers => node_id host port rack _tagged_fields 
    node_id => INT32
    host => COMPACT_STRING
    port => INT32
    rack => COMPACT_NULLABLE_STRING
  cluster_id => COMPACT_NULLABLE_STRING
  controller_id => INT32
  topics => error_code name is_internal [partitions] topic_authorized_operations _tagged_fields 
    error_code => INT16
    name => COMPACT_STRING
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id leader_epoch [replica_nodes] [isr_nodes] [offline_replicas] _tagged_fields 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      leader_epoch => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32
    topic_authorized_operations => INT32
  cluster_authorized_operations => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
_tagged_fields | The tagged fields
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
leader_epoch | The leader epoch of this partition.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
_tagged_fields | The tagged fields
topic_authorized_operations | 32-bit bitfield to represent authorized operations for this topic.
_tagged_fields | The tagged fields
cluster_authorized_operations | 32-bit bitfield to represent authorized operations for this cluster.
_tagged_fields | The tagged fields
Metadata Response (Version: 10) => throttle_time_ms [brokers] cluster_id controller_id [topics] cluster_authorized_operations _tagged_fields 
  throttle_time_ms => INT32
  brokers => node_id host port rack _tagged_fields 
    node_id => INT32
    host => COMPACT_STRING
    port => INT32
    rack => COMPACT_NULLABLE_STRING
  cluster_id => COMPACT_NULLABLE_STRING
  controller_id => INT32
  topics => error_code name topic_id is_internal [partitions] topic_authorized_operations _tagged_fields 
    error_code => INT16
    name => COMPACT_STRING
    topic_id => UUID
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id leader_epoch [replica_nodes] [isr_nodes] [offline_replicas] _tagged_fields 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      leader_epoch => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32
    topic_authorized_operations => INT32
  cluster_authorized_operations => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
_tagged_fields | The tagged fields
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
topic_id | The topic id. Zero for non-existing topics queried by name. This is never zero when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
leader_epoch | The leader epoch of this partition.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
_tagged_fields | The tagged fields
topic_authorized_operations | 32-bit bitfield to represent authorized operations for this topic.
_tagged_fields | The tagged fields
cluster_authorized_operations | 32-bit bitfield to represent authorized operations for this cluster.
_tagged_fields | The tagged fields
Metadata Response (Version: 11) => throttle_time_ms [brokers] cluster_id controller_id [topics] _tagged_fields 
  throttle_time_ms => INT32
  brokers => node_id host port rack _tagged_fields 
    node_id => INT32
    host => COMPACT_STRING
    port => INT32
    rack => COMPACT_NULLABLE_STRING
  cluster_id => COMPACT_NULLABLE_STRING
  controller_id => INT32
  topics => error_code name topic_id is_internal [partitions] topic_authorized_operations _tagged_fields 
    error_code => INT16
    name => COMPACT_STRING
    topic_id => UUID
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id leader_epoch [replica_nodes] [isr_nodes] [offline_replicas] _tagged_fields 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      leader_epoch => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32
    topic_authorized_operations => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
_tagged_fields | The tagged fields
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
topic_id | The topic id. Zero for non-existing topics queried by name. This is never zero when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
leader_epoch | The leader epoch of this partition.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
_tagged_fields | The tagged fields
topic_authorized_operations | 32-bit bitfield to represent authorized operations for this topic.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Metadata Response (Version: 12) => throttle_time_ms [brokers] cluster_id controller_id [topics] _tagged_fields 
  throttle_time_ms => INT32
  brokers => node_id host port rack _tagged_fields 
    node_id => INT32
    host => COMPACT_STRING
    port => INT32
    rack => COMPACT_NULLABLE_STRING
  cluster_id => COMPACT_NULLABLE_STRING
  controller_id => INT32
  topics => error_code name topic_id is_internal [partitions] topic_authorized_operations _tagged_fields 
    error_code => INT16
    name => COMPACT_NULLABLE_STRING
    topic_id => UUID
    is_internal => BOOLEAN
    partitions => error_code partition_index leader_id leader_epoch [replica_nodes] [isr_nodes] [offline_replicas] _tagged_fields 
      error_code => INT16
      partition_index => INT32
      leader_id => INT32
      leader_epoch => INT32
      replica_nodes => INT32
      isr_nodes => INT32
      offline_replicas => INT32
    topic_authorized_operations => INT32

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
brokers | A list of brokers present in the cluster.
node_id | The broker ID.
host | The broker hostname.
port | The broker port.
rack | The rack of the broker, or null if it has not been assigned to a rack.
_tagged_fields | The tagged fields
cluster_id | The cluster ID that the responding broker belongs to.
controller_id | The ID of the controller broker.
topics | Each topic in the response.
error_code | The topic error, or 0 if there was no error.
name | The topic name. Null for non-existing topics queried by ID. This is never null when ErrorCode is zero. One of Name and TopicId is always populated.
topic_id | The topic id. Zero for non-existing topics queried by name. This is never zero when ErrorCode is zero. One of Name and TopicId is always populated.
is_internal | True if the topic is internal.
partitions | Each partition in the topic.
error_code | The partition error, or 0 if there was no error.
partition_index | The partition index.
leader_id | The ID of the leader broker.
leader_epoch | The leader epoch of this partition.
replica_nodes | The set of all nodes that host this partition.
isr_nodes | The set of nodes that are in sync with the leader for this partition.
offline_replicas | The set of offline replicas of this partition.
_tagged_fields | The tagged fields
topic_authorized_operations | 32-bit bitfield to represent authorized operations for this topic.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
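In practice, Metadata requests are issued automatically by the clients. A hedged illustration (broker address and topic name are placeholders): KafkaProducer#partitionsFor triggers a Metadata round trip and surfaces the per-partition leader, replica, ISR, and offline-replica fields shown in the responses above.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.serialization.StringSerializer;

public class MetadataExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // partitionsFor() blocks on a Metadata request/response round trip for this topic.
            for (PartitionInfo p : producer.partitionsFor("example-topic")) { // hypothetical topic
                System.out.printf("partition=%d leader=%s replicas=%d isr=%d offline=%d%n",
                        p.partition(), p.leader(), p.replicas().length,
                        p.inSyncReplicas().length, p.offlineReplicas().length);
            }
        }
    }
}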
OffsetCommit API (Key: 8):
Requests:
OffsetCommit Request (Version: 2) => group_id generation_id_or_member_epoch member_id retention_time_ms [topics] 
  group_id => STRING
  generation_id_or_member_epoch => INT32
  member_id => STRING
  retention_time_ms => INT64
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
retention_time_ms | The time period in ms to retain the offset.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_metadata | Any associated metadata the client wants to keep.
OffsetCommit Request (Version: 3) => group_id generation_id_or_member_epoch member_id retention_time_ms [topics] 
  group_id => STRING
  generation_id_or_member_epoch => INT32
  member_id => STRING
  retention_time_ms => INT64
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
retention_time_ms | The time period in ms to retain the offset.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_metadata | Any associated metadata the client wants to keep.
OffsetCommit Request (Version: 4) => group_id generation_id_or_member_epoch member_id retention_time_ms [topics] 
  group_id => STRING
  generation_id_or_member_epoch => INT32
  member_id => STRING
  retention_time_ms => INT64
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
retention_time_ms | The time period in ms to retain the offset.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_metadata | Any associated metadata the client wants to keep.
OffsetCommit Request (Version: 5) => group_id generation_id_or_member_epoch member_id [topics] 
  group_id => STRING
  generation_id_or_member_epoch => INT32
  member_id => STRING
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_metadata | Any associated metadata the client wants to keep.
OffsetCommit Request (Version: 6) => group_id generation_id_or_member_epoch member_id [topics] 
  group_id => STRING
  generation_id_or_member_epoch => INT32
  member_id => STRING
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_leader_epoch | The leader epoch of this partition.
committed_metadata | Any associated metadata the client wants to keep.
OffsetCommit Request (Version: 7) => group_id generation_id_or_member_epoch member_id group_instance_id [topics] 
  group_id => STRING
  generation_id_or_member_epoch => INT32
  member_id => STRING
  group_instance_id => NULLABLE_STRING
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
group_instance_id | The unique identifier of the consumer instance provided by the end user.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_leader_epoch | The leader epoch of this partition.
committed_metadata | Any associated metadata the client wants to keep.
OffsetCommit Request (Version: 8) => group_id generation_id_or_member_epoch member_id group_instance_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  generation_id_or_member_epoch => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata _tagged_fields 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => COMPACT_NULLABLE_STRING

Request header version: 2

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
group_instance_id | The unique identifier of the consumer instance provided by the end user.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_leader_epoch | The leader epoch of this partition.
committed_metadata | Any associated metadata the client wants to keep.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
OffsetCommit Request (Version: 9) => group_id generation_id_or_member_epoch member_id group_instance_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  generation_id_or_member_epoch => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata _tagged_fields 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => COMPACT_NULLABLE_STRING

Request header version: 2

Field | Description
group_id | The unique group identifier.
generation_id_or_member_epoch | The generation of the group if using the classic group protocol or the member epoch if using the consumer protocol.
member_id | The member ID assigned by the group coordinator.
group_instance_id | The unique identifier of the consumer instance provided by the end user.
topics | The topics to commit offsets for.
name | The topic name.
partitions | Each partition to commit offsets for.
partition_index | The partition index.
committed_offset | The message offset to be committed.
committed_leader_epoch | The leader epoch of this partition.
committed_metadata | Any associated metadata the client wants to keep.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
OffsetCommit Response (Version: 2) => [topics] 
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field | Description
topics | The responses for each topic.
name | The topic name.
partitions | The responses for each partition in the topic.
partition_index | The partition index.
error_code | The error code, or 0 if there was no error.
OffsetCommit Response (Version: 3) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | The responses for each topic.
name | The topic name.
partitions | The responses for each partition in the topic.
partition_index | The partition index.
error_code | The error code, or 0 if there was no error.
OffsetCommit Response (Version: 4) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | The responses for each topic.
name | The topic name.
partitions | The responses for each partition in the topic.
partition_index | The partition index.
error_code | The error code, or 0 if there was no error.
OffsetCommit Response (Version: 5) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | The responses for each topic.
name | The topic name.
partitions | The responses for each partition in the topic.
partition_index | The partition index.
error_code | The error code, or 0 if there was no error.
OffsetCommit Response (Version: 6) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | The responses for each topic.
name | The topic name.
partitions | The responses for each partition in the topic.
partition_index | The partition index.
error_code | The error code, or 0 if there was no error.
OffsetCommit Response (Version: 7) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | The responses for each topic.
name | The topic name.
partitions | The responses for each partition in the topic.
partition_index | The partition index.
error_code | The error code, or 0 if there was no error.
OffsetCommit Response (Version: 8) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index error_code _tagged_fields 
      partition_index => INT32
      error_code => INT16

Response header version: 1

Field | Description
throttle_time_ms | The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics | The responses for each topic.
name | The topic name.
partitions | The responses for each partition in the topic.
partition_index | The partition index.
error_code | The error code, or 0 if there was no error.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
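Offset commits are normally performed through the consumer API rather than by hand-building OffsetCommit requests. A hedged sketch (broker address, group id, topic, offset, and epoch values are placeholders): KafkaConsumer#commitSync maps onto the request above, with OffsetAndMetadata supplying committed_offset, committed_leader_epoch, and committed_metadata.

import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // hypothetical group
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // commit explicitly below
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("example-topic", 0); // hypothetical topic
            // offset -> committed_offset, leader epoch -> committed_leader_epoch,
            // metadata string -> committed_metadata in the OffsetCommit request.
            OffsetAndMetadata oam = new OffsetAndMetadata(42L, Optional.of(5), "checkpointed by example");
            consumer.commitSync(Map.of(tp, oam));
        }
    }
}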
OffsetFetch API (Key: 9):
Requests:
OffsetFetch Request (Version: 1) => group_id [topics] 
  group_id => STRING
  topics => name [partition_indexes] 
    name => STRING
    partition_indexes => INT32

Request header version: 1

Field | Description
group_id | The group to fetch offsets for.
topics | Each topic we would like to fetch offsets for, or null to fetch offsets for all topics.
name | The topic name.
partition_indexes | The partition indexes we would like to fetch offsets for.
OffsetFetch Request (Version: 2) => group_id [topics] 
  group_id => STRING
  topics => name [partition_indexes] 
    name => STRING
    partition_indexes => INT32

Request header version: 1

Field | Description
group_id | The group to fetch offsets for.
topics | Each topic we would like to fetch offsets for, or null to fetch offsets for all topics.
name | The topic name.
partition_indexes | The partition indexes we would like to fetch offsets for.
OffsetFetch Request (Version: 3) => group_id [topics] 
  group_id => STRING
  topics => name [partition_indexes] 
    name => STRING
    partition_indexes => INT32

Request header version: 1

Field | Description
group_id | The group to fetch offsets for.
topics | Each topic we would like to fetch offsets for, or null to fetch offsets for all topics.
name | The topic name.
partition_indexes | The partition indexes we would like to fetch offsets for.
OffsetFetch Request (Version: 4) => group_id [topics] 
  group_id => STRING
  topics => name [partition_indexes] 
    name => STRING
    partition_indexes => INT32

Request header version: 1

Field | Description
group_id | The group to fetch offsets for.
topics | Each topic we would like to fetch offsets for, or null to fetch offsets for all topics.
name | The topic name.
partition_indexes | The partition indexes we would like to fetch offsets for.
OffsetFetch Request (Version: 5) => group_id [topics] 
  group_id => STRING
  topics => name [partition_indexes] 
    name => STRING
    partition_indexes => INT32

Request header version: 1

Field | Description
group_id | The group to fetch offsets for.
topics | Each topic we would like to fetch offsets for, or null to fetch offsets for all topics.
name | The topic name.
partition_indexes | The partition indexes we would like to fetch offsets for.
OffsetFetch Request (Version: 6) => group_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  topics => name [partition_indexes] _tagged_fields 
    name => COMPACT_STRING
    partition_indexes => INT32

Request header version: 2

Field | Description
group_id | The group to fetch offsets for.
topics | Each topic we would like to fetch offsets for, or null to fetch offsets for all topics.
name | The topic name.
partition_indexes | The partition indexes we would like to fetch offsets for.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
OffsetFetch Request (Version: 7) => group_id [topics] require_stable _tagged_fields 
  group_id => COMPACT_STRING
  topics => name [partition_indexes] _tagged_fields 
    name => COMPACT_STRING
    partition_indexes => INT32
  require_stable => BOOLEAN

Request header version: 2

FieldDescription
group_idThe group to fetch offsets for.
topicsEach topic we would like to fetch offsets for, or null to fetch offsets for all topics.
nameThe topic name.
partition_indexesThe partition indexes we would like to fetch offsets for.
_tagged_fieldsThe tagged fields
require_stableWhether the broker should hold off on returning unstable offsets and instead set a retriable error code for those partitions.
_tagged_fieldsThe tagged fields
OffsetFetch Request (Version: 8) => [groups] require_stable _tagged_fields 
  groups => group_id [topics] _tagged_fields 
    group_id => COMPACT_STRING
    topics => name [partition_indexes] _tagged_fields 
      name => COMPACT_STRING
      partition_indexes => INT32
  require_stable => BOOLEAN

Request header version: 2

FieldDescription
groupsEach group we would like to fetch offsets for.
group_idThe group ID.
topicsEach topic we would like to fetch offsets for, or null to fetch offsets for all topics.
nameThe topic name.
partition_indexesThe partition indexes we would like to fetch offsets for.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
require_stableWhether the broker should hold off on returning unstable offsets and instead set a retriable error code for those partitions.
_tagged_fieldsThe tagged fields
OffsetFetch Request (Version: 9) => [groups] require_stable _tagged_fields 
  groups => group_id member_id member_epoch [topics] _tagged_fields 
    group_id => COMPACT_STRING
    member_id => COMPACT_NULLABLE_STRING
    member_epoch => INT32
    topics => name [partition_indexes] _tagged_fields 
      name => COMPACT_STRING
      partition_indexes => INT32
  require_stable => BOOLEAN

Request header version: 2

FieldDescription
groupsEach group we would like to fetch offsets for.
group_idThe group ID.
member_idThe member id.
member_epochThe member epoch if using the new consumer protocol (KIP-848).
topicsEach topic we would like to fetch offsets for, or null to fetch offsets for all topics.
nameThe topic name.
partition_indexesThe partition indexes we would like to fetch offsets for.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
require_stableWhether the broker should hold off on returning unstable offsets and instead set a retriable error code for those partitions.
_tagged_fieldsThe tagged fields
Responses:
OffsetFetch Response (Version: 1) => [topics] 
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset metadata error_code 
      partition_index => INT32
      committed_offset => INT64
      metadata => NULLABLE_STRING
      error_code => INT16

Response header version: 0

FieldDescription
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
metadataThe partition metadata.
error_codeThe error code, or 0 if there was no error.
OffsetFetch Response (Version: 2) => [topics] error_code 
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset metadata error_code 
      partition_index => INT32
      committed_offset => INT64
      metadata => NULLABLE_STRING
      error_code => INT16
  error_code => INT16

Response header version: 0

FieldDescription
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
metadataThe partition metadata.
error_codeThe error code, or 0 if there was no error.
error_codeThe top-level error code, or 0 if there was no error.
OffsetFetch Response (Version: 3) => throttle_time_ms [topics] error_code 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset metadata error_code 
      partition_index => INT32
      committed_offset => INT64
      metadata => NULLABLE_STRING
      error_code => INT16
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
metadataThe partition metadata.
error_codeThe error code, or 0 if there was no error.
error_codeThe top-level error code, or 0 if there was no error.
OffsetFetch Response (Version: 4) => throttle_time_ms [topics] error_code 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset metadata error_code 
      partition_index => INT32
      committed_offset => INT64
      metadata => NULLABLE_STRING
      error_code => INT16
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
metadataThe partition metadata.
error_codeThe error code, or 0 if there was no error.
error_codeThe top-level error code, or 0 if there was no error.
OffsetFetch Response (Version: 5) => throttle_time_ms [topics] error_code 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_leader_epoch metadata error_code 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      metadata => NULLABLE_STRING
      error_code => INT16
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
committed_leader_epochThe leader epoch.
metadataThe partition metadata.
error_codeThe error code, or 0 if there was no error.
error_codeThe top-level error code, or 0 if there was no error.
OffsetFetch Response (Version: 6) => throttle_time_ms [topics] error_code _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index committed_offset committed_leader_epoch metadata error_code _tagged_fields 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      metadata => COMPACT_NULLABLE_STRING
      error_code => INT16
  error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
committed_leader_epochThe leader epoch.
metadataThe partition metadata.
error_codeThe error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
error_codeThe top-level error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
OffsetFetch Response (Version: 7) => throttle_time_ms [topics] error_code _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index committed_offset committed_leader_epoch metadata error_code _tagged_fields 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      metadata => COMPACT_NULLABLE_STRING
      error_code => INT16
  error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
committed_leader_epochThe leader epoch.
metadataThe partition metadata.
error_codeThe error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
error_codeThe top-level error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
OffsetFetch Response (Version: 8) => throttle_time_ms [groups] _tagged_fields 
  throttle_time_ms => INT32
  groups => group_id [topics] error_code _tagged_fields 
    group_id => COMPACT_STRING
    topics => name [partitions] _tagged_fields 
      name => COMPACT_STRING
      partitions => partition_index committed_offset committed_leader_epoch metadata error_code _tagged_fields 
        partition_index => INT32
        committed_offset => INT64
        committed_leader_epoch => INT32
        metadata => COMPACT_NULLABLE_STRING
        error_code => INT16
    error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
groupsThe responses per group id.
group_idThe group ID.
topicsThe responses per topic.
nameThe topic name.
partitionsThe responses per partition.
partition_indexThe partition index.
committed_offsetThe committed message offset.
committed_leader_epochThe leader epoch.
metadataThe partition metadata.
error_codeThe partition-level error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
error_codeThe group-level error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
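For illustration only (not part of the generated reference): the OffsetFetch API is what the Java AdminClient issues under the covers when listing a group's committed offsets. A minimal sketch, assuming a broker on localhost:9092 and a consumer group named "my-group" (both placeholder values):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetFetchExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; adjust for your environment.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Fetches the committed offsets for every partition the group has committed to;
            // the client implements this with an OffsetFetch request to the group coordinator.
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("my-group")
                     .partitionsToOffsetAndMetadata()
                     .get();
            offsets.forEach((tp, om) ->
                System.out.printf("%s -> offset=%d metadata=%s%n", tp, om.offset(), om.metadata()));
        }
    }
}

Fetching without an explicit partition filter, as above, corresponds to the null topics case in the request schemas (offsets for all topics the group has committed to).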
FindCoordinator API (Key: 10):
Requests:
FindCoordinator Request (Version: 0) => key 
  key => STRING

Request header version: 1

FieldDescription
keyThe coordinator key.
FindCoordinator Request (Version: 1) => key key_type 
  key => STRING
  key_type => INT8

Request header version: 1

FieldDescription
keyThe coordinator key.
key_typeThe coordinator key type (group, transaction, or share).
FindCoordinator Request (Version: 2) => key key_type 
  key => STRING
  key_type => INT8

Request header version: 1

FieldDescription
keyThe coordinator key.
key_typeThe coordinator key type (group, transaction, or share).
FindCoordinator Request (Version: 3) => key key_type _tagged_fields 
  key => COMPACT_STRING
  key_type => INT8

Request header version: 2

FieldDescription
keyThe coordinator key.
key_typeThe coordinator key type (group, transaction, or share).
_tagged_fieldsThe tagged fields
FindCoordinator Request (Version: 4) => key_type [coordinator_keys] _tagged_fields 
  key_type => INT8
  coordinator_keys => COMPACT_STRING

Request header version: 2

FieldDescription
key_typeThe coordinator key type (group, transaction, or share).
coordinator_keysThe coordinator keys.
_tagged_fieldsThe tagged fields
FindCoordinator Request (Version: 5) => key_type [coordinator_keys] _tagged_fields 
  key_type => INT8
  coordinator_keys => COMPACT_STRING

Request header version: 2

FieldDescription
key_typeThe coordinator key type (group, transaction, or share).
coordinator_keysThe coordinator keys.
_tagged_fieldsThe tagged fields
FindCoordinator Request (Version: 6) => key_type [coordinator_keys] _tagged_fields 
  key_type => INT8
  coordinator_keys => COMPACT_STRING

Request header version: 2

FieldDescription
key_typeThe coordinator key type (group, transaction, or share).
coordinator_keysThe coordinator keys.
_tagged_fieldsThe tagged fields
Responses:
FindCoordinator Response (Version: 0) => error_code node_id host port 
  error_code => INT16
  node_id => INT32
  host => STRING
  port => INT32

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
node_idThe node id.
hostThe host name.
portThe port.
FindCoordinator Response (Version: 1) => throttle_time_ms error_code error_message node_id host port 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => NULLABLE_STRING
  node_id => INT32
  host => STRING
  port => INT32

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
node_idThe node id.
hostThe host name.
portThe port.
FindCoordinator Response (Version: 2) => throttle_time_ms error_code error_message node_id host port 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => NULLABLE_STRING
  node_id => INT32
  host => STRING
  port => INT32

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
node_idThe node id.
hostThe host name.
portThe port.
FindCoordinator Response (Version: 3) => throttle_time_ms error_code error_message node_id host port _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => COMPACT_NULLABLE_STRING
  node_id => INT32
  host => COMPACT_STRING
  port => INT32

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
node_idThe node id.
hostThe host name.
portThe port.
_tagged_fieldsThe tagged fields
FindCoordinator Response (Version: 4) => throttle_time_ms [coordinators] _tagged_fields 
  throttle_time_ms => INT32
  coordinators => key node_id host port error_code error_message _tagged_fields 
    key => COMPACT_STRING
    node_id => INT32
    host => COMPACT_STRING
    port => INT32
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
coordinatorsEach coordinator result in the response.
keyThe coordinator key.
node_idThe node id.
hostThe host name.
portThe port.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
FindCoordinator Response (Version: 5) => throttle_time_ms [coordinators] _tagged_fields 
  throttle_time_ms => INT32
  coordinators => key node_id host port error_code error_message _tagged_fields 
    key => COMPACT_STRING
    node_id => INT32
    host => COMPACT_STRING
    port => INT32
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
coordinatorsEach coordinator result in the response.
keyThe coordinator key.
node_idThe node id.
hostThe host name.
portThe port.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
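Clients normally issue FindCoordinator internally before sending any group or transaction traffic, so applications rarely deal with it directly. As a rough illustration of where the resolved coordinator surfaces in the Java Admin API (placeholder broker address and group name; not a direct mapping of this RPC):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.common.Node;

public class CoordinatorLookupExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            ConsumerGroupDescription description = admin
                .describeConsumerGroups(Collections.singletonList("my-group"))
                .all().get()
                .get("my-group");
            // The group coordinator the client resolved for this group id
            // (key_type "group" in FindCoordinator terms).
            Node coordinator = description.coordinator();
            System.out.printf("coordinator: node_id=%d host=%s port=%d%n",
                coordinator.id(), coordinator.host(), coordinator.port());
        }
    }
}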
JoinGroup API (Key: 11):
Requests:
JoinGroup Request (Version: 2) => group_id session_timeout_ms rebalance_timeout_ms member_id protocol_type [protocols] 
  group_id => STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => STRING
  protocol_type => STRING
  protocols => name metadata 
    name => STRING
    metadata => BYTES

Request header version: 1

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
JoinGroup Request (Version: 3) => group_id session_timeout_ms rebalance_timeout_ms member_id protocol_type [protocols] 
  group_id => STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => STRING
  protocol_type => STRING
  protocols => name metadata 
    name => STRING
    metadata => BYTES

Request header version: 1

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
JoinGroup Request (Version: 4) => group_id session_timeout_ms rebalance_timeout_ms member_id protocol_type [protocols] 
  group_id => STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => STRING
  protocol_type => STRING
  protocols => name metadata 
    name => STRING
    metadata => BYTES

Request header version: 1

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
JoinGroup Request (Version: 5) => group_id session_timeout_ms rebalance_timeout_ms member_id group_instance_id protocol_type [protocols] 
  group_id => STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => STRING
  group_instance_id => NULLABLE_STRING
  protocol_type => STRING
  protocols => name metadata 
    name => STRING
    metadata => BYTES

Request header version: 1

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
JoinGroup Request (Version: 6) => group_id session_timeout_ms rebalance_timeout_ms member_id group_instance_id protocol_type [protocols] _tagged_fields 
  group_id => COMPACT_STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  protocol_type => COMPACT_STRING
  protocols => name metadata _tagged_fields 
    name => COMPACT_STRING
    metadata => COMPACT_BYTES

Request header version: 2

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
JoinGroup Request (Version: 7) => group_id session_timeout_ms rebalance_timeout_ms member_id group_instance_id protocol_type [protocols] _tagged_fields 
  group_id => COMPACT_STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  protocol_type => COMPACT_STRING
  protocols => name metadata _tagged_fields 
    name => COMPACT_STRING
    metadata => COMPACT_BYTES

Request header version: 2

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
JoinGroup Request (Version: 8) => group_id session_timeout_ms rebalance_timeout_ms member_id group_instance_id protocol_type [protocols] reason _tagged_fields 
  group_id => COMPACT_STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  protocol_type => COMPACT_STRING
  protocols => name metadata _tagged_fields 
    name => COMPACT_STRING
    metadata => COMPACT_BYTES
  reason => COMPACT_NULLABLE_STRING

Request header version: 2

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
_tagged_fieldsThe tagged fields
reasonThe reason why the member (re-)joins the group.
_tagged_fieldsThe tagged fields
JoinGroup Request (Version: 9) => group_id session_timeout_ms rebalance_timeout_ms member_id group_instance_id protocol_type [protocols] reason _tagged_fields 
  group_id => COMPACT_STRING
  session_timeout_ms => INT32
  rebalance_timeout_ms => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  protocol_type => COMPACT_STRING
  protocols => name metadata _tagged_fields 
    name => COMPACT_STRING
    metadata => COMPACT_BYTES
  reason => COMPACT_NULLABLE_STRING

Request header version: 2

FieldDescription
group_idThe group identifier.
session_timeout_msThe coordinator considers the consumer dead if it receives no heartbeat after this timeout in milliseconds.
rebalance_timeout_msThe maximum time in milliseconds that the coordinator will wait for each member to rejoin when rebalancing the group.
member_idThe member id assigned by the group coordinator.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
protocol_typeThe unique name for the class of protocols implemented by the group we want to join.
protocolsThe list of protocols that the member supports.
nameThe protocol name.
metadataThe protocol metadata.
_tagged_fieldsThe tagged fields
reasonThe reason why the member (re-)joins the group.
_tagged_fieldsThe tagged fields
Responses:
JoinGroup Response (Version: 2) => throttle_time_ms error_code generation_id protocol_name leader member_id [members] 
  throttle_time_ms => INT32
  error_code => INT16
  generation_id => INT32
  protocol_name => STRING
  leader => STRING
  member_id => STRING
  members => member_id metadata 
    member_id => STRING
    metadata => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
generation_idThe generation ID of the group.
protocol_nameThe group protocol selected by the coordinator.
leaderThe leader of the group.
member_idThe member ID assigned by the group coordinator.
membersThe group members.
member_idThe group member ID.
metadataThe group member metadata.
JoinGroup Response (Version: 3) => throttle_time_ms error_code generation_id protocol_name leader member_id [members] 
  throttle_time_ms => INT32
  error_code => INT16
  generation_id => INT32
  protocol_name => STRING
  leader => STRING
  member_id => STRING
  members => member_id metadata 
    member_id => STRING
    metadata => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
generation_idThe generation ID of the group.
protocol_nameThe group protocol selected by the coordinator.
leaderThe leader of the group.
member_idThe member ID assigned by the group coordinator.
membersThe group members.
member_idThe group member ID.
metadataThe group member metadata.
JoinGroup Response (Version: 4) => throttle_time_ms error_code generation_id protocol_name leader member_id [members] 
  throttle_time_ms => INT32
  error_code => INT16
  generation_id => INT32
  protocol_name => STRING
  leader => STRING
  member_id => STRING
  members => member_id metadata 
    member_id => STRING
    metadata => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
generation_idThe generation ID of the group.
protocol_nameThe group protocol selected by the coordinator.
leaderThe leader of the group.
member_idThe member ID assigned by the group coordinator.
membersThe group members.
member_idThe group member ID.
metadataThe group member metadata.
JoinGroup Response (Version: 5) => throttle_time_ms error_code generation_id protocol_name leader member_id [members] 
  throttle_time_ms => INT32
  error_code => INT16
  generation_id => INT32
  protocol_name => STRING
  leader => STRING
  member_id => STRING
  members => member_id group_instance_id metadata 
    member_id => STRING
    group_instance_id => NULLABLE_STRING
    metadata => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
generation_idThe generation ID of the group.
protocol_nameThe group protocol selected by the coordinator.
leaderThe leader of the group.
member_idThe member ID assigned by the group coordinator.
membersThe group members.
member_idThe group member ID.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
metadataThe group member metadata.
JoinGroup Response (Version: 6) => throttle_time_ms error_code generation_id protocol_name leader member_id [members] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  generation_id => INT32
  protocol_name => COMPACT_STRING
  leader => COMPACT_STRING
  member_id => COMPACT_STRING
  members => member_id group_instance_id metadata _tagged_fields 
    member_id => COMPACT_STRING
    group_instance_id => COMPACT_NULLABLE_STRING
    metadata => COMPACT_BYTES

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
generation_idThe generation ID of the group.
protocol_nameThe group protocol selected by the coordinator.
leaderThe leader of the group.
member_idThe member ID assigned by the group coordinator.
membersThe group members.
member_idThe group member ID.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
metadataThe group member metadata.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
JoinGroup Response (Version: 7) => throttle_time_ms error_code generation_id protocol_type protocol_name leader member_id [members] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  generation_id => INT32
  protocol_type => COMPACT_NULLABLE_STRING
  protocol_name => COMPACT_NULLABLE_STRING
  leader => COMPACT_STRING
  member_id => COMPACT_STRING
  members => member_id group_instance_id metadata _tagged_fields 
    member_id => COMPACT_STRING
    group_instance_id => COMPACT_NULLABLE_STRING
    metadata => COMPACT_BYTES

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
generation_idThe generation ID of the group.
protocol_typeThe group protocol type.
protocol_nameThe group protocol selected by the coordinator.
leaderThe leader of the group.
member_idThe member ID assigned by the group coordinator.
membersThe group members.
member_idThe group member ID.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
metadataThe group member metadata.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
JoinGroup Response (Version: 8) => throttle_time_ms error_code generation_id protocol_type protocol_name leader member_id [members] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  generation_id => INT32
  protocol_type => COMPACT_NULLABLE_STRING
  protocol_name => COMPACT_NULLABLE_STRING
  leader => COMPACT_STRING
  member_id => COMPACT_STRING
  members => member_id group_instance_id metadata _tagged_fields 
    member_id => COMPACT_STRING
    group_instance_id => COMPACT_NULLABLE_STRING
    metadata => COMPACT_BYTES

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
generation_idThe generation ID of the group.
protocol_typeThe group protocol type.
protocol_nameThe group protocol selected by the coordinator.
leaderThe leader of the group.
member_idThe member ID assigned by the group coordinator.
membersThe group members.
member_idThe group member ID.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
metadataThe group member metadata.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
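Applications do not send JoinGroup themselves; under the classic group protocol the Java consumer does so when it subscribes. The sketch below shows how common consumer configs map onto the request fields above: session.timeout.ms is carried as session_timeout_ms, max.poll.interval.ms is sent as rebalance_timeout_ms, and group.instance.id fills group_instance_id for static membership. Broker address, topic, and group name are placeholders.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class JoinGroupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // -> group_id
        props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "worker-1");       // -> group_instance_id
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "45000");         // -> session_timeout_ms
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000");      // -> rebalance_timeout_ms
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic")); // placeholder topic
            // The first poll drives the JoinGroup handshake with the group coordinator.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("fetched " + records.count() + " records");
        }
    }
}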
Heartbeat API (Key: 12):
Requests:
Heartbeat Request (Version: 0) => group_id generation_id member_id 
  group_id => STRING
  generation_id => INT32
  member_id => STRING

Request header version: 1

FieldDescription
group_idThe group id.
generation_idThe generation of the group.
member_idThe member ID.
Heartbeat Request (Version: 1) => group_id generation_id member_id 
  group_id => STRING
  generation_id => INT32
  member_id => STRING

Request header version: 1

FieldDescription
group_idThe group id.
generation_idThe generation of the group.
member_idThe member ID.
Heartbeat Request (Version: 2) => group_id generation_id member_id 
  group_id => STRING
  generation_id => INT32
  member_id => STRING

Request header version: 1

FieldDescription
group_idThe group id.
generation_idThe generation of the group.
member_idThe member ID.
Heartbeat Request (Version: 3) => group_id generation_id member_id group_instance_id 
  group_id => STRING
  generation_id => INT32
  member_id => STRING
  group_instance_id => NULLABLE_STRING

Request header version: 1

FieldDescription
group_idThe group id.
generation_idThe generation of the group.
member_idThe member ID.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
Heartbeat Request (Version: 4) => group_id generation_id member_id group_instance_id _tagged_fields 
  group_id => COMPACT_STRING
  generation_id => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING

Request header version: 2

FieldDescription
group_idThe group id.
generation_idThe generation of the group.
member_idThe member ID.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
_tagged_fieldsThe tagged fields
Responses:
Heartbeat Response (Version: 0) => error_code 
  error_code => INT16

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
Heartbeat Response (Version: 1) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
Heartbeat Response (Version: 2) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
Heartbeat Response (Version: 3) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
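Heartbeat requests are sent by a background thread inside the consumer, not by application code. Two configs drive them: heartbeat.interval.ms sets how often a Heartbeat request is sent, and session.timeout.ms is how long the coordinator waits for one before evicting the member. A small configuration sketch with illustrative values (the usual guidance is to keep the interval at roughly one third of the session timeout):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class HeartbeatTuningExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The coordinator evicts the member if no heartbeat arrives within this window.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        // The consumer's background heartbeat thread sends a Heartbeat request this often.
        props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "10000");
        System.out.println(props);
    }
}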
LeaveGroup API (Key: 13):
Requests:
LeaveGroup Request (Version: 0) => group_id member_id 
  group_id => STRING
  member_id => STRING

Request header version: 1

FieldDescription
group_idThe ID of the group to leave.
member_idThe member ID to remove from the group.
LeaveGroup Request (Version: 1) => group_id member_id 
  group_id => STRING
  member_id => STRING

Request header version: 1

FieldDescription
group_idThe ID of the group to leave.
member_idThe member ID to remove from the group.
LeaveGroup Request (Version: 2) => group_id member_id 
  group_id => STRING
  member_id => STRING

Request header version: 1

FieldDescription
group_idThe ID of the group to leave.
member_idThe member ID to remove from the group.
LeaveGroup Request (Version: 3) => group_id [members] 
  group_id => STRING
  members => member_id group_instance_id 
    member_id => STRING
    group_instance_id => NULLABLE_STRING

Request header version: 1

FieldDescription
group_idThe ID of the group to leave.
membersList of leaving member identities.
member_idThe member ID to remove from the group.
group_instance_idThe group instance ID to remove from the group.
LeaveGroup Request (Version: 4) => group_id [members] _tagged_fields 
  group_id => COMPACT_STRING
  members => member_id group_instance_id _tagged_fields 
    member_id => COMPACT_STRING
    group_instance_id => COMPACT_NULLABLE_STRING

Request header version: 2

FieldDescription
group_idThe ID of the group to leave.
membersList of leaving member identities.
member_idThe member ID to remove from the group.
group_instance_idThe group instance ID to remove from the group.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
LeaveGroup Request (Version: 5) => group_id [members] _tagged_fields 
  group_id => COMPACT_STRING
  members => member_id group_instance_id reason _tagged_fields 
    member_id => COMPACT_STRING
    group_instance_id => COMPACT_NULLABLE_STRING
    reason => COMPACT_NULLABLE_STRING

Request header version: 2

FieldDescription
group_idThe ID of the group to leave.
membersList of leaving member identities.
member_idThe member ID to remove from the group.
group_instance_idThe group instance ID to remove from the group.
reasonThe reason why the member left the group.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
LeaveGroup Response (Version: 0) => error_code 
  error_code => INT16

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
LeaveGroup Response (Version: 1) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
LeaveGroup Response (Version: 2) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
LeaveGroup Response (Version: 3) => throttle_time_ms error_code [members] 
  throttle_time_ms => INT32
  error_code => INT16
  members => member_id group_instance_id error_code 
    member_id => STRING
    group_instance_id => NULLABLE_STRING
    error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
membersList of leaving member responses.
member_idThe member ID to remove from the group.
group_instance_idThe group instance ID to remove from the group.
error_codeThe error code, or 0 if there was no error.
LeaveGroup Response (Version: 4) => throttle_time_ms error_code [members] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  members => member_id group_instance_id error_code _tagged_fields 
    member_id => COMPACT_STRING
    group_instance_id => COMPACT_NULLABLE_STRING
    error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
membersList of leaving member responses.
member_idThe member ID to remove from the group.
group_instance_idThe group instance ID to remove from the group.
error_codeThe error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
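A dynamic member sends LeaveGroup when the consumer is closed or unsubscribes; static members (those with a group.instance.id) stay in the group until the session times out or an operator removes them. The Admin API exposes the latter path. A rough sketch, assuming a static member with instance id "worker-1" in group "my-group" (placeholder names and broker address):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.MemberToRemove;
import org.apache.kafka.clients.admin.RemoveMembersFromConsumerGroupOptions;

public class LeaveGroupExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            // Removes the static member identified by its group.instance.id; the client
            // carries this out with a LeaveGroup request on the member's behalf.
            RemoveMembersFromConsumerGroupOptions options =
                new RemoveMembersFromConsumerGroupOptions(
                    Collections.singleton(new MemberToRemove("worker-1")));
            admin.removeMembersFromConsumerGroup("my-group", options).all().get();
            System.out.println("removed worker-1 from my-group");
        }
    }
}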
SyncGroup API (Key: 14):
Requests:
SyncGroup Request (Version: 0) => group_id generation_id member_id [assignments] 
  group_id => STRING
  generation_id => INT32
  member_id => STRING
  assignments => member_id assignment 
    member_id => STRING
    assignment => BYTES

Request header version: 1

FieldDescription
group_idThe unique group identifier.
generation_idThe generation of the group.
member_idThe member ID assigned by the group.
assignmentsEach assignment.
member_idThe ID of the member to assign.
assignmentThe member assignment.
SyncGroup Request (Version: 1) => group_id generation_id member_id [assignments] 
  group_id => STRING
  generation_id => INT32
  member_id => STRING
  assignments => member_id assignment 
    member_id => STRING
    assignment => BYTES

Request header version: 1

FieldDescription
group_idThe unique group identifier.
generation_idThe generation of the group.
member_idThe member ID assigned by the group.
assignmentsEach assignment.
member_idThe ID of the member to assign.
assignmentThe member assignment.
SyncGroup Request (Version: 2) => group_id generation_id member_id [assignments] 
  group_id => STRING
  generation_id => INT32
  member_id => STRING
  assignments => member_id assignment 
    member_id => STRING
    assignment => BYTES

Request header version: 1

FieldDescription
group_idThe unique group identifier.
generation_idThe generation of the group.
member_idThe member ID assigned by the group.
assignmentsEach assignment.
member_idThe ID of the member to assign.
assignmentThe member assignment.
SyncGroup Request (Version: 3) => group_id generation_id member_id group_instance_id [assignments] 
  group_id => STRING
  generation_id => INT32
  member_id => STRING
  group_instance_id => NULLABLE_STRING
  assignments => member_id assignment 
    member_id => STRING
    assignment => BYTES

Request header version: 1

FieldDescription
group_idThe unique group identifier.
generation_idThe generation of the group.
member_idThe member ID assigned by the group.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
assignmentsEach assignment.
member_idThe ID of the member to assign.
assignmentThe member assignment.
SyncGroup Request (Version: 4) => group_id generation_id member_id group_instance_id [assignments] _tagged_fields 
  group_id => COMPACT_STRING
  generation_id => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  assignments => member_id assignment _tagged_fields 
    member_id => COMPACT_STRING
    assignment => COMPACT_BYTES

Request header version: 2

FieldDescription
group_idThe unique group identifier.
generation_idThe generation of the group.
member_idThe member ID assigned by the group.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
assignmentsEach assignment.
member_idThe ID of the member to assign.
assignmentThe member assignment.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
SyncGroup Request (Version: 5) => group_id generation_id member_id group_instance_id protocol_type protocol_name [assignments] _tagged_fields 
  group_id => COMPACT_STRING
  generation_id => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  protocol_type => COMPACT_NULLABLE_STRING
  protocol_name => COMPACT_NULLABLE_STRING
  assignments => member_id assignment _tagged_fields 
    member_id => COMPACT_STRING
    assignment => COMPACT_BYTES

Request header version: 2

FieldDescription
group_idThe unique group identifier.
generation_idThe generation of the group.
member_idThe member ID assigned by the group.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
protocol_typeThe group protocol type.
protocol_nameThe group protocol name.
assignmentsEach assignment.
member_idThe ID of the member to assign.
assignmentThe member assignment.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
SyncGroup Response (Version: 0) => error_code assignment 
  error_code => INT16
  assignment => BYTES

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
assignmentThe member assignment.
SyncGroup Response (Version: 1) => throttle_time_ms error_code assignment 
  throttle_time_ms => INT32
  error_code => INT16
  assignment => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
assignmentThe member assignment.
SyncGroup Response (Version: 2) => throttle_time_ms error_code assignment 
  throttle_time_ms => INT32
  error_code => INT16
  assignment => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
assignmentThe member assignment.
SyncGroup Response (Version: 3) => throttle_time_ms error_code assignment 
  throttle_time_ms => INT32
  error_code => INT16
  assignment => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
assignmentThe member assignment.
SyncGroup Response (Version: 4) => throttle_time_ms error_code assignment _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  assignment => COMPACT_BYTES

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
assignmentThe member assignment.
_tagged_fieldsThe tagged fields
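Under the classic protocol, the elected group leader computes assignments client-side and sends them in its SyncGroup request; every member then receives its own slice in the SyncGroup response. In the Java consumer that slice surfaces through the rebalance callback, as in this sketch (broker address, group, and topic names are placeholders):

import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SyncGroupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    System.out.println("revoked: " + partitions);
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // The partitions delivered here are this member's share of the
                    // assignment distributed via the SyncGroup response.
                    System.out.println("assigned: " + partitions);
                }
            });
            consumer.poll(Duration.ofSeconds(5)); // drives the join/sync handshake and callbacks
        }
    }
}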
DescribeGroups API (Key: 15):
Requests:
DescribeGroups Request (Version: 0) => [groups] 
  groups => STRING

Request header version: 1

FieldDescription
groupsThe names of the groups to describe.
DescribeGroups Request (Version: 1) => [groups] 
  groups => STRING

Request header version: 1

FieldDescription
groupsThe names of the groups to describe.
DescribeGroups Request (Version: 2) => [groups] 
  groups => STRING

Request header version: 1

FieldDescription
groupsThe names of the groups to describe.
DescribeGroups Request (Version: 3) => [groups] include_authorized_operations 
  groups => STRING
  include_authorized_operations => BOOLEAN

Request header version: 1

FieldDescription
groupsThe names of the groups to describe.
include_authorized_operationsWhether to include authorized operations.
DescribeGroups Request (Version: 4) => [groups] include_authorized_operations 
  groups => STRING
  include_authorized_operations => BOOLEAN

Request header version: 1

FieldDescription
groupsThe names of the groups to describe.
include_authorized_operationsWhether to include authorized operations.
DescribeGroups Request (Version: 5) => [groups] include_authorized_operations _tagged_fields 
  groups => COMPACT_STRING
  include_authorized_operations => BOOLEAN

Request header version: 2

FieldDescription
groupsThe names of the groups to describe.
include_authorized_operationsWhether to include authorized operations.
_tagged_fieldsThe tagged fields
DescribeGroups Request (Version: 6) => [groups] include_authorized_operations _tagged_fields 
  groups => COMPACT_STRING
  include_authorized_operations => BOOLEAN

Request header version: 2

FieldDescription
groupsThe names of the groups to describe.
include_authorized_operationsWhether to include authorized operations.
_tagged_fieldsThe tagged fields
Responses:
DescribeGroups Response (Version: 0) => [groups] 
  groups => error_code group_id group_state protocol_type protocol_data [members] 
    error_code => INT16
    group_id => STRING
    group_state => STRING
    protocol_type => STRING
    protocol_data => STRING
    members => member_id client_id client_host member_metadata member_assignment 
      member_id => STRING
      client_id => STRING
      client_host => STRING
      member_metadata => BYTES
      member_assignment => BYTES

Response header version: 0

FieldDescription
groupsEach described group.
error_codeThe describe error, or 0 if there was no error.
group_idThe group ID string.
group_stateThe group state string, or the empty string.
protocol_typeThe group protocol type, or the empty string.
protocol_dataThe group protocol data, or the empty string.
membersThe group members.
member_idThe member id.
client_idThe client ID used in the member's latest join group request.
client_hostThe client host.
member_metadataThe metadata corresponding to the current group protocol in use.
member_assignmentThe current assignment provided by the group leader.
DescribeGroups Response (Version: 1) => throttle_time_ms [groups] 
  throttle_time_ms => INT32
  groups => error_code group_id group_state protocol_type protocol_data [members] 
    error_code => INT16
    group_id => STRING
    group_state => STRING
    protocol_type => STRING
    protocol_data => STRING
    members => member_id client_id client_host member_metadata member_assignment 
      member_id => STRING
      client_id => STRING
      client_host => STRING
      member_metadata => BYTES
      member_assignment => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
groupsEach described group.
error_codeThe describe error, or 0 if there was no error.
group_idThe group ID string.
group_stateThe group state string, or the empty string.
protocol_typeThe group protocol type, or the empty string.
protocol_dataThe group protocol data, or the empty string.
membersThe group members.
member_idThe member id.
client_idThe client ID used in the member's latest join group request.
client_hostThe client host.
member_metadataThe metadata corresponding to the current group protocol in use.
member_assignmentThe current assignment provided by the group leader.
DescribeGroups Response (Version: 2) => throttle_time_ms [groups] 
  throttle_time_ms => INT32
  groups => error_code group_id group_state protocol_type protocol_data [members] 
    error_code => INT16
    group_id => STRING
    group_state => STRING
    protocol_type => STRING
    protocol_data => STRING
    members => member_id client_id client_host member_metadata member_assignment 
      member_id => STRING
      client_id => STRING
      client_host => STRING
      member_metadata => BYTES
      member_assignment => BYTES

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
groupsEach described group.
error_codeThe describe error, or 0 if there was no error.
group_idThe group ID string.
group_stateThe group state string, or the empty string.
protocol_typeThe group protocol type, or the empty string.
protocol_dataThe group protocol data, or the empty string.
membersThe group members.
member_idThe member id.
client_idThe client ID used in the member's latest join group request.
client_hostThe client host.
member_metadataThe metadata corresponding to the current group protocol in use.
member_assignmentThe current assignment provided by the group leader.
DescribeGroups Response (Version: 3) => throttle_time_ms [groups] 
  throttle_time_ms => INT32
  groups => error_code group_id group_state protocol_type protocol_data [members] authorized_operations 
    error_code => INT16
    group_id => STRING
    group_state => STRING
    protocol_type => STRING
    protocol_data => STRING
    members => member_id client_id client_host member_metadata member_assignment 
      member_id => STRING
      client_id => STRING
      client_host => STRING
      member_metadata => BYTES
      member_assignment => BYTES
    authorized_operations => INT32

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
groupsEach described group.
error_codeThe describe error, or 0 if there was no error.
group_idThe group ID string.
group_stateThe group state string, or the empty string.
protocol_typeThe group protocol type, or the empty string.
protocol_dataThe group protocol data, or the empty string.
membersThe group members.
member_idThe member id.
client_idThe client ID used in the member's latest join group request.
client_hostThe client host.
member_metadataThe metadata corresponding to the current group protocol in use.
member_assignmentThe current assignment provided by the group leader.
authorized_operations32-bit bitfield to represent authorized operations for this group.
DescribeGroups Response (Version: 4) => throttle_time_ms [groups] 
  throttle_time_ms => INT32
  groups => error_code group_id group_state protocol_type protocol_data [members] authorized_operations 
    error_code => INT16
    group_id => STRING
    group_state => STRING
    protocol_type => STRING
    protocol_data => STRING
    members => member_id group_instance_id client_id client_host member_metadata member_assignment 
      member_id => STRING
      group_instance_id => NULLABLE_STRING
      client_id => STRING
      client_host => STRING
      member_metadata => BYTES
      member_assignment => BYTES
    authorized_operations => INT32

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
groupsEach described group.
error_codeThe describe error, or 0 if there was no error.
group_idThe group ID string.
group_stateThe group state string, or the empty string.
protocol_typeThe group protocol type, or the empty string.
protocol_dataThe group protocol data, or the empty string.
membersThe group members.
member_idThe member id.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
client_idThe client ID used in the member's latest join group request.
client_hostThe client host.
member_metadataThe metadata corresponding to the current group protocol in use.
member_assignmentThe current assignment provided by the group leader.
authorized_operations32-bit bitfield to represent authorized operations for this group.
DescribeGroups Response (Version: 5) => throttle_time_ms [groups] _tagged_fields 
  throttle_time_ms => INT32
  groups => error_code group_id group_state protocol_type protocol_data [members] authorized_operations _tagged_fields 
    error_code => INT16
    group_id => COMPACT_STRING
    group_state => COMPACT_STRING
    protocol_type => COMPACT_STRING
    protocol_data => COMPACT_STRING
    members => member_id group_instance_id client_id client_host member_metadata member_assignment _tagged_fields 
      member_id => COMPACT_STRING
      group_instance_id => COMPACT_NULLABLE_STRING
      client_id => COMPACT_STRING
      client_host => COMPACT_STRING
      member_metadata => COMPACT_BYTES
      member_assignment => COMPACT_BYTES
    authorized_operations => INT32

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
groupsEach described group.
error_codeThe describe error, or 0 if there was no error.
group_idThe group ID string.
group_stateThe group state string, or the empty string.
protocol_typeThe group protocol type, or the empty string.
protocol_dataThe group protocol data, or the empty string.
membersThe group members.
member_idThe member id.
group_instance_idThe unique identifier of the consumer instance provided by the end user.
client_idThe client ID used in the member's latest join group request.
client_hostThe client host.
member_metadataThe metadata corresponding to the current group protocol in use.
member_assignmentThe current assignment provided by the group leader.
_tagged_fieldsThe tagged fields
authorized_operations32-bit bitfield to represent authorized operations for this group.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
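The fields above are what Admin.describeConsumerGroups reports for classic consumer groups; include_authorized_operations corresponds to an option on that call. A minimal sketch with placeholder broker and group names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.clients.admin.DescribeConsumerGroupsOptions;
import org.apache.kafka.clients.admin.MemberDescription;

public class DescribeGroupsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            ConsumerGroupDescription group = admin
                .describeConsumerGroups(
                    Collections.singletonList("my-group"),
                    new DescribeConsumerGroupsOptions().includeAuthorizedOperations(true))
                .all().get()
                .get("my-group");
            System.out.println("state: " + group.state());
            System.out.println("authorized operations: " + group.authorizedOperations());
            for (MemberDescription member : group.members()) {
                System.out.printf("member=%s client=%s host=%s partitions=%s%n",
                    member.consumerId(), member.clientId(), member.host(),
                    member.assignment().topicPartitions());
            }
        }
    }
}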
ListGroups API (Key: 16):
Requests:
ListGroups Request (Version: 0) => 

Request header version: 1

FieldDescription
ListGroups Request (Version: 1) => 

Request header version: 1

FieldDescription
ListGroups Request (Version: 2) => 

Request header version: 1

FieldDescription
ListGroups Request (Version: 3) => _tagged_fields 

Request header version: 2

FieldDescription
_tagged_fieldsThe tagged fields
ListGroups Request (Version: 4) => [states_filter] _tagged_fields 
  states_filter => COMPACT_STRING

Request header version: 2

FieldDescription
states_filterThe states of the groups we want to list. If empty, all groups are returned with their state.
_tagged_fieldsThe tagged fields
ListGroups Request (Version: 5) => [states_filter] [types_filter] _tagged_fields 
  states_filter => COMPACT_STRING
  types_filter => COMPACT_STRING

Request header version: 2

FieldDescription
states_filterThe states of the groups we want to list. If empty, all groups are returned with their state.
types_filterThe types of the groups we want to list. If empty, all groups are returned with their type.
_tagged_fieldsThe tagged fields
Responses:
ListGroups Response (Version: 0) => error_code [groups] 
  error_code => INT16
  groups => group_id protocol_type 
    group_id => STRING
    protocol_type => STRING

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
groupsEach group in the response.
group_idThe group ID.
protocol_typeThe group protocol type.
ListGroups Response (Version: 1) => throttle_time_ms error_code [groups] 
  throttle_time_ms => INT32
  error_code => INT16
  groups => group_id protocol_type 
    group_id => STRING
    protocol_type => STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
groupsEach group in the response.
group_idThe group ID.
protocol_typeThe group protocol type.
ListGroups Response (Version: 2) => throttle_time_ms error_code [groups] 
  throttle_time_ms => INT32
  error_code => INT16
  groups => group_id protocol_type 
    group_id => STRING
    protocol_type => STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
groupsEach group in the response.
group_idThe group ID.
protocol_typeThe group protocol type.
ListGroups Response (Version: 3) => throttle_time_ms error_code [groups] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  groups => group_id protocol_type _tagged_fields 
    group_id => COMPACT_STRING
    protocol_type => COMPACT_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
groupsEach group in the response.
group_idThe group ID.
protocol_typeThe group protocol type.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
ListGroups Response (Version: 4) => throttle_time_ms error_code [groups] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  groups => group_id protocol_type group_state _tagged_fields 
    group_id => COMPACT_STRING
    protocol_type => COMPACT_STRING
    group_state => COMPACT_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
groupsEach group in the response.
group_idThe group ID.
protocol_typeThe group protocol type.
group_stateThe group state name.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
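
As with DescribeGroups, clients usually reach ListGroups through the AdminClient. A minimal sketch that lists consumer groups filtered by state, which the client carries in the states_filter field of request versions 4 and up (broker address illustrative):

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ListConsumerGroupsOptions;
    import org.apache.kafka.common.ConsumerGroupState;

    public class ListGroupsExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
            try (Admin admin = Admin.create(props)) {
                // inStates(...) is sent as the request's states_filter array.
                admin.listConsumerGroups(new ListConsumerGroupsOptions()
                                .inStates(Set.of(ConsumerGroupState.STABLE, ConsumerGroupState.EMPTY)))
                     .all()
                     .get()
                     .forEach(listing -> System.out.println(listing.groupId() + " " + listing.state()));
            }
        }
    }
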
SaslHandshake API (Key: 17):
Requests:
SaslHandshake Request (Version: 0) => mechanism 
  mechanism => STRING

Request header version: 1

FieldDescription
mechanismThe SASL mechanism chosen by the client.
SaslHandshake Request (Version: 1) => mechanism 
  mechanism => STRING

Request header version: 1

FieldDescription
mechanismThe SASL mechanism chosen by the client.
Responses:
SaslHandshake Response (Version: 0) => error_code [mechanisms] 
  error_code => INT16
  mechanisms => STRING

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
mechanismsThe mechanisms enabled in the server.
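
SaslHandshake is issued automatically during connection setup whenever a SASL security protocol is configured; the mechanism field is taken from the client's sasl.mechanism setting, and the broker answers with the mechanisms it has enabled. A minimal client-side sketch, assuming a SASL_SSL listener on localhost:9093 and SCRAM credentials (all values illustrative):

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;

    public class SaslClientExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9093");   // illustrative SASL_SSL listener
            props.put("security.protocol", "SASL_SSL");
            // The mechanism configured here is what the client sends in the SaslHandshake request.
            props.put("sasl.mechanism", "SCRAM-SHA-256");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.scram.ScramLoginModule required "
                    + "username=\"alice\" password=\"alice-secret\";");  // illustrative credentials
            try (Admin admin = Admin.create(props)) {
                System.out.println("cluster id: " + admin.describeCluster().clusterId().get());
            }
        }
    }
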
ApiVersions API (Key: 18):
Requests:
ApiVersions Request (Version: 0) => 

Request header version: 1

FieldDescription
ApiVersions Request (Version: 1) => 

Request header version: 1

FieldDescription
ApiVersions Request (Version: 2) => 

Request header version: 1

FieldDescription
ApiVersions Request (Version: 3) => client_software_name client_software_version _tagged_fields 
  client_software_name => COMPACT_STRING
  client_software_version => COMPACT_STRING

Request header version: 2

FieldDescription
client_software_nameThe name of the client.
client_software_versionThe version of the client.
_tagged_fieldsThe tagged fields
ApiVersions Request (Version: 4) => client_software_name client_software_version _tagged_fields 
  client_software_name => COMPACT_STRING
  client_software_version => COMPACT_STRING

Request header version: 2

FieldDescription
client_software_nameThe name of the client.
client_software_versionThe version of the client.
_tagged_fieldsThe tagged fields
Responses:
ApiVersions Response (Version: 0) => error_code [api_keys] 
  error_code => INT16
  api_keys => api_key min_version max_version 
    api_key => INT16
    min_version => INT16
    max_version => INT16

Response header version: 0

FieldDescription
error_codeThe top-level error code.
api_keysThe APIs supported by the broker.
api_keyThe API index.
min_versionThe minimum supported version, inclusive.
max_versionThe maximum supported version, inclusive.
ApiVersions Response (Version: 1) => error_code [api_keys] throttle_time_ms 
  error_code => INT16
  api_keys => api_key min_version max_version 
    api_key => INT16
    min_version => INT16
    max_version => INT16
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
error_codeThe top-level error code.
api_keysThe APIs supported by the broker.
api_keyThe API index.
min_versionThe minimum supported version, inclusive.
max_versionThe maximum supported version, inclusive.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
ApiVersions Response (Version: 2) => error_code [api_keys] throttle_time_ms 
  error_code => INT16
  api_keys => api_key min_version max_version 
    api_key => INT16
    min_version => INT16
    max_version => INT16
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
error_codeThe top-level error code.
api_keysThe APIs supported by the broker.
api_keyThe API index.
min_versionThe minimum supported version, inclusive.
max_versionThe maximum supported version, inclusive.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
ApiVersions Response (Version: 3) => error_code [api_keys] throttle_time_ms _tagged_fields 
  error_code => INT16
  api_keys => api_key min_version max_version _tagged_fields 
    api_key => INT16
    min_version => INT16
    max_version => INT16
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
error_codeThe top-level error code.
api_keysThe APIs supported by the broker.
api_keyThe API index.
min_versionThe minimum supported version, inclusive.
max_versionThe maximum supported version, inclusive.
_tagged_fieldsThe tagged fields
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
_tagged_fields
TagTagged fieldDescription
0supported_featuresFeatures supported by the broker. Note: in v0-v3, features with MinSupportedVersion = 0 are omitted.
FieldDescription
nameThe name of the feature.
min_versionThe minimum supported version for the feature.
max_versionThe maximum supported version for the feature.
_tagged_fieldsThe tagged fields
1finalized_features_epochThe monotonically increasing epoch for the finalized features information. Valid values are >= 0. A value of -1 is special and represents unknown epoch.
2finalized_featuresList of cluster-wide finalized features. The information is valid only if FinalizedFeaturesEpoch >= 0.
FieldDescription
nameThe name of the feature.
max_version_levelThe cluster-wide finalized max version level for the feature.
min_version_levelThe cluster-wide finalized min version level for the feature.
_tagged_fieldsThe tagged fields
3zk_migration_readySet by a KRaft controller if the required configurations for ZK migration are present.
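
ApiVersions is typically the first request a client sends on a new connection so it can learn which version range of each API the broker supports. Because version 0 uses the non-flexible encoding (request header version 1, response header version 0, no tagged fields), it is small enough to encode by hand; the sketch below frames the request with the usual 4-byte size prefix and decodes the response fields listed above. It assumes a plaintext broker on localhost:9092 and is meant only as an illustration of the wire format:

    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class ApiVersionsProbe {
        public static void main(String[] args) throws Exception {
            // Request header v1: api_key INT16, api_version INT16, correlation_id INT32,
            // client_id NULLABLE_STRING. ApiVersions v0 has an empty request body.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream req = new DataOutputStream(buf);
            req.writeShort(18);                      // api_key = ApiVersions
            req.writeShort(0);                       // api_version = 0
            req.writeInt(1);                         // correlation_id
            byte[] clientId = "probe".getBytes(StandardCharsets.UTF_8);
            req.writeShort(clientId.length);         // NULLABLE_STRING: INT16 length, then bytes
            req.write(clientId);

            try (Socket socket = new Socket("localhost", 9092)) {   // illustrative plaintext listener
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                out.writeInt(buf.size());            // every request is framed by a 4-byte size prefix
                buf.writeTo(out);
                out.flush();

                DataInputStream in = new DataInputStream(socket.getInputStream());
                in.readInt();                        // response size prefix
                in.readInt();                        // response header v0: correlation_id
                short errorCode = in.readShort();    // error_code
                int apiCount = in.readInt();         // [api_keys] array length (non-compact: INT32)
                System.out.println("error_code=" + errorCode);
                for (int i = 0; i < apiCount; i++) {
                    System.out.printf("api_key=%d min=%d max=%d%n",
                            in.readShort(), in.readShort(), in.readShort());
                }
            }
        }
    }
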
CreateTopics API (Key: 19):
Requests:
CreateTopics Request (Version: 2) => [topics] timeout_ms validate_only 
  topics => name num_partitions replication_factor [assignments] [configs] 
    name => STRING
    num_partitions => INT32
    replication_factor => INT16
    assignments => partition_index [broker_ids] 
      partition_index => INT32
      broker_ids => INT32
    configs => name value 
      name => STRING
      value => NULLABLE_STRING
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 1

FieldDescription
topicsThe topics to create.
nameThe topic name.
num_partitionsThe number of partitions to create in the topic, or -1 if we are either specifying a manual partition assignment or using the default partitions.
replication_factorThe number of replicas to create for each partition in the topic, or -1 if we are either specifying a manual partition assignment or using the default replication factor.
assignmentsThe manual partition assignment, or the empty array if we are using automatic assignment.
partition_indexThe partition index.
broker_idsThe brokers to place the partition on.
configsThe custom topic configurations to set.
nameThe configuration name.
valueThe configuration value.
timeout_msHow long to wait in milliseconds before timing out the request.
validate_onlyIf true, check that the topics can be created as specified, but don't create anything.
CreateTopics Request (Version: 3) => [topics] timeout_ms validate_only 
  topics => name num_partitions replication_factor [assignments] [configs] 
    name => STRING
    num_partitions => INT32
    replication_factor => INT16
    assignments => partition_index [broker_ids] 
      partition_index => INT32
      broker_ids => INT32
    configs => name value 
      name => STRING
      value => NULLABLE_STRING
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 1

FieldDescription
topicsThe topics to create.
nameThe topic name.
num_partitionsThe number of partitions to create in the topic, or -1 if we are either specifying a manual partition assignment or using the default partitions.
replication_factorThe number of replicas to create for each partition in the topic, or -1 if we are either specifying a manual partition assignment or using the default replication factor.
assignmentsThe manual partition assignment, or the empty array if we are using automatic assignment.
partition_indexThe partition index.
broker_idsThe brokers to place the partition on.
configsThe custom topic configurations to set.
nameThe configuration name.
valueThe configuration value.
timeout_msHow long to wait in milliseconds before timing out the request.
validate_onlyIf true, check that the topics can be created as specified, but don't create anything.
CreateTopics Request (Version: 4) => [topics] timeout_ms validate_only 
  topics => name num_partitions replication_factor [assignments] [configs] 
    name => STRING
    num_partitions => INT32
    replication_factor => INT16
    assignments => partition_index [broker_ids] 
      partition_index => INT32
      broker_ids => INT32
    configs => name value 
      name => STRING
      value => NULLABLE_STRING
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 1

FieldDescription
topicsThe topics to create.
nameThe topic name.
num_partitionsThe number of partitions to create in the topic, or -1 if we are either specifying a manual partition assignment or using the default partitions.
replication_factorThe number of replicas to create for each partition in the topic, or -1 if we are either specifying a manual partition assignment or using the default replication factor.
assignmentsThe manual partition assignment, or the empty array if we are using automatic assignment.
partition_indexThe partition index.
broker_idsThe brokers to place the partition on.
configsThe custom topic configurations to set.
nameThe configuration name.
valueThe configuration value.
timeout_msHow long to wait in milliseconds before timing out the request.
validate_onlyIf true, check that the topics can be created as specified, but don't create anything.
CreateTopics Request (Version: 5) => [topics] timeout_ms validate_only _tagged_fields 
  topics => name num_partitions replication_factor [assignments] [configs] _tagged_fields 
    name => COMPACT_STRING
    num_partitions => INT32
    replication_factor => INT16
    assignments => partition_index [broker_ids] _tagged_fields 
      partition_index => INT32
      broker_ids => INT32
    configs => name value _tagged_fields 
      name => COMPACT_STRING
      value => COMPACT_NULLABLE_STRING
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 2

FieldDescription
topicsThe topics to create.
nameThe topic name.
num_partitionsThe number of partitions to create in the topic, or -1 if we are either specifying a manual partition assignment or using the default partitions.
replication_factorThe number of replicas to create for each partition in the topic, or -1 if we are either specifying a manual partition assignment or using the default replication factor.
assignmentsThe manual partition assignment, or the empty array if we are using automatic assignment.
partition_indexThe partition index.
broker_idsThe brokers to place the partition on.
_tagged_fieldsThe tagged fields
configsThe custom topic configurations to set.
nameThe configuration name.
valueThe configuration value.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
timeout_msHow long to wait in milliseconds before timing out the request.
validate_onlyIf true, check that the topics can be created as specified, but don't create anything.
_tagged_fieldsThe tagged fields
CreateTopics Request (Version: 6) => [topics] timeout_ms validate_only _tagged_fields 
  topics => name num_partitions replication_factor [assignments] [configs] _tagged_fields 
    name => COMPACT_STRING
    num_partitions => INT32
    replication_factor => INT16
    assignments => partition_index [broker_ids] _tagged_fields 
      partition_index => INT32
      broker_ids => INT32
    configs => name value _tagged_fields 
      name => COMPACT_STRING
      value => COMPACT_NULLABLE_STRING
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 2

FieldDescription
topicsThe topics to create.
nameThe topic name.
num_partitionsThe number of partitions to create in the topic, or -1 if we are either specifying a manual partition assignment or using the default partitions.
replication_factorThe number of replicas to create for each partition in the topic, or -1 if we are either specifying a manual partition assignment or using the default replication factor.
assignmentsThe manual partition assignment, or the empty array if we are using automatic assignment.
partition_indexThe partition index.
broker_idsThe brokers to place the partition on.
_tagged_fieldsThe tagged fields
configsThe custom topic configurations to set.
nameThe configuration name.
valueThe configuration value.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
timeout_msHow long to wait in milliseconds before timing out the request.
validate_onlyIf true, check that the topics can be created as specified, but don't create anything.
_tagged_fieldsThe tagged fields
CreateTopics Request (Version: 7) => [topics] timeout_ms validate_only _tagged_fields 
  topics => name num_partitions replication_factor [assignments] [configs] _tagged_fields 
    name => COMPACT_STRING
    num_partitions => INT32
    replication_factor => INT16
    assignments => partition_index [broker_ids] _tagged_fields 
      partition_index => INT32
      broker_ids => INT32
    configs => name value _tagged_fields 
      name => COMPACT_STRING
      value => COMPACT_NULLABLE_STRING
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 2

FieldDescription
topicsThe topics to create.
nameThe topic name.
num_partitionsThe number of partitions to create in the topic, or -1 if we are either specifying a manual partition assignment or using the default partitions.
replication_factorThe number of replicas to create for each partition in the topic, or -1 if we are either specifying a manual partition assignment or using the default replication factor.
assignmentsThe manual partition assignment, or the empty array if we are using automatic assignment.
partition_indexThe partition index.
broker_idsThe brokers to place the partition on.
_tagged_fieldsThe tagged fields
configsThe custom topic configurations to set.
nameThe configuration name.
valueThe configuration value.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
timeout_msHow long to wait in milliseconds before timing out the request.
validate_onlyIf true, check that the topics can be created as specified, but don't create anything.
_tagged_fieldsThe tagged fields
Responses:
CreateTopics Response (Version: 2) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name error_code error_message 
    name => STRING
    error_code => INT16
    error_message => NULLABLE_STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsResults for each topic we tried to create.
nameThe topic name.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
CreateTopics Response (Version: 3) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name error_code error_message 
    name => STRING
    error_code => INT16
    error_message => NULLABLE_STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsResults for each topic we tried to create.
nameThe topic name.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
CreateTopics Response (Version: 4) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name error_code error_message 
    name => STRING
    error_code => INT16
    error_message => NULLABLE_STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsResults for each topic we tried to create.
nameThe topic name.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
CreateTopics Response (Version: 5) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name error_code error_message num_partitions replication_factor [configs] _tagged_fields 
    name => COMPACT_STRING
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING
    num_partitions => INT32
    replication_factor => INT16
    configs => name value read_only config_source is_sensitive _tagged_fields 
      name => COMPACT_STRING
      value => COMPACT_NULLABLE_STRING
      read_only => BOOLEAN
      config_source => INT8
      is_sensitive => BOOLEAN

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsResults for each topic we tried to create.
nameThe topic name.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
num_partitionsNumber of partitions of the topic.
replication_factorReplication factor of the topic.
configsConfiguration of the topic.
nameThe configuration name.
valueThe configuration value.
read_onlyTrue if the configuration is read-only.
config_sourceThe configuration source.
is_sensitiveTrue if this configuration is sensitive.
_tagged_fieldsThe tagged fields
_tagged_fields
TagTagged fieldDescription
0topic_config_error_codeOptional topic config error returned if configs are not returned in the response.
_tagged_fieldsThe tagged fields
CreateTopics Response (Version: 6) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name error_code error_message num_partitions replication_factor [configs] _tagged_fields 
    name => COMPACT_STRING
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING
    num_partitions => INT32
    replication_factor => INT16
    configs => name value read_only config_source is_sensitive _tagged_fields 
      name => COMPACT_STRING
      value => COMPACT_NULLABLE_STRING
      read_only => BOOLEAN
      config_source => INT8
      is_sensitive => BOOLEAN

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsResults for each topic we tried to create.
nameThe topic name.
error_codeThe error code, or 0 if there was no error.
error_messageThe error message, or null if there was no error.
num_partitionsNumber of partitions of the topic.
replication_factorReplication factor of the topic.
configsConfiguration of the topic.
nameThe configuration name.
valueThe configuration value.
read_onlyTrue if the configuration is read-only.
config_sourceThe configuration source.
is_sensitiveTrue if this configuration is sensitive.
_tagged_fieldsThe tagged fields
_tagged_fields
TagTagged fieldDescription
0topic_config_error_codeOptional topic config error returned if configs are not returned in the response.
_tagged_fieldsThe tagged fields
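
In practice CreateTopics is driven through the AdminClient (or the kafka-topics.sh tool). A minimal sketch that creates a topic with an explicit partition count, replication factor, and per-topic configs, using validate_only to dry-run the request (topic name and broker address illustrative):

    import java.util.Map;
    import java.util.Optional;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.CreateTopicsOptions;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopicExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // illustrative address
            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic("orders", Optional.of(6), Optional.of((short) 3))
                        .configs(Map.of("cleanup.policy", "compact"));   // sent as the configs array
                // validateOnly(true) sets the request's validate_only flag: the broker checks the
                // request but creates nothing.
                admin.createTopics(Set.of(topic), new CreateTopicsOptions().validateOnly(true))
                     .all()
                     .get();
                System.out.println("validation passed");
            }
        }
    }
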
DeleteTopics API (Key: 20):
Requests:
DeleteTopics Request (Version: 1) => [topic_names] timeout_ms 
  topic_names => STRING
  timeout_ms => INT32

Request header version: 1

FieldDescription
topic_namesThe names of the topics to delete.
timeout_msThe length of time in milliseconds to wait for the deletions to complete.
DeleteTopics Request (Version: 2) => [topic_names] timeout_ms 
  topic_names => STRING
  timeout_ms => INT32

Request header version: 1

FieldDescription
topic_namesThe names of the topics to delete.
timeout_msThe length of time in milliseconds to wait for the deletions to complete.
DeleteTopics Request (Version: 3) => [topic_names] timeout_ms 
  topic_names => STRING
  timeout_ms => INT32

Request header version: 1

FieldDescription
topic_namesThe names of the topics to delete.
timeout_msThe length of time in milliseconds to wait for the deletions to complete.
DeleteTopics Request (Version: 4) => [topic_names] timeout_ms _tagged_fields 
  topic_names => COMPACT_STRING
  timeout_ms => INT32

Request header version: 2

FieldDescription
topic_namesThe names of the topics to delete.
timeout_msThe length of time in milliseconds to wait for the deletions to complete.
_tagged_fieldsThe tagged fields
DeleteTopics Request (Version: 5) => [topic_names] timeout_ms _tagged_fields 
  topic_names => COMPACT_STRING
  timeout_ms => INT32

Request header version: 2

FieldDescription
topic_namesThe names of the topics to delete.
timeout_msThe length of time in milliseconds to wait for the deletions to complete.
_tagged_fieldsThe tagged fields
DeleteTopics Request (Version: 6) => [topics] timeout_ms _tagged_fields 
  topics => name topic_id _tagged_fields 
    name => COMPACT_NULLABLE_STRING
    topic_id => UUID
  timeout_ms => INT32

Request header version: 2

FieldDescription
topicsThe topics to delete, each identified by name or topic ID.
nameThe topic name.
topic_idThe unique topic ID.
_tagged_fieldsThe tagged fields
timeout_msThe length of time in milliseconds to wait for the deletions to complete.
_tagged_fieldsThe tagged fields
Responses:
DeleteTopics Response (Version: 1) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => name error_code 
    name => STRING
    error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe results for each topic we tried to delete.
nameThe topic name.
error_codeThe deletion error, or 0 if the deletion succeeded.
DeleteTopics Response (Version: 2) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => name error_code 
    name => STRING
    error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe results for each topic we tried to delete.
nameThe topic name.
error_codeThe deletion error, or 0 if the deletion succeeded.
DeleteTopics Response (Version: 3) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => name error_code 
    name => STRING
    error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe results for each topic we tried to delete.
nameThe topic name.
error_codeThe deletion error, or 0 if the deletion succeeded.
DeleteTopics Response (Version: 4) => throttle_time_ms [responses] _tagged_fields 
  throttle_time_ms => INT32
  responses => name error_code _tagged_fields 
    name => COMPACT_STRING
    error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe results for each topic we tried to delete.
nameThe topic name.
error_codeThe deletion error, or 0 if the deletion succeeded.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
DeleteTopics Response (Version: 5) => throttle_time_ms [responses] _tagged_fields 
  throttle_time_ms => INT32
  responses => name error_code error_message _tagged_fields 
    name => COMPACT_STRING
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe results for each topic we tried to delete.
nameThe topic name.
error_codeThe deletion error, or 0 if the deletion succeeded.
error_messageThe error message, or null if there was no error.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
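
DeleteTopics follows the same pattern; a minimal AdminClient sketch (topic name and broker address illustrative):

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;

    public class DeleteTopicExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // illustrative address
            try (Admin admin = Admin.create(props)) {
                // Each name becomes one entry of the request's topic_names array
                // (or, from version 6 on, the topics array, which may also carry topic IDs).
                admin.deleteTopics(Set.of("orders")).all().get();
            }
        }
    }
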
DeleteRecords API (Key: 21):
Requests:
DeleteRecords Request (Version: 0) => [topics] timeout_ms 
  topics => name [partitions] 
    name => STRING
    partitions => partition_index offset 
      partition_index => INT32
      offset => INT64
  timeout_ms => INT32

Request header version: 1

FieldDescription
topicsEach topic that we want to delete records from.
nameThe topic name.
partitionsEach partition that we want to delete records from.
partition_indexThe partition index.
offsetThe deletion offset.
timeout_msHow long to wait for the deletion to complete, in milliseconds.
DeleteRecords Request (Version: 1) => [topics] timeout_ms 
  topics => name [partitions] 
    name => STRING
    partitions => partition_index offset 
      partition_index => INT32
      offset => INT64
  timeout_ms => INT32

Request header version: 1

FieldDescription
topicsEach topic that we want to delete records from.
nameThe topic name.
partitionsEach partition that we want to delete records from.
partition_indexThe partition index.
offsetThe deletion offset.
timeout_msHow long to wait for the deletion to complete, in milliseconds.
DeleteRecords Request (Version: 2) => [topics] timeout_ms _tagged_fields 
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index offset _tagged_fields 
      partition_index => INT32
      offset => INT64
  timeout_ms => INT32

Request header version: 2

FieldDescription
topicsEach topic that we want to delete records from.
nameThe topic name.
partitionsEach partition that we want to delete records from.
partition_indexThe partition index.
offsetThe deletion offset.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
timeout_msHow long to wait for the deletion to complete, in milliseconds.
_tagged_fieldsThe tagged fields
Responses:
DeleteRecords Response (Version: 0) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index low_watermark error_code 
      partition_index => INT32
      low_watermark => INT64
      error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsEach topic that we wanted to delete records from.
nameThe topic name.
partitionsEach partition that we wanted to delete records from.
partition_indexThe partition index.
low_watermarkThe partition low water mark.
error_codeThe deletion error code, or 0 if the deletion succeeded.
DeleteRecords Response (Version: 1) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index low_watermark error_code 
      partition_index => INT32
      low_watermark => INT64
      error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsEach topic that we wanted to delete records from.
nameThe topic name.
partitionsEach partition that we wanted to delete records from.
partition_indexThe partition index.
low_watermarkThe partition low water mark.
error_codeThe deletion error code, or 0 if the deletion succeeded.
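
DeleteRecords advances a partition's log start offset so that everything before the requested offset becomes eligible for deletion, and the broker reports the resulting low_watermark. A minimal AdminClient sketch (topic, partition, offset, and broker address illustrative):

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.RecordsToDelete;
    import org.apache.kafka.common.TopicPartition;

    public class DeleteRecordsExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // illustrative address
            try (Admin admin = Admin.create(props)) {
                TopicPartition tp = new TopicPartition("orders", 0);   // illustrative partition
                // beforeOffset(1000L) becomes the partition's offset field in the request;
                // the broker answers with the new low_watermark.
                admin.deleteRecords(Map.of(tp, RecordsToDelete.beforeOffset(1000L)))
                     .lowWatermarks()
                     .forEach((partition, future) -> {
                         try {
                             System.out.println(partition + " low watermark: " + future.get().lowWatermark());
                         } catch (Exception e) {
                             throw new RuntimeException(e);
                         }
                     });
            }
        }
    }
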
InitProducerId API (Key: 22):
Requests:
InitProducerId Request (Version: 0) => transactional_id transaction_timeout_ms 
  transactional_id => NULLABLE_STRING
  transaction_timeout_ms => INT32

Request header version: 1

FieldDescription
transactional_idThe transactional id, or null if the producer is not transactional.
transaction_timeout_msThe time in ms to wait before aborting idle transactions sent by this producer. This is only relevant if a TransactionalId has been defined.
InitProducerId Request (Version: 1) => transactional_id transaction_timeout_ms 
  transactional_id => NULLABLE_STRING
  transaction_timeout_ms => INT32

Request header version: 1

FieldDescription
transactional_idThe transactional id, or null if the producer is not transactional.
transaction_timeout_msThe time in ms to wait before aborting idle transactions sent by this producer. This is only relevant if a TransactionalId has been defined.
InitProducerId Request (Version: 2) => transactional_id transaction_timeout_ms _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  transaction_timeout_ms => INT32

Request header version: 2

FieldDescription
transactional_idThe transactional id, or null if the producer is not transactional.
transaction_timeout_msThe time in ms to wait before aborting idle transactions sent by this producer. This is only relevant if a TransactionalId has been defined.
_tagged_fieldsThe tagged fields
InitProducerId Request (Version: 3) => transactional_id transaction_timeout_ms producer_id producer_epoch _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  transaction_timeout_ms => INT32
  producer_id => INT64
  producer_epoch => INT16

Request header version: 2

FieldDescription
transactional_idThe transactional id, or null if the producer is not transactional.
transaction_timeout_msThe time in ms to wait before aborting idle transactions sent by this producer. This is only relevant if a TransactionalId has been defined.
producer_idThe producer id. This is used to disambiguate requests if a transactional id is reused following its expiration.
producer_epochThe producer's current epoch. This will be checked against the producer epoch on the broker, and the request will return an error if they do not match.
_tagged_fieldsThe tagged fields
InitProducerId Request (Version: 4) => transactional_id transaction_timeout_ms producer_id producer_epoch _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  transaction_timeout_ms => INT32
  producer_id => INT64
  producer_epoch => INT16

Request header version: 2

FieldDescription
transactional_idThe transactional id, or null if the producer is not transactional.
transaction_timeout_msThe time in ms to wait before aborting idle transactions sent by this producer. This is only relevant if a TransactionalId has been defined.
producer_idThe producer id. This is used to disambiguate requests if a transactional id is reused following its expiration.
producer_epochThe producer's current epoch. This will be checked against the producer epoch on the broker, and the request will return an error if they do not match.
_tagged_fieldsThe tagged fields
InitProducerId Request (Version: 5) => transactional_id transaction_timeout_ms producer_id producer_epoch _tagged_fields 
  transactional_id => COMPACT_NULLABLE_STRING
  transaction_timeout_ms => INT32
  producer_id => INT64
  producer_epoch => INT16

Request header version: 2

FieldDescription
transactional_idThe transactional id, or null if the producer is not transactional.
transaction_timeout_msThe time in ms to wait before aborting idle transactions sent by this producer. This is only relevant if a TransactionalId has been defined.
producer_idThe producer id. This is used to disambiguate requests if a transactional id is reused following its expiration.
producer_epochThe producer's current epoch. This will be checked against the producer epoch on the broker, and the request will return an error if they do not match.
_tagged_fieldsThe tagged fields
Responses:
InitProducerId Response (Version: 0) => throttle_time_ms error_code producer_id producer_epoch 
  throttle_time_ms => INT32
  error_code => INT16
  producer_id => INT64
  producer_epoch => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
producer_idThe current producer id.
producer_epochThe current epoch associated with the producer id.
InitProducerId Response (Version: 1) => throttle_time_ms error_code producer_id producer_epoch 
  throttle_time_ms => INT32
  error_code => INT16
  producer_id => INT64
  producer_epoch => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
producer_idThe current producer id.
producer_epochThe current epoch associated with the producer id.
InitProducerId Response (Version: 2) => throttle_time_ms error_code producer_id producer_epoch _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  producer_id => INT64
  producer_epoch => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
producer_idThe current producer id.
producer_epochThe current epoch associated with the producer id.
_tagged_fieldsThe tagged fields
InitProducerId Response (Version: 3) => throttle_time_ms error_code producer_id producer_epoch _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  producer_id => INT64
  producer_epoch => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
producer_idThe current producer id.
producer_epochThe current epoch associated with the producer id.
_tagged_fieldsThe tagged fields
InitProducerId Response (Version: 4) => throttle_time_ms error_code producer_id producer_epoch _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  producer_id => INT64
  producer_epoch => INT16

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
producer_idThe current producer id.
producer_epochThe current epoch associated with the producer id.
_tagged_fieldsThe tagged fields
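
InitProducerId is sent by the idempotent and transactional producer rather than called directly by applications. A minimal sketch of the transactional configuration that triggers it (broker address and transactional id illustrative); initTransactions() sends InitProducerId with the configured transactional.id and transaction timeout, and the broker returns the producer_id and producer_epoch that fence off any older instance using the same transactional id:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class InitProducerIdExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // illustrative address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-tx-1");     // illustrative id
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Triggers InitProducerId and waits for the assigned producer id and epoch.
                producer.initTransactions();
            }
        }
    }
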
OffsetForLeaderEpoch API (Key: 23):
Requests:
OffsetForLeaderEpoch Request (Version: 2) => [topics] 
  topics => topic [partitions] 
    topic => STRING
    partitions => partition current_leader_epoch leader_epoch 
      partition => INT32
      current_leader_epoch => INT32
      leader_epoch => INT32

Request header version: 1

FieldDescription
topicsEach topic to get offsets for.
topicThe topic name.
partitionsEach partition to get offsets for.
partitionThe partition index.
current_leader_epochAn epoch used to fence consumers/replicas with old metadata. If the epoch provided by the client is larger than the current epoch known to the broker, then the UNKNOWN_LEADER_EPOCH error code will be returned. If the provided epoch is smaller, then the FENCED_LEADER_EPOCH error code will be returned.
leader_epochThe epoch to look up an offset for.
OffsetForLeaderEpoch Request (Version: 3) => replica_id [topics] 
  replica_id => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => partition current_leader_epoch leader_epoch 
      partition => INT32
      current_leader_epoch => INT32
      leader_epoch => INT32

Request header version: 1

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
topicsEach topic to get offsets for.
topicThe topic name.
partitionsEach partition to get offsets for.
partitionThe partition index.
current_leader_epochAn epoch used to fence consumers/replicas with old metadata. If the epoch provided by the client is larger than the current epoch known to the broker, then the UNKNOWN_LEADER_EPOCH error code will be returned. If the provided epoch is smaller, then the FENCED_LEADER_EPOCH error code will be returned.
leader_epochThe epoch to look up an offset for.
OffsetForLeaderEpoch Request (Version: 4) => replica_id [topics] _tagged_fields 
  replica_id => INT32
  topics => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => partition current_leader_epoch leader_epoch _tagged_fields 
      partition => INT32
      current_leader_epoch => INT32
      leader_epoch => INT32

Request header version: 2

FieldDescription
replica_idThe broker ID of the follower, or -1 if this request is from a consumer.
topicsEach topic to get offsets for.
topicThe topic name.
partitionsEach partition to get offsets for.
partitionThe partition index.
current_leader_epochAn epoch used to fence consumers/replicas with old metadata. If the epoch provided by the client is larger than the current epoch known to the broker, then the UNKNOWN_LEADER_EPOCH error code will be returned. If the provided epoch is smaller, then the FENCED_LEADER_EPOCH error code will be returned.
leader_epochThe epoch to look up an offset for.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
OffsetForLeaderEpoch Response (Version: 2) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => error_code partition leader_epoch end_offset 
      error_code => INT16
      partition => INT32
      leader_epoch => INT32
      end_offset => INT64

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsEach topic we fetched offsets for.
topicThe topic name.
partitionsEach partition in the topic we fetched offsets for.
error_codeThe error code, or 0 if there was no error.
partitionThe partition index.
leader_epochThe leader epoch of the partition.
end_offsetThe end offset of the epoch.
OffsetForLeaderEpoch Response (Version: 3) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => topic [partitions] 
    topic => STRING
    partitions => error_code partition leader_epoch end_offset 
      error_code => INT16
      partition => INT32
      leader_epoch => INT32
      end_offset => INT64

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topicsEach topic we fetched offsets for.
topicThe topic name.
partitionsEach partition in the topic we fetched offsets for.
error_codeThe error code, or 0 if there was no error.
partitionThe partition index.
leader_epochThe leader epoch of the partition.
end_offsetThe end offset of the epoch.
AddPartitionsToTxn API (Key: 24):
Requests:
AddPartitionsToTxn Request (Version: 0) => v3_and_below_transactional_id v3_and_below_producer_id v3_and_below_producer_epoch [v3_and_below_topics] 
  v3_and_below_transactional_id => STRING
  v3_and_below_producer_id => INT64
  v3_and_below_producer_epoch => INT16
  v3_and_below_topics => name [partitions] 
    name => STRING
    partitions => INT32

Request header version: 1

FieldDescription
v3_and_below_transactional_idThe transactional id corresponding to the transaction.
v3_and_below_producer_idCurrent producer id in use by the transactional id.
v3_and_below_producer_epochCurrent epoch associated with the producer id.
v3_and_below_topicsThe partitions to add to the transaction.
nameThe name of the topic.
partitionsThe partition indexes to add to the transaction.
AddPartitionsToTxn Request (Version: 1) => v3_and_below_transactional_id v3_and_below_producer_id v3_and_below_producer_epoch [v3_and_below_topics] 
  v3_and_below_transactional_id => STRING
  v3_and_below_producer_id => INT64
  v3_and_below_producer_epoch => INT16
  v3_and_below_topics => name [partitions] 
    name => STRING
    partitions => INT32

Request header version: 1

FieldDescription
v3_and_below_transactional_idThe transactional id corresponding to the transaction.
v3_and_below_producer_idCurrent producer id in use by the transactional id.
v3_and_below_producer_epochCurrent epoch associated with the producer id.
v3_and_below_topicsThe partitions to add to the transaction.
nameThe name of the topic.
partitionsThe partition indexes to add to the transaction.
AddPartitionsToTxn Request (Version: 2) => v3_and_below_transactional_id v3_and_below_producer_id v3_and_below_producer_epoch [v3_and_below_topics] 
  v3_and_below_transactional_id => STRING
  v3_and_below_producer_id => INT64
  v3_and_below_producer_epoch => INT16
  v3_and_below_topics => name [partitions] 
    name => STRING
    partitions => INT32

Request header version: 1

FieldDescription
v3_and_below_transactional_idThe transactional id corresponding to the transaction.
v3_and_below_producer_idCurrent producer id in use by the transactional id.
v3_and_below_producer_epochCurrent epoch associated with the producer id.
v3_and_below_topicsThe partitions to add to the transaction.
nameThe name of the topic.
partitionsThe partition indexes to add to the transaction.
AddPartitionsToTxn Request (Version: 3) => v3_and_below_transactional_id v3_and_below_producer_id v3_and_below_producer_epoch [v3_and_below_topics] _tagged_fields 
  v3_and_below_transactional_id => COMPACT_STRING
  v3_and_below_producer_id => INT64
  v3_and_below_producer_epoch => INT16
  v3_and_below_topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => INT32

Request header version: 2

FieldDescription
v3_and_below_transactional_idThe transactional id corresponding to the transaction.
v3_and_below_producer_idCurrent producer id in use by the transactional id.
v3_and_below_producer_epochCurrent epoch associated with the producer id.
v3_and_below_topicsThe partitions to add to the transaction.
nameThe name of the topic.
partitionsThe partition indexes to add to the transaction.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
AddPartitionsToTxn Request (Version: 4) => [transactions] _tagged_fields 
  transactions => transactional_id producer_id producer_epoch verify_only [topics] _tagged_fields 
    transactional_id => COMPACT_STRING
    producer_id => INT64
    producer_epoch => INT16
    verify_only => BOOLEAN
    topics => name [partitions] _tagged_fields 
      name => COMPACT_STRING
      partitions => INT32

Request header version: 2

FieldDescription
transactionsList of transactions to add partitions to.
transactional_idThe transactional id corresponding to the transaction.
producer_idCurrent producer id in use by the transactional id.
producer_epochCurrent epoch associated with the producer id.
verify_onlyBoolean to signify if we want to check if the partition is in the transaction rather than add it.
topicsThe partitions to add to the transaction.
nameThe name of the topic.
partitionsThe partition indexes to add to the transaction.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
AddPartitionsToTxn Request (Version: 5) => [transactions] _tagged_fields 
  transactions => transactional_id producer_id producer_epoch verify_only [topics] _tagged_fields 
    transactional_id => COMPACT_STRING
    producer_id => INT64
    producer_epoch => INT16
    verify_only => BOOLEAN
    topics => name [partitions] _tagged_fields 
      name => COMPACT_STRING
      partitions => INT32

Request header version: 2

FieldDescription
transactionsList of transactions to add partitions to.
transactional_idThe transactional id corresponding to the transaction.
producer_idCurrent producer id in use by the transactional id.
producer_epochCurrent epoch associated with the producer id.
verify_onlyBoolean to signify if we want to check if the partition is in the transaction rather than add it.
topicsThe partitions to add to the transaction.
nameThe name of the topic.
partitionsThe partition indexes to add to the transaction.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
AddPartitionsToTxn Response (Version: 0) => throttle_time_ms [results_by_topic_v3_and_below] 
  throttle_time_ms => INT32
  results_by_topic_v3_and_below => name [results_by_partition] 
    name => STRING
    results_by_partition => partition_index partition_error_code 
      partition_index => INT32
      partition_error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results_by_topic_v3_and_belowThe results for each topic.
nameThe topic name.
results_by_partitionThe results for each partition.
partition_indexThe partition indexes.
partition_error_codeThe response error code.
AddPartitionsToTxn Response (Version: 1) => throttle_time_ms [results_by_topic_v3_and_below] 
  throttle_time_ms => INT32
  results_by_topic_v3_and_below => name [results_by_partition] 
    name => STRING
    results_by_partition => partition_index partition_error_code 
      partition_index => INT32
      partition_error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results_by_topic_v3_and_belowThe results for each topic.
nameThe topic name.
results_by_partitionThe results for each partition.
partition_indexThe partition indexes.
partition_error_codeThe response error code.
AddPartitionsToTxn Response (Version: 2) => throttle_time_ms [results_by_topic_v3_and_below] 
  throttle_time_ms => INT32
  results_by_topic_v3_and_below => name [results_by_partition] 
    name => STRING
    results_by_partition => partition_index partition_error_code 
      partition_index => INT32
      partition_error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results_by_topic_v3_and_belowThe results for each topic.
nameThe topic name.
results_by_partitionThe results for each partition.
partition_indexThe partition indexes.
partition_error_codeThe response error code.
AddPartitionsToTxn Response (Version: 3) => throttle_time_ms [results_by_topic_v3_and_below] _tagged_fields 
  throttle_time_ms => INT32
  results_by_topic_v3_and_below => name [results_by_partition] _tagged_fields 
    name => COMPACT_STRING
    results_by_partition => partition_index partition_error_code _tagged_fields 
      partition_index => INT32
      partition_error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results_by_topic_v3_and_belowThe results for each topic.
nameThe topic name.
results_by_partitionThe results for each partition.
partition_indexThe partition indexes.
partition_error_codeThe response error code.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
AddPartitionsToTxn Response (Version: 4) => throttle_time_ms error_code [results_by_transaction] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  results_by_transaction => transactional_id [topic_results] _tagged_fields 
    transactional_id => COMPACT_STRING
    topic_results => name [results_by_partition] _tagged_fields 
      name => COMPACT_STRING
      results_by_partition => partition_index partition_error_code _tagged_fields 
        partition_index => INT32
        partition_error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe response top level error code.
results_by_transactionResults categorized by transactional ID.
transactional_idThe transactional id corresponding to the transaction.
topic_resultsThe results for each topic.
nameThe topic name.
results_by_partitionThe results for each partition.
partition_indexThe partition indexes.
partition_error_codeThe response error code.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
AddOffsetsToTxn API (Key: 25):
Requests:
AddOffsetsToTxn Request (Version: 0) => transactional_id producer_id producer_epoch group_id 
  transactional_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  group_id => STRING

Request header version: 1

FieldDescription
transactional_idThe transactional id corresponding to the transaction.
producer_idCurrent producer id in use by the transactional id.
producer_epochCurrent epoch associated with the producer id.
group_idThe unique group identifier.
AddOffsetsToTxn Request (Version: 1) => transactional_id producer_id producer_epoch group_id 
  transactional_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  group_id => STRING

Request header version: 1

FieldDescription
transactional_idThe transactional id corresponding to the transaction.
producer_idCurrent producer id in use by the transactional id.
producer_epochCurrent epoch associated with the producer id.
group_idThe unique group identifier.
AddOffsetsToTxn Request (Version: 2) => transactional_id producer_id producer_epoch group_id 
  transactional_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  group_id => STRING

Request header version: 1

FieldDescription
transactional_idThe transactional id corresponding to the transaction.
producer_idCurrent producer id in use by the transactional id.
producer_epochCurrent epoch associated with the producer id.
group_idThe unique group identifier.
AddOffsetsToTxn Request (Version: 3) => transactional_id producer_id producer_epoch group_id _tagged_fields 
  transactional_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  group_id => COMPACT_STRING

Request header version: 2

FieldDescription
transactional_idThe transactional id corresponding to the transaction.
producer_idCurrent producer id in use by the transactional id.
producer_epochCurrent epoch associated with the producer id.
group_idThe unique group identifier.
_tagged_fieldsThe tagged fields
AddOffsetsToTxn Request (Version: 4) => transactional_id producer_id producer_epoch group_id _tagged_fields 
  transactional_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  group_id => COMPACT_STRING

Request header version: 2

FieldDescription
transactional_idThe transactional id corresponding to the transaction.
producer_idCurrent producer id in use by the transactional id.
producer_epochCurrent epoch associated with the producer id.
group_idThe unique group identifier.
_tagged_fieldsThe tagged fields
Responses:
AddOffsetsToTxn Response (Version: 0) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe response error code, or 0 if there was no error.
AddOffsetsToTxn Response (Version: 1) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe response error code, or 0 if there was no error.
AddOffsetsToTxn Response (Version: 2) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe response error code, or 0 if there was no error.
AddOffsetsToTxn Response (Version: 3) => throttle_time_ms error_code _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 1

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe response error code, or 0 if there was no error.
_tagged_fieldsThe tagged fields
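
Applications do not call these transactional RPCs directly either; the Java producer issues them from its transactional API. In the sketch below (topic names, offsets, and group id illustrative, producer configured and initialized as in the InitProducerId sketch), the first send to a new partition triggers AddPartitionsToTxn, sendOffsetsToTransaction maps to AddOffsetsToTxn followed by TxnOffsetCommit to the group coordinator, and commitTransaction or abortTransaction finishes with EndTxn:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;

    public class TransactionFlowExample {
        // 'producer' is assumed to have a transactional.id configured and to have already
        // called initTransactions(), as in the earlier sketch.
        static void processBatch(KafkaProducer<String, String> producer) {
            producer.beginTransaction();                   // local state only, no RPC yet
            try {
                // The first send to each new partition makes the producer issue
                // AddPartitionsToTxn before the records are written.
                producer.send(new ProducerRecord<>("payments-out", "key", "value"));

                // Adds the consumer group's offsets to the transaction
                // (AddOffsetsToTxn, then TxnOffsetCommit to the group coordinator).
                producer.sendOffsetsToTransaction(
                        Map.of(new TopicPartition("payments-in", 0), new OffsetAndMetadata(42L)),
                        new ConsumerGroupMetadata("payments-group"));

                producer.commitTransaction();              // EndTxn with committed = true
            } catch (RuntimeException e) {
                // Fatal errors such as a fenced producer require closing the producer instead.
                producer.abortTransaction();               // EndTxn with committed = false
                throw e;
            }
        }
    }
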
EndTxn API (Key: 26):
Requests:
EndTxn Request (Version: 0) => transactional_id producer_id producer_epoch committed 
  transactional_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  committed => BOOLEAN

Request header version: 1

FieldDescription
transactional_idThe ID of the transaction to end.
producer_idThe producer ID.
producer_epochThe current epoch associated with the producer.
committedTrue if the transaction was committed, false if it was aborted.
EndTxn Request (Version: 1) => transactional_id producer_id producer_epoch committed 
  transactional_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  committed => BOOLEAN

Request header version: 1

Field descriptions:
transactional_id: The ID of the transaction to end.
producer_id: The producer ID.
producer_epoch: The current epoch associated with the producer.
committed: True if the transaction was committed, false if it was aborted.
EndTxn Request (Version: 2) => transactional_id producer_id producer_epoch committed 
  transactional_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  committed => BOOLEAN

Request header version: 1

Field descriptions:
transactional_id: The ID of the transaction to end.
producer_id: The producer ID.
producer_epoch: The current epoch associated with the producer.
committed: True if the transaction was committed, false if it was aborted.
EndTxn Request (Version: 3) => transactional_id producer_id producer_epoch committed _tagged_fields 
  transactional_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  committed => BOOLEAN

Request header version: 2

Field descriptions:
transactional_id: The ID of the transaction to end.
producer_id: The producer ID.
producer_epoch: The current epoch associated with the producer.
committed: True if the transaction was committed, false if it was aborted.
_tagged_fields: The tagged fields
EndTxn Request (Version: 4) => transactional_id producer_id producer_epoch committed _tagged_fields 
  transactional_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  committed => BOOLEAN

Request header version: 2

Field descriptions:
transactional_id: The ID of the transaction to end.
producer_id: The producer ID.
producer_epoch: The current epoch associated with the producer.
committed: True if the transaction was committed, false if it was aborted.
_tagged_fields: The tagged fields
EndTxn Request (Version: 5) => transactional_id producer_id producer_epoch committed _tagged_fields 
  transactional_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  committed => BOOLEAN

Request header version: 2

Field descriptions:
transactional_id: The ID of the transaction to end.
producer_id: The producer ID.
producer_epoch: The current epoch associated with the producer.
committed: True if the transaction was committed, false if it was aborted.
_tagged_fields: The tagged fields
Responses:
EndTxn Response (Version: 0) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
EndTxn Response (Version: 1) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
EndTxn Response (Version: 2) => throttle_time_ms error_code 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
EndTxn Response (Version: 3) => throttle_time_ms error_code _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
_tagged_fields: The tagged fields
EndTxn Response (Version: 4) => throttle_time_ms error_code _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
_tagged_fields: The tagged fields
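For orientation, the sketch below shows the client-side calls that drive the EndTxn API: KafkaProducer.commitTransaction() ends the transaction with committed = true, while abortTransaction() ends it with committed = false. It is a minimal illustration rather than part of the protocol specification; the bootstrap address, transactional.id, and topic name are placeholder values.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EndTxnSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");      // placeholder address
        props.put("transactional.id", "example-txn-id");       // placeholder transactional id
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();                        // obtains the producer id and epoch
            producer.beginTransaction();
            try {
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
                producer.commitTransaction();                   // EndTxn with committed = true
            } catch (Exception e) {
                producer.abortTransaction();                    // EndTxn with committed = false
            }
        }
    }
}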
WriteTxnMarkers API (Key: 27):
Requests:
WriteTxnMarkers Request (Version: 1) => [markers] _tagged_fields 
  markers => producer_id producer_epoch transaction_result [topics] coordinator_epoch _tagged_fields 
    producer_id => INT64
    producer_epoch => INT16
    transaction_result => BOOLEAN
    topics => name [partition_indexes] _tagged_fields 
      name => COMPACT_STRING
      partition_indexes => INT32
    coordinator_epoch => INT32

Request header version: 2

Field descriptions:
markers: The transaction markers to be written.
producer_id: The current producer ID.
producer_epoch: The current epoch associated with the producer ID.
transaction_result: The result of the transaction to write to the partitions (false = ABORT, true = COMMIT).
topics: Each topic that we want to write transaction marker(s) for.
name: The topic name.
partition_indexes: The indexes of the partitions to write transaction markers for.
_tagged_fields: The tagged fields
coordinator_epoch: Epoch associated with the transaction state partition hosted by this transaction coordinator.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
Responses:
TxnOffsetCommit API (Key: 28):
Requests:
TxnOffsetCommit Request (Version: 0) => transactional_id group_id producer_id producer_epoch [topics] 
  transactional_id => STRING
  group_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field descriptions:
transactional_id: The ID of the transaction.
group_id: The ID of the group.
producer_id: The current producer ID in use by the transactional ID.
producer_epoch: The current epoch associated with the producer ID.
topics: Each topic that we want to commit offsets for.
name: The topic name.
partitions: The partitions inside the topic that we want to commit offsets for.
partition_index: The index of the partition within the topic.
committed_offset: The message offset to be committed.
committed_metadata: Any associated metadata the client wants to keep.
TxnOffsetCommit Request (Version: 1) => transactional_id group_id producer_id producer_epoch [topics] 
  transactional_id => STRING
  group_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field descriptions:
transactional_id: The ID of the transaction.
group_id: The ID of the group.
producer_id: The current producer ID in use by the transactional ID.
producer_epoch: The current epoch associated with the producer ID.
topics: Each topic that we want to commit offsets for.
name: The topic name.
partitions: The partitions inside the topic that we want to commit offsets for.
partition_index: The index of the partition within the topic.
committed_offset: The message offset to be committed.
committed_metadata: Any associated metadata the client wants to keep.
TxnOffsetCommit Request (Version: 2) => transactional_id group_id producer_id producer_epoch [topics] 
  transactional_id => STRING
  group_id => STRING
  producer_id => INT64
  producer_epoch => INT16
  topics => name [partitions] 
    name => STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => NULLABLE_STRING

Request header version: 1

Field descriptions:
transactional_id: The ID of the transaction.
group_id: The ID of the group.
producer_id: The current producer ID in use by the transactional ID.
producer_epoch: The current epoch associated with the producer ID.
topics: Each topic that we want to commit offsets for.
name: The topic name.
partitions: The partitions inside the topic that we want to commit offsets for.
partition_index: The index of the partition within the topic.
committed_offset: The message offset to be committed.
committed_leader_epoch: The leader epoch of the last consumed record.
committed_metadata: Any associated metadata the client wants to keep.
TxnOffsetCommit Request (Version: 3) => transactional_id group_id producer_id producer_epoch generation_id member_id group_instance_id [topics] _tagged_fields 
  transactional_id => COMPACT_STRING
  group_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  generation_id => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata _tagged_fields 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => COMPACT_NULLABLE_STRING

Request header version: 2

Field descriptions:
transactional_id: The ID of the transaction.
group_id: The ID of the group.
producer_id: The current producer ID in use by the transactional ID.
producer_epoch: The current epoch associated with the producer ID.
generation_id: The generation of the consumer.
member_id: The member ID assigned by the group coordinator.
group_instance_id: The unique identifier of the consumer instance provided by end user.
topics: Each topic that we want to commit offsets for.
name: The topic name.
partitions: The partitions inside the topic that we want to commit offsets for.
partition_index: The index of the partition within the topic.
committed_offset: The message offset to be committed.
committed_leader_epoch: The leader epoch of the last consumed record.
committed_metadata: Any associated metadata the client wants to keep.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
TxnOffsetCommit Request (Version: 4) => transactional_id group_id producer_id producer_epoch generation_id member_id group_instance_id [topics] _tagged_fields 
  transactional_id => COMPACT_STRING
  group_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  generation_id => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata _tagged_fields 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => COMPACT_NULLABLE_STRING

Request header version: 2

Field descriptions:
transactional_id: The ID of the transaction.
group_id: The ID of the group.
producer_id: The current producer ID in use by the transactional ID.
producer_epoch: The current epoch associated with the producer ID.
generation_id: The generation of the consumer.
member_id: The member ID assigned by the group coordinator.
group_instance_id: The unique identifier of the consumer instance provided by end user.
topics: Each topic that we want to commit offsets for.
name: The topic name.
partitions: The partitions inside the topic that we want to commit offsets for.
partition_index: The index of the partition within the topic.
committed_offset: The message offset to be committed.
committed_leader_epoch: The leader epoch of the last consumed record.
committed_metadata: Any associated metadata the client wants to keep.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
TxnOffsetCommit Request (Version: 5) => transactional_id group_id producer_id producer_epoch generation_id member_id group_instance_id [topics] _tagged_fields 
  transactional_id => COMPACT_STRING
  group_id => COMPACT_STRING
  producer_id => INT64
  producer_epoch => INT16
  generation_id => INT32
  member_id => COMPACT_STRING
  group_instance_id => COMPACT_NULLABLE_STRING
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index committed_offset committed_leader_epoch committed_metadata _tagged_fields 
      partition_index => INT32
      committed_offset => INT64
      committed_leader_epoch => INT32
      committed_metadata => COMPACT_NULLABLE_STRING

Request header version: 2

Field descriptions:
transactional_id: The ID of the transaction.
group_id: The ID of the group.
producer_id: The current producer ID in use by the transactional ID.
producer_epoch: The current epoch associated with the producer ID.
generation_id: The generation of the consumer.
member_id: The member ID assigned by the group coordinator.
group_instance_id: The unique identifier of the consumer instance provided by end user.
topics: Each topic that we want to commit offsets for.
name: The topic name.
partitions: The partitions inside the topic that we want to commit offsets for.
partition_index: The index of the partition within the topic.
committed_offset: The message offset to be committed.
committed_leader_epoch: The leader epoch of the last consumed record.
committed_metadata: Any associated metadata the client wants to keep.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
Responses:
TxnOffsetCommit Response (Version: 0) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics: The responses for each topic.
name: The topic name.
partitions: The responses for each partition in the topic.
partition_index: The partition index.
error_code: The error code, or 0 if there was no error.
TxnOffsetCommit Response (Version: 1) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics: The responses for each topic.
name: The topic name.
partitions: The responses for each partition in the topic.
partition_index: The partition index.
error_code: The error code, or 0 if there was no error.
TxnOffsetCommit Response (Version: 2) => throttle_time_ms [topics] 
  throttle_time_ms => INT32
  topics => name [partitions] 
    name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics: The responses for each topic.
name: The topic name.
partitions: The responses for each partition in the topic.
partition_index: The partition index.
error_code: The error code, or 0 if there was no error.
TxnOffsetCommit Response (Version: 3) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index error_code _tagged_fields 
      partition_index => INT32
      error_code => INT16

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics: The responses for each topic.
name: The topic name.
partitions: The responses for each partition in the topic.
partition_index: The partition index.
error_code: The error code, or 0 if there was no error.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
TxnOffsetCommit Response (Version: 4) => throttle_time_ms [topics] _tagged_fields 
  throttle_time_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index error_code _tagged_fields 
      partition_index => INT32
      error_code => INT16

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
topics: The responses for each topic.
name: The topic name.
partitions: The responses for each partition in the topic.
partition_index: The partition index.
error_code: The error code, or 0 if there was no error.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
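The AddOffsetsToTxn and TxnOffsetCommit APIs above are both driven by KafkaProducer.sendOffsetsToTransaction() in a consume-process-produce loop, building on the EndTxn sketch earlier. The following sketch illustrates those client calls under placeholder settings (bootstrap address, group id, transactional.id, and topic names); it shows the API usage, not the wire format.

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SendOffsetsSketch {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "localhost:9092");     // placeholder address
        cProps.put("group.id", "example-group");                // placeholder group
        cProps.put("enable.auto.commit", "false");
        cProps.put("isolation.level", "read_committed");
        cProps.put("key.deserializer", StringDeserializer.class.getName());
        cProps.put("value.deserializer", StringDeserializer.class.getName());

        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "localhost:9092");
        pProps.put("transactional.id", "example-txn-id");       // placeholder transactional id
        pProps.put("key.serializer", StringSerializer.class.getName());
        pProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(List.of("input-topic"));          // placeholder topic
            producer.initTransactions();

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if (!records.isEmpty()) {
                producer.beginTransaction();
                Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                for (ConsumerRecord<String, String> record : records) {
                    producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
                    offsets.put(new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1));
                }
                // Registers the group with the transaction (AddOffsetsToTxn) and commits
                // the consumed offsets as part of it (TxnOffsetCommit).
                producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                producer.commitTransaction();                    // EndTxn, committed = true
            }
        }
    }
}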
DescribeAcls API (Key: 29):
Requests:
DescribeAcls Request (Version: 1) => resource_type_filter resource_name_filter pattern_type_filter principal_filter host_filter operation permission_type 
  resource_type_filter => INT8
  resource_name_filter => NULLABLE_STRING
  pattern_type_filter => INT8
  principal_filter => NULLABLE_STRING
  host_filter => NULLABLE_STRING
  operation => INT8
  permission_type => INT8

Request header version: 1

Field descriptions:
resource_type_filter: The resource type.
resource_name_filter: The resource name, or null to match any resource name.
pattern_type_filter: The resource pattern to match.
principal_filter: The principal to match, or null to match any principal.
host_filter: The host to match, or null to match any host.
operation: The operation to match.
permission_type: The permission type to match.
DescribeAcls Request (Version: 2) => resource_type_filter resource_name_filter pattern_type_filter principal_filter host_filter operation permission_type _tagged_fields 
  resource_type_filter => INT8
  resource_name_filter => COMPACT_NULLABLE_STRING
  pattern_type_filter => INT8
  principal_filter => COMPACT_NULLABLE_STRING
  host_filter => COMPACT_NULLABLE_STRING
  operation => INT8
  permission_type => INT8

Request header version: 2

Field descriptions:
resource_type_filter: The resource type.
resource_name_filter: The resource name, or null to match any resource name.
pattern_type_filter: The resource pattern to match.
principal_filter: The principal to match, or null to match any principal.
host_filter: The host to match, or null to match any host.
operation: The operation to match.
permission_type: The permission type to match.
_tagged_fields: The tagged fields
DescribeAcls Request (Version: 3) => resource_type_filter resource_name_filter pattern_type_filter principal_filter host_filter operation permission_type _tagged_fields 
  resource_type_filter => INT8
  resource_name_filter => COMPACT_NULLABLE_STRING
  pattern_type_filter => INT8
  principal_filter => COMPACT_NULLABLE_STRING
  host_filter => COMPACT_NULLABLE_STRING
  operation => INT8
  permission_type => INT8

Request header version: 2

Field descriptions:
resource_type_filter: The resource type.
resource_name_filter: The resource name, or null to match any resource name.
pattern_type_filter: The resource pattern to match.
principal_filter: The principal to match, or null to match any principal.
host_filter: The host to match, or null to match any host.
operation: The operation to match.
permission_type: The permission type to match.
_tagged_fields: The tagged fields
Responses:
DescribeAcls Response (Version: 1) => throttle_time_ms error_code error_message [resources] 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => NULLABLE_STRING
  resources => resource_type resource_name pattern_type [acls] 
    resource_type => INT8
    resource_name => STRING
    pattern_type => INT8
    acls => principal host operation permission_type 
      principal => STRING
      host => STRING
      operation => INT8
      permission_type => INT8

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
error_message: The error message, or null if there was no error.
resources: Each Resource that is referenced in an ACL.
resource_type: The resource type.
resource_name: The resource name.
pattern_type: The resource pattern type.
acls: The ACLs.
principal: The ACL principal.
host: The ACL host.
operation: The ACL operation.
permission_type: The ACL permission type.
DescribeAcls Response (Version: 2) => throttle_time_ms error_code error_message [resources] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => COMPACT_NULLABLE_STRING
  resources => resource_type resource_name pattern_type [acls] _tagged_fields 
    resource_type => INT8
    resource_name => COMPACT_STRING
    pattern_type => INT8
    acls => principal host operation permission_type _tagged_fields 
      principal => COMPACT_STRING
      host => COMPACT_STRING
      operation => INT8
      permission_type => INT8

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
error_message: The error message, or null if there was no error.
resources: Each Resource that is referenced in an ACL.
resource_type: The resource type.
resource_name: The resource name.
pattern_type: The resource pattern type.
acls: The ACLs.
principal: The ACL principal.
host: The ACL host.
operation: The ACL operation.
permission_type: The ACL permission type.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
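On the client side, DescribeAcls is typically issued through Admin.describeAcls(AclBindingFilter). A minimal sketch, assuming a placeholder bootstrap address, that lists every ACL by passing AclBindingFilter.ANY (all request filter fields left as wildcards):

import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;

public class DescribeAclsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            // AclBindingFilter.ANY matches any resource, principal, host, operation and permission type.
            Collection<AclBinding> acls = admin.describeAcls(AclBindingFilter.ANY).values().get();
            acls.forEach(System.out::println);
        }
    }
}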
CreateAcls API (Key: 30):
Requests:
CreateAcls Request (Version: 1) => [creations] 
  creations => resource_type resource_name resource_pattern_type principal host operation permission_type 
    resource_type => INT8
    resource_name => STRING
    resource_pattern_type => INT8
    principal => STRING
    host => STRING
    operation => INT8
    permission_type => INT8

Request header version: 1

Field descriptions:
creations: The ACLs that we want to create.
resource_type: The type of the resource.
resource_name: The resource name for the ACL.
resource_pattern_type: The pattern type for the ACL.
principal: The principal for the ACL.
host: The host for the ACL.
operation: The operation type for the ACL (read, write, etc.).
permission_type: The permission type for the ACL (allow, deny, etc.).
CreateAcls Request (Version: 2) => [creations] _tagged_fields 
  creations => resource_type resource_name resource_pattern_type principal host operation permission_type _tagged_fields 
    resource_type => INT8
    resource_name => COMPACT_STRING
    resource_pattern_type => INT8
    principal => COMPACT_STRING
    host => COMPACT_STRING
    operation => INT8
    permission_type => INT8

Request header version: 2

Field descriptions:
creations: The ACLs that we want to create.
resource_type: The type of the resource.
resource_name: The resource name for the ACL.
resource_pattern_type: The pattern type for the ACL.
principal: The principal for the ACL.
host: The host for the ACL.
operation: The operation type for the ACL (read, write, etc.).
permission_type: The permission type for the ACL (allow, deny, etc.).
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
CreateAcls Request (Version: 3) => [creations] _tagged_fields 
  creations => resource_type resource_name resource_pattern_type principal host operation permission_type _tagged_fields 
    resource_type => INT8
    resource_name => COMPACT_STRING
    resource_pattern_type => INT8
    principal => COMPACT_STRING
    host => COMPACT_STRING
    operation => INT8
    permission_type => INT8

Request header version: 2

Field descriptions:
creations: The ACLs that we want to create.
resource_type: The type of the resource.
resource_name: The resource name for the ACL.
resource_pattern_type: The pattern type for the ACL.
principal: The principal for the ACL.
host: The host for the ACL.
operation: The operation type for the ACL (read, write, etc.).
permission_type: The permission type for the ACL (allow, deny, etc.).
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
Responses:
CreateAcls Response (Version: 1) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => error_code error_message 
    error_code => INT16
    error_message => NULLABLE_STRING

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The results for each ACL creation.
error_code: The result error, or zero if there was no error.
error_message: The result message, or null if there was no error.
CreateAcls Response (Version: 2) => throttle_time_ms [results] _tagged_fields 
  throttle_time_ms => INT32
  results => error_code error_message _tagged_fields 
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The results for each ACL creation.
error_code: The result error, or zero if there was no error.
error_message: The result message, or null if there was no error.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
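CreateAcls is typically issued through Admin.createAcls(), one AclBinding per entry in the creations array. A minimal sketch with placeholder principal, host, and topic values:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class CreateAclsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            // Allow the (placeholder) principal User:alice to read example-topic from any host.
            AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "example-topic", PatternType.LITERAL),
                new AccessControlEntry("User:alice", "*",
                                       AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(binding)).all().get();  // one creation per binding
        }
    }
}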
DeleteAcls API (Key: 31):
Requests:
DeleteAcls Request (Version: 1) => [filters] 
  filters => resource_type_filter resource_name_filter pattern_type_filter principal_filter host_filter operation permission_type 
    resource_type_filter => INT8
    resource_name_filter => NULLABLE_STRING
    pattern_type_filter => INT8
    principal_filter => NULLABLE_STRING
    host_filter => NULLABLE_STRING
    operation => INT8
    permission_type => INT8

Request header version: 1

Field descriptions:
filters: The filters to use when deleting ACLs.
resource_type_filter: The resource type.
resource_name_filter: The resource name.
pattern_type_filter: The pattern type.
principal_filter: The principal filter, or null to accept all principals.
host_filter: The host filter, or null to accept all hosts.
operation: The ACL operation.
permission_type: The permission type.
DeleteAcls Request (Version: 2) => [filters] _tagged_fields 
  filters => resource_type_filter resource_name_filter pattern_type_filter principal_filter host_filter operation permission_type _tagged_fields 
    resource_type_filter => INT8
    resource_name_filter => COMPACT_NULLABLE_STRING
    pattern_type_filter => INT8
    principal_filter => COMPACT_NULLABLE_STRING
    host_filter => COMPACT_NULLABLE_STRING
    operation => INT8
    permission_type => INT8

Request header version: 2

Field descriptions:
filters: The filters to use when deleting ACLs.
resource_type_filter: The resource type.
resource_name_filter: The resource name.
pattern_type_filter: The pattern type.
principal_filter: The principal filter, or null to accept all principals.
host_filter: The host filter, or null to accept all hosts.
operation: The ACL operation.
permission_type: The permission type.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
DeleteAcls Request (Version: 3) => [filters] _tagged_fields 
  filters => resource_type_filter resource_name_filter pattern_type_filter principal_filter host_filter operation permission_type _tagged_fields 
    resource_type_filter => INT8
    resource_name_filter => COMPACT_NULLABLE_STRING
    pattern_type_filter => INT8
    principal_filter => COMPACT_NULLABLE_STRING
    host_filter => COMPACT_NULLABLE_STRING
    operation => INT8
    permission_type => INT8

Request header version: 2

Field descriptions:
filters: The filters to use when deleting ACLs.
resource_type_filter: The resource type.
resource_name_filter: The resource name.
pattern_type_filter: The pattern type.
principal_filter: The principal filter, or null to accept all principals.
host_filter: The host filter, or null to accept all hosts.
operation: The ACL operation.
permission_type: The permission type.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
Responses:
DeleteAcls Response (Version: 1) => throttle_time_ms [filter_results] 
  throttle_time_ms => INT32
  filter_results => error_code error_message [matching_acls] 
    error_code => INT16
    error_message => NULLABLE_STRING
    matching_acls => error_code error_message resource_type resource_name pattern_type principal host operation permission_type 
      error_code => INT16
      error_message => NULLABLE_STRING
      resource_type => INT8
      resource_name => STRING
      pattern_type => INT8
      principal => STRING
      host => STRING
      operation => INT8
      permission_type => INT8

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
filter_results: The results for each filter.
error_code: The error code, or 0 if the filter succeeded.
error_message: The error message, or null if the filter succeeded.
matching_acls: The ACLs which matched this filter.
error_code: The deletion error code, or 0 if the deletion succeeded.
error_message: The deletion error message, or null if the deletion succeeded.
resource_type: The ACL resource type.
resource_name: The ACL resource name.
pattern_type: The ACL resource pattern type.
principal: The ACL principal.
host: The ACL host.
operation: The ACL operation.
permission_type: The ACL permission type.
DeleteAcls Response (Version: 2) => throttle_time_ms [filter_results] _tagged_fields 
  throttle_time_ms => INT32
  filter_results => error_code error_message [matching_acls] _tagged_fields 
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING
    matching_acls => error_code error_message resource_type resource_name pattern_type principal host operation permission_type _tagged_fields 
      error_code => INT16
      error_message => COMPACT_NULLABLE_STRING
      resource_type => INT8
      resource_name => COMPACT_STRING
      pattern_type => INT8
      principal => COMPACT_STRING
      host => COMPACT_STRING
      operation => INT8
      permission_type => INT8

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
filter_results: The results for each filter.
error_code: The error code, or 0 if the filter succeeded.
error_message: The error message, or null if the filter succeeded.
matching_acls: The ACLs which matched this filter.
error_code: The deletion error code, or 0 if the deletion succeeded.
error_message: The deletion error message, or null if the deletion succeeded.
resource_type: The ACL resource type.
resource_name: The ACL resource name.
pattern_type: The ACL resource pattern type.
principal: The ACL principal.
host: The ACL host.
operation: The ACL operation.
permission_type: The ACL permission type.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
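DeleteAcls is typically issued through Admin.deleteAcls(), one AclBindingFilter per entry in the filters array; the result carries the matching ACLs that were removed. A minimal sketch with placeholder values:

import java.util.Collection;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntryFilter;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclBindingFilter;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePatternFilter;
import org.apache.kafka.common.resource.ResourceType;

public class DeleteAclsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            // Matches every ACL on the (placeholder) topic "example-topic", any principal and host.
            AclBindingFilter filter = new AclBindingFilter(
                new ResourcePatternFilter(ResourceType.TOPIC, "example-topic", PatternType.LITERAL),
                new AccessControlEntryFilter(null, null, AclOperation.ANY, AclPermissionType.ANY));
            Collection<AclBinding> deleted = admin.deleteAcls(List.of(filter)).all().get();
            deleted.forEach(b -> System.out.println("Deleted: " + b));
        }
    }
}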
DescribeConfigs API (Key: 32):
Requests:
DescribeConfigs Request (Version: 1) => [resources] include_synonyms 
  resources => resource_type resource_name [configuration_keys] 
    resource_type => INT8
    resource_name => STRING
    configuration_keys => STRING
  include_synonyms => BOOLEAN

Request header version: 1

Field descriptions:
resources: The resources whose configurations we want to describe.
resource_type: The resource type.
resource_name: The resource name.
configuration_keys: The configuration keys to list, or null to list all configuration keys.
include_synonyms: True if we should include all synonyms.
DescribeConfigs Request (Version: 2) => [resources] include_synonyms 
  resources => resource_type resource_name [configuration_keys] 
    resource_type => INT8
    resource_name => STRING
    configuration_keys => STRING
  include_synonyms => BOOLEAN

Request header version: 1

Field descriptions:
resources: The resources whose configurations we want to describe.
resource_type: The resource type.
resource_name: The resource name.
configuration_keys: The configuration keys to list, or null to list all configuration keys.
include_synonyms: True if we should include all synonyms.
DescribeConfigs Request (Version: 3) => [resources] include_synonyms include_documentation 
  resources => resource_type resource_name [configuration_keys] 
    resource_type => INT8
    resource_name => STRING
    configuration_keys => STRING
  include_synonyms => BOOLEAN
  include_documentation => BOOLEAN

Request header version: 1

Field descriptions:
resources: The resources whose configurations we want to describe.
resource_type: The resource type.
resource_name: The resource name.
configuration_keys: The configuration keys to list, or null to list all configuration keys.
include_synonyms: True if we should include all synonyms.
include_documentation: True if we should include configuration documentation.
DescribeConfigs Request (Version: 4) => [resources] include_synonyms include_documentation _tagged_fields 
  resources => resource_type resource_name [configuration_keys] _tagged_fields 
    resource_type => INT8
    resource_name => COMPACT_STRING
    configuration_keys => COMPACT_STRING
  include_synonyms => BOOLEAN
  include_documentation => BOOLEAN

Request header version: 2

Field descriptions:
resources: The resources whose configurations we want to describe.
resource_type: The resource type.
resource_name: The resource name.
configuration_keys: The configuration keys to list, or null to list all configuration keys.
_tagged_fields: The tagged fields
include_synonyms: True if we should include all synonyms.
include_documentation: True if we should include configuration documentation.
_tagged_fields: The tagged fields
Responses:
DescribeConfigs Response (Version: 1) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => error_code error_message resource_type resource_name [configs] 
    error_code => INT16
    error_message => NULLABLE_STRING
    resource_type => INT8
    resource_name => STRING
    configs => name value read_only config_source is_sensitive [synonyms] 
      name => STRING
      value => NULLABLE_STRING
      read_only => BOOLEAN
      config_source => INT8
      is_sensitive => BOOLEAN
      synonyms => name value source 
        name => STRING
        value => NULLABLE_STRING
        source => INT8

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The results for each resource.
error_code: The error code, or 0 if we were able to successfully describe the configurations.
error_message: The error message, or null if we were able to successfully describe the configurations.
resource_type: The resource type.
resource_name: The resource name.
configs: Each listed configuration.
name: The configuration name.
value: The configuration value.
read_only: True if the configuration is read-only.
config_source: The configuration source.
is_sensitive: True if this configuration is sensitive.
synonyms: The synonyms for this configuration key.
name: The synonym name.
value: The synonym value.
source: The synonym source.
DescribeConfigs Response (Version: 2) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => error_code error_message resource_type resource_name [configs] 
    error_code => INT16
    error_message => NULLABLE_STRING
    resource_type => INT8
    resource_name => STRING
    configs => name value read_only config_source is_sensitive [synonyms] 
      name => STRING
      value => NULLABLE_STRING
      read_only => BOOLEAN
      config_source => INT8
      is_sensitive => BOOLEAN
      synonyms => name value source 
        name => STRING
        value => NULLABLE_STRING
        source => INT8

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The results for each resource.
error_code: The error code, or 0 if we were able to successfully describe the configurations.
error_message: The error message, or null if we were able to successfully describe the configurations.
resource_type: The resource type.
resource_name: The resource name.
configs: Each listed configuration.
name: The configuration name.
value: The configuration value.
read_only: True if the configuration is read-only.
config_source: The configuration source.
is_sensitive: True if this configuration is sensitive.
synonyms: The synonyms for this configuration key.
name: The synonym name.
value: The synonym value.
source: The synonym source.
DescribeConfigs Response (Version: 3) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => error_code error_message resource_type resource_name [configs] 
    error_code => INT16
    error_message => NULLABLE_STRING
    resource_type => INT8
    resource_name => STRING
    configs => name value read_only config_source is_sensitive [synonyms] config_type documentation 
      name => STRING
      value => NULLABLE_STRING
      read_only => BOOLEAN
      config_source => INT8
      is_sensitive => BOOLEAN
      synonyms => name value source 
        name => STRING
        value => NULLABLE_STRING
        source => INT8
      config_type => INT8
      documentation => NULLABLE_STRING

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The results for each resource.
error_code: The error code, or 0 if we were able to successfully describe the configurations.
error_message: The error message, or null if we were able to successfully describe the configurations.
resource_type: The resource type.
resource_name: The resource name.
configs: Each listed configuration.
name: The configuration name.
value: The configuration value.
read_only: True if the configuration is read-only.
config_source: The configuration source.
is_sensitive: True if this configuration is sensitive.
synonyms: The synonyms for this configuration key.
name: The synonym name.
value: The synonym value.
source: The synonym source.
config_type: The configuration data type. Type can be one of the following values - BOOLEAN, STRING, INT, SHORT, LONG, DOUBLE, LIST, CLASS, PASSWORD.
documentation: The configuration documentation.
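DescribeConfigs is typically issued through Admin.describeConfigs(), one ConfigResource per entry in the resources array. A minimal sketch with a placeholder topic name and bootstrap address:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class DescribeConfigsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "example-topic");
            Map<ConfigResource, Config> configs = admin.describeConfigs(List.of(topic)).all().get();
            // Print each configuration entry together with the source it was resolved from.
            configs.get(topic).entries().forEach(e ->
                System.out.println(e.name() + " = " + e.value() + " (source: " + e.source() + ")"));
        }
    }
}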
AlterConfigs API (Key: 33):
Requests:
AlterConfigs Request (Version: 0) => [resources] validate_only 
  resources => resource_type resource_name [configs] 
    resource_type => INT8
    resource_name => STRING
    configs => name value 
      name => STRING
      value => NULLABLE_STRING
  validate_only => BOOLEAN

Request header version: 1

Field descriptions:
resources: The updates for each resource.
resource_type: The resource type.
resource_name: The resource name.
configs: The configurations.
name: The configuration key name.
value: The value to set for the configuration key.
validate_only: True if we should validate the request, but not change the configurations.
AlterConfigs Request (Version: 1) => [resources] validate_only 
  resources => resource_type resource_name [configs] 
    resource_type => INT8
    resource_name => STRING
    configs => name value 
      name => STRING
      value => NULLABLE_STRING
  validate_only => BOOLEAN

Request header version: 1

Field descriptions:
resources: The updates for each resource.
resource_type: The resource type.
resource_name: The resource name.
configs: The configurations.
name: The configuration key name.
value: The value to set for the configuration key.
validate_only: True if we should validate the request, but not change the configurations.
AlterConfigs Request (Version: 2) => [resources] validate_only _tagged_fields 
  resources => resource_type resource_name [configs] _tagged_fields 
    resource_type => INT8
    resource_name => COMPACT_STRING
    configs => name value _tagged_fields 
      name => COMPACT_STRING
      value => COMPACT_NULLABLE_STRING
  validate_only => BOOLEAN

Request header version: 2

Field descriptions:
resources: The updates for each resource.
resource_type: The resource type.
resource_name: The resource name.
configs: The configurations.
name: The configuration key name.
value: The value to set for the configuration key.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
validate_only: True if we should validate the request, but not change the configurations.
_tagged_fields: The tagged fields
Responses:
AlterConfigs Response (Version: 0) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => error_code error_message resource_type resource_name 
    error_code => INT16
    error_message => NULLABLE_STRING
    resource_type => INT8
    resource_name => STRING

Response header version: 0

Field descriptions:
throttle_time_ms: Duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responses: The responses for each resource.
error_code: The resource error code.
error_message: The resource error message, or null if there was no error.
resource_type: The resource type.
resource_name: The resource name.
AlterConfigs Response (Version: 1) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => error_code error_message resource_type resource_name 
    error_code => INT16
    error_message => NULLABLE_STRING
    resource_type => INT8
    resource_name => STRING

Response header version: 0

Field descriptions:
throttle_time_ms: Duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responses: The responses for each resource.
error_code: The resource error code.
error_message: The resource error message, or null if there was no error.
resource_type: The resource type.
resource_name: The resource name.
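The legacy Admin.alterConfigs() call maps to this AlterConfigs API; newer clients generally use Admin.incrementalAlterConfigs(), which is carried by the separate IncrementalAlterConfigs API but performs the same kind of configuration update. A minimal sketch of the incremental variant, with placeholder resource and value:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterConfigsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "example-topic");
            // Set retention.ms on the (placeholder) topic to one day.
            AlterConfigOp setRetention =
                new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
        }
    }
}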
AlterReplicaLogDirs API (Key: 34):
Requests:
AlterReplicaLogDirs Request (Version: 1) => [dirs] 
  dirs => path [topics] 
    path => STRING
    topics => name [partitions] 
      name => STRING
      partitions => INT32

Request header version: 1

Field descriptions:
dirs: The alterations to make for each directory.
path: The absolute directory path.
topics: The topics to add to the directory.
name: The topic name.
partitions: The partition indexes.
AlterReplicaLogDirs Request (Version: 2) => [dirs] _tagged_fields 
  dirs => path [topics] _tagged_fields 
    path => COMPACT_STRING
    topics => name [partitions] _tagged_fields 
      name => COMPACT_STRING
      partitions => INT32

Request header version: 2

Field descriptions:
dirs: The alterations to make for each directory.
path: The absolute directory path.
topics: The topics to add to the directory.
name: The topic name.
partitions: The partition indexes.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
Responses:
AlterReplicaLogDirs Response (Version: 1) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => topic_name [partitions] 
    topic_name => STRING
    partitions => partition_index error_code 
      partition_index => INT32
      error_code => INT16

Response header version: 0

Field descriptions:
throttle_time_ms: Duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The results for each topic.
topic_name: The name of the topic.
partitions: The results for each partition.
partition_index: The partition index.
error_code: The error code, or 0 if there was no error.
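AlterReplicaLogDirs is typically issued through Admin.alterReplicaLogDirs(), which maps each TopicPartitionReplica to a target log directory path on the broker. A minimal sketch with placeholder topic, broker id, and directory path:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.TopicPartitionReplica;

public class AlterReplicaLogDirsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            // Move partition 0 of the (placeholder) topic "example-topic" on broker 1
            // to the (placeholder) log directory /data/kafka-logs-2.
            TopicPartitionReplica replica = new TopicPartitionReplica("example-topic", 0, 1);
            admin.alterReplicaLogDirs(Map.of(replica, "/data/kafka-logs-2")).all().get();
        }
    }
}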
DescribeLogDirs API (Key: 35):
Requests:
DescribeLogDirs Request (Version: 1) => [topics] 
  topics => topic [partitions] 
    topic => STRING
    partitions => INT32

Request header version: 1

Field descriptions:
topics: Each topic that we want to describe log directories for, or null for all topics.
topic: The topic name.
partitions: The partition indexes.
DescribeLogDirs Request (Version: 2) => [topics] _tagged_fields 
  topics => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => INT32

Request header version: 2

Field descriptions:
topics: Each topic that we want to describe log directories for, or null for all topics.
topic: The topic name.
partitions: The partition indexes.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
DescribeLogDirs Request (Version: 3) => [topics] _tagged_fields 
  topics => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => INT32

Request header version: 2

Field descriptions:
topics: Each topic that we want to describe log directories for, or null for all topics.
topic: The topic name.
partitions: The partition indexes.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
DescribeLogDirs Request (Version: 4) => [topics] _tagged_fields 
  topics => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => INT32

Request header version: 2

Field descriptions:
topics: Each topic that we want to describe log directories for, or null for all topics.
topic: The topic name.
partitions: The partition indexes.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
Responses:
DescribeLogDirs Response (Version: 1) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => error_code log_dir [topics] 
    error_code => INT16
    log_dir => STRING
    topics => name [partitions] 
      name => STRING
      partitions => partition_index partition_size offset_lag is_future_key 
        partition_index => INT32
        partition_size => INT64
        offset_lag => INT64
        is_future_key => BOOLEAN

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The log directories.
error_code: The error code, or 0 if there was no error.
log_dir: The absolute log directory path.
topics: The topics.
name: The topic name.
partitions: The partitions.
partition_index: The partition index.
partition_size: The size of the log segments in this partition in bytes.
offset_lag: The lag of the log's LEO w.r.t. partition's HW (if it is the current log for the partition) or current replica's LEO (if it is the future log for the partition).
is_future_key: True if this log is created by AlterReplicaLogDirsRequest and will replace the current log of the replica in the future.
DescribeLogDirs Response (Version: 2) => throttle_time_ms [results] _tagged_fields 
  throttle_time_ms => INT32
  results => error_code log_dir [topics] _tagged_fields 
    error_code => INT16
    log_dir => COMPACT_STRING
    topics => name [partitions] _tagged_fields 
      name => COMPACT_STRING
      partitions => partition_index partition_size offset_lag is_future_key _tagged_fields 
        partition_index => INT32
        partition_size => INT64
        offset_lag => INT64
        is_future_key => BOOLEAN

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The log directories.
error_code: The error code, or 0 if there was no error.
log_dir: The absolute log directory path.
topics: The topics.
name: The topic name.
partitions: The partitions.
partition_index: The partition index.
partition_size: The size of the log segments in this partition in bytes.
offset_lag: The lag of the log's LEO w.r.t. partition's HW (if it is the current log for the partition) or current replica's LEO (if it is the future log for the partition).
is_future_key: True if this log is created by AlterReplicaLogDirsRequest and will replace the current log of the replica in the future.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
DescribeLogDirs Response (Version: 3) => throttle_time_ms error_code [results] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  results => error_code log_dir [topics] _tagged_fields 
    error_code => INT16
    log_dir => COMPACT_STRING
    topics => name [partitions] _tagged_fields 
      name => COMPACT_STRING
      partitions => partition_index partition_size offset_lag is_future_key _tagged_fields 
        partition_index => INT32
        partition_size => INT64
        offset_lag => INT64
        is_future_key => BOOLEAN

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_code: The error code, or 0 if there was no error.
results: The log directories.
error_code: The error code, or 0 if there was no error.
log_dir: The absolute log directory path.
topics: The topics.
name: The topic name.
partitions: The partitions.
partition_index: The partition index.
partition_size: The size of the log segments in this partition in bytes.
offset_lag: The lag of the log's LEO w.r.t. partition's HW (if it is the current log for the partition) or current replica's LEO (if it is the future log for the partition).
is_future_key: True if this log is created by AlterReplicaLogDirsRequest and will replace the current log of the replica in the future.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
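DescribeLogDirs is typically issued through Admin.describeLogDirs() with the broker ids to query. A minimal sketch, assuming a recent client where DescribeLogDirsResult exposes allDescriptions(), and a placeholder broker id:

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.LogDirDescription;

public class DescribeLogDirsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            // Describe the log directories of broker 0 (placeholder broker id).
            Map<Integer, Map<String, LogDirDescription>> dirs =
                admin.describeLogDirs(List.of(0)).allDescriptions().get();
            dirs.forEach((broker, byPath) -> byPath.forEach((path, description) ->
                System.out.println("broker " + broker + " dir " + path
                    + " replicas " + description.replicaInfos().keySet())));
        }
    }
}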
SaslAuthenticate API (Key: 36):
Requests:
SaslAuthenticate Request (Version: 0) => auth_bytes 
  auth_bytes => BYTES

Request header version: 1

Field descriptions:
auth_bytes: The SASL authentication bytes from the client, as defined by the SASL mechanism.
SaslAuthenticate Request (Version: 1) => auth_bytes 
  auth_bytes => BYTES

Request header version: 1

Field descriptions:
auth_bytes: The SASL authentication bytes from the client, as defined by the SASL mechanism.
SaslAuthenticate Request (Version: 2) => auth_bytes _tagged_fields 
  auth_bytes => COMPACT_BYTES

Request header version: 2

Field descriptions:
auth_bytes: The SASL authentication bytes from the client, as defined by the SASL mechanism.
_tagged_fields: The tagged fields
Responses:
SaslAuthenticate Response (Version: 0) => error_code error_message auth_bytes 
  error_code => INT16
  error_message => NULLABLE_STRING
  auth_bytes => BYTES

Response header version: 0

Field descriptions:
error_code: The error code, or 0 if there was no error.
error_message: The error message, or null if there was no error.
auth_bytes: The SASL authentication bytes from the server, as defined by the SASL mechanism.
SaslAuthenticate Response (Version: 1) => error_code error_message auth_bytes session_lifetime_ms 
  error_code => INT16
  error_message => NULLABLE_STRING
  auth_bytes => BYTES
  session_lifetime_ms => INT64

Response header version: 0

Field descriptions:
error_code: The error code, or 0 if there was no error.
error_message: The error message, or null if there was no error.
auth_bytes: The SASL authentication bytes from the server, as defined by the SASL mechanism.
session_lifetime_ms: Number of milliseconds after which only re-authentication over the existing connection to create a new session can occur.
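SaslAuthenticate is exchanged automatically during connection setup when a client is configured for SASL; applications configure the mechanism and credentials rather than handling auth_bytes themselves. A minimal client configuration sketch, assuming the PLAIN mechanism, a placeholder listener address, and placeholder credentials:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class SaslClientSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093");   // placeholder SASL listener
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");                        // assumed mechanism
        // Credentials below are placeholders for illustration only.
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"alice\" password=\"alice-secret\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The client performs the SASL handshake and SaslAuthenticate exchange
        // when it connects; application code never sees auth_bytes directly.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // use the producer as usual
        }
    }
}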
CreatePartitions API (Key: 37):
Requests:
CreatePartitions Request (Version: 0) => [topics] timeout_ms validate_only 
  topics => name count [assignments] 
    name => STRING
    count => INT32
    assignments => [broker_ids] 
      broker_ids => INT32
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 1

Field descriptions:
topics: Each topic that we want to create new partitions inside.
name: The topic name.
count: The new partition count.
assignments: The new partition assignments.
broker_ids: The assigned broker IDs.
timeout_ms: The time in ms to wait for the partitions to be created.
validate_only: If true, then validate the request, but don't actually increase the number of partitions.
CreatePartitions Request (Version: 1) => [topics] timeout_ms validate_only 
  topics => name count [assignments] 
    name => STRING
    count => INT32
    assignments => [broker_ids] 
      broker_ids => INT32
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 1

Field descriptions:
topics: Each topic that we want to create new partitions inside.
name: The topic name.
count: The new partition count.
assignments: The new partition assignments.
broker_ids: The assigned broker IDs.
timeout_ms: The time in ms to wait for the partitions to be created.
validate_only: If true, then validate the request, but don't actually increase the number of partitions.
CreatePartitions Request (Version: 2) => [topics] timeout_ms validate_only _tagged_fields 
  topics => name count [assignments] _tagged_fields 
    name => COMPACT_STRING
    count => INT32
    assignments => [broker_ids] _tagged_fields 
      broker_ids => INT32
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 2

Field descriptions:
topics: Each topic that we want to create new partitions inside.
name: The topic name.
count: The new partition count.
assignments: The new partition assignments.
broker_ids: The assigned broker IDs.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
timeout_ms: The time in ms to wait for the partitions to be created.
validate_only: If true, then validate the request, but don't actually increase the number of partitions.
_tagged_fields: The tagged fields
CreatePartitions Request (Version: 3) => [topics] timeout_ms validate_only _tagged_fields 
  topics => name count [assignments] _tagged_fields 
    name => COMPACT_STRING
    count => INT32
    assignments => [broker_ids] _tagged_fields 
      broker_ids => INT32
  timeout_ms => INT32
  validate_only => BOOLEAN

Request header version: 2

Field descriptions:
topics: Each topic that we want to create new partitions inside.
name: The topic name.
count: The new partition count.
assignments: The new partition assignments.
broker_ids: The assigned broker IDs.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
timeout_ms: The time in ms to wait for the partitions to be created.
validate_only: If true, then validate the request, but don't actually increase the number of partitions.
_tagged_fields: The tagged fields
Responses:
CreatePartitions Response (Version: 0) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => name error_code error_message 
    name => STRING
    error_code => INT16
    error_message => NULLABLE_STRING

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The partition creation results for each topic.
name: The topic name.
error_code: The result error, or zero if there was no error.
error_message: The result message, or null if there was no error.
CreatePartitions Response (Version: 1) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => name error_code error_message 
    name => STRING
    error_code => INT16
    error_message => NULLABLE_STRING

Response header version: 0

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The partition creation results for each topic.
name: The topic name.
error_code: The result error, or zero if there was no error.
error_message: The result message, or null if there was no error.
CreatePartitions Response (Version: 2) => throttle_time_ms [results] _tagged_fields 
  throttle_time_ms => INT32
  results => name error_code error_message _tagged_fields 
    name => COMPACT_STRING
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING

Response header version: 1

Field descriptions:
throttle_time_ms: The duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
results: The partition creation results for each topic.
name: The topic name.
error_code: The result error, or zero if there was no error.
error_message: The result message, or null if there was no error.
_tagged_fields: The tagged fields
_tagged_fields: The tagged fields
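CreatePartitions is typically issued through Admin.createPartitions(), one NewPartitions entry per topic. A minimal sketch that grows a placeholder topic to six partitions and lets the brokers pick the new assignments:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewPartitions;

public class CreatePartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder address
        try (Admin admin = Admin.create(props)) {
            // Increase the partition count of the (placeholder) topic "example-topic" to 6;
            // omitting explicit assignments lets the brokers place the new replicas.
            admin.createPartitions(Map.of("example-topic", NewPartitions.increaseTo(6))).all().get();
        }
    }
}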
CreateDelegationToken API (Key: 38):
Requests:
CreateDelegationToken Request (Version: 1) => [renewers] max_lifetime_ms 
  renewers => principal_type principal_name 
    principal_type => STRING
    principal_name => STRING
  max_lifetime_ms => INT64

Request header version: 1

Field descriptions:
renewers: A list of those who are allowed to renew this token before it expires.
principal_type: The type of the Kafka principal.
principal_name: The name of the Kafka principal.
max_lifetime_ms: The maximum lifetime of the token in milliseconds, or -1 to use the server side default.
CreateDelegationToken Request (Version: 2) => [renewers] max_lifetime_ms _tagged_fields 
  renewers => principal_type principal_name _tagged_fields 
    principal_type => COMPACT_STRING
    principal_name => COMPACT_STRING
  max_lifetime_ms => INT64

Request header version: 2

FieldDescription
renewersA list of those who are allowed to renew this token before it expires.
principal_typeThe type of the Kafka principal.
principal_nameThe name of the Kafka principal.
_tagged_fieldsThe tagged fields
max_lifetime_msThe maximum lifetime of the token in milliseconds, or -1 to use the server side default.
_tagged_fieldsThe tagged fields
CreateDelegationToken Request (Version: 3) => owner_principal_type owner_principal_name [renewers] max_lifetime_ms _tagged_fields 
  owner_principal_type => COMPACT_NULLABLE_STRING
  owner_principal_name => COMPACT_NULLABLE_STRING
  renewers => principal_type principal_name _tagged_fields 
    principal_type => COMPACT_STRING
    principal_name => COMPACT_STRING
  max_lifetime_ms => INT64

Request header version: 2

FieldDescription
owner_principal_typeThe principal type of the owner of the token. If it's null it defaults to the token request principal.
owner_principal_nameThe principal name of the owner of the token. If it's null it defaults to the token request principal.
renewersA list of those who are allowed to renew this token before it expires.
principal_typeThe type of the Kafka principal.
principal_nameThe name of the Kafka principal.
_tagged_fieldsThe tagged fields
max_lifetime_msThe maximum lifetime of the token in milliseconds, or -1 to use the server side default.
_tagged_fieldsThe tagged fields
Responses:
CreateDelegationToken Response (Version: 1) => error_code principal_type principal_name issue_timestamp_ms expiry_timestamp_ms max_timestamp_ms token_id hmac throttle_time_ms 
  error_code => INT16
  principal_type => STRING
  principal_name => STRING
  issue_timestamp_ms => INT64
  expiry_timestamp_ms => INT64
  max_timestamp_ms => INT64
  token_id => STRING
  hmac => BYTES
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
error_codeThe top-level error, or zero if there was no error.
principal_typeThe principal type of the token owner.
principal_nameThe name of the token owner.
issue_timestamp_msWhen this token was generated.
expiry_timestamp_msWhen this token expires.
max_timestamp_msThe maximum lifetime of this token.
token_idThe token UUID.
hmacHMAC of the delegation token.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
CreateDelegationToken Response (Version: 2) => error_code principal_type principal_name issue_timestamp_ms expiry_timestamp_ms max_timestamp_ms token_id hmac throttle_time_ms _tagged_fields 
  error_code => INT16
  principal_type => COMPACT_STRING
  principal_name => COMPACT_STRING
  issue_timestamp_ms => INT64
  expiry_timestamp_ms => INT64
  max_timestamp_ms => INT64
  token_id => COMPACT_STRING
  hmac => COMPACT_BYTES
  throttle_time_ms => INT32

Response header version: 1

FieldDescription
error_codeThe top-level error, or zero if there was no error.
principal_typeThe principal type of the token owner.
principal_nameThe name of the token owner.
issue_timestamp_msWhen this token was generated.
expiry_timestamp_msWhen this token expires.
max_timestamp_msThe maximum lifetime of this token.
token_idThe token UUID.
hmacHMAC of the delegation token.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
_tagged_fieldsThe tagged fields
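
As a client-level illustration of this API (not part of the wire schema), the sketch below creates a delegation token with the Java Admin client, which sends a CreateDelegationToken request. The broker address and the renewer principal "alice" are placeholder assumptions, and the SASL security settings that delegation tokens require are omitted.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreateDelegationTokenOptions;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.token.delegation.DelegationToken;

import java.util.List;
import java.util.Properties;

public class CreateTokenExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        // Delegation tokens require an authenticated connection; SASL settings are omitted here.

        try (Admin admin = Admin.create(props)) {
            // Allow the principal User:alice to renew the token. No explicit lifetime is set,
            // so the broker-side default is used for max_lifetime_ms.
            CreateDelegationTokenOptions options = new CreateDelegationTokenOptions()
                    .renewers(List.of(new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "alice")));

            DelegationToken token = admin.createDelegationToken(options)
                                         .delegationToken()
                                         .get();
            System.out.println("token id: " + token.tokenInfo().tokenId());
            System.out.println("hmac    : " + token.hmacAsBase64String());
        }
    }
}
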
RenewDelegationToken API (Key: 39):
Requests:
RenewDelegationToken Request (Version: 1) => hmac renew_period_ms 
  hmac => BYTES
  renew_period_ms => INT64

Request header version: 1

FieldDescription
hmacThe HMAC of the delegation token to be renewed.
renew_period_msThe renewal time period in milliseconds.
RenewDelegationToken Request (Version: 2) => hmac renew_period_ms _tagged_fields 
  hmac => COMPACT_BYTES
  renew_period_ms => INT64

Request header version: 2

FieldDescription
hmacThe HMAC of the delegation token to be renewed.
renew_period_msThe renewal time period in milliseconds.
_tagged_fieldsThe tagged fields
Responses:
RenewDelegationToken Response (Version: 1) => error_code expiry_timestamp_ms throttle_time_ms 
  error_code => INT16
  expiry_timestamp_ms => INT64
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
expiry_timestamp_msThe timestamp in milliseconds at which this token expires.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
ExpireDelegationToken API (Key: 40):
Requests:
ExpireDelegationToken Request (Version: 1) => hmac expiry_time_period_ms 
  hmac => BYTES
  expiry_time_period_ms => INT64

Request header version: 1

FieldDescription
hmacThe HMAC of the delegation token to be expired.
expiry_time_period_msThe expiry time period in milliseconds.
ExpireDelegationToken Request (Version: 2) => hmac expiry_time_period_ms _tagged_fields 
  hmac => COMPACT_BYTES
  expiry_time_period_ms => INT64

Request header version: 2

FieldDescription
hmacThe HMAC of the delegation token to be expired.
expiry_time_period_msThe expiry time period in milliseconds.
_tagged_fieldsThe tagged fields
Responses:
ExpireDelegationToken Response (Version: 1) => error_code expiry_timestamp_ms throttle_time_ms 
  error_code => INT16
  expiry_timestamp_ms => INT64
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
expiry_timestamp_msThe timestamp in milliseconds at which this token expires.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
DescribeDelegationToken API (Key: 41):
Requests:
DescribeDelegationToken Request (Version: 1) => [owners] 
  owners => principal_type principal_name 
    principal_type => STRING
    principal_name => STRING

Request header version: 1

FieldDescription
ownersEach owner that we want to describe delegation tokens for, or null to describe all tokens.
principal_typeThe owner principal type.
principal_nameThe owner principal name.
DescribeDelegationToken Request (Version: 2) => [owners] _tagged_fields 
  owners => principal_type principal_name _tagged_fields 
    principal_type => COMPACT_STRING
    principal_name => COMPACT_STRING

Request header version: 2

FieldDescription
ownersEach owner that we want to describe delegation tokens for, or null to describe all tokens.
principal_typeThe owner principal type.
principal_nameThe owner principal name.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
DescribeDelegationToken Request (Version: 3) => [owners] _tagged_fields 
  owners => principal_type principal_name _tagged_fields 
    principal_type => COMPACT_STRING
    principal_name => COMPACT_STRING

Request header version: 2

FieldDescription
ownersEach owner that we want to describe delegation tokens for, or null to describe all tokens.
principal_typeThe owner principal type.
principal_nameThe owner principal name.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
DescribeDelegationToken Response (Version: 1) => error_code [tokens] throttle_time_ms 
  error_code => INT16
  tokens => principal_type principal_name issue_timestamp expiry_timestamp max_timestamp token_id hmac [renewers] 
    principal_type => STRING
    principal_name => STRING
    issue_timestamp => INT64
    expiry_timestamp => INT64
    max_timestamp => INT64
    token_id => STRING
    hmac => BYTES
    renewers => principal_type principal_name 
      principal_type => STRING
      principal_name => STRING
  throttle_time_ms => INT32

Response header version: 0

FieldDescription
error_codeThe error code, or 0 if there was no error.
tokensThe tokens.
principal_typeThe token principal type.
principal_nameThe token principal name.
issue_timestampThe token issue timestamp in milliseconds.
expiry_timestampThe token expiry timestamp in milliseconds.
max_timestampThe maximum lifetime timestamp of the token in milliseconds.
token_idThe token ID.
hmacThe token HMAC.
renewersThose who are able to renew this token before it expires.
principal_typeThe renewer principal type.
principal_nameThe renewer principal name.
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
DescribeDelegationToken Response (Version: 2) => error_code [tokens] throttle_time_ms _tagged_fields 
  error_code => INT16
  tokens => principal_type principal_name issue_timestamp expiry_timestamp max_timestamp token_id hmac [renewers] _tagged_fields 
    principal_type => COMPACT_STRING
    principal_name => COMPACT_STRING
    issue_timestamp => INT64
    expiry_timestamp => INT64
    max_timestamp => INT64
    token_id => COMPACT_STRING
    hmac => COMPACT_BYTES
    renewers => principal_type principal_name _tagged_fields 
      principal_type => COMPACT_STRING
      principal_name => COMPACT_STRING
  throttle_time_ms => INT32

Response header version: 1

FieldDescription
error_codeThe error code, or 0 if there was no error.
tokensThe tokens.
principal_typeThe token principal type.
principal_nameThe token principal name.
issue_timestampThe token issue timestamp in milliseconds.
expiry_timestampThe token expiry timestamp in milliseconds.
max_timestampThe maximum lifetime timestamp of the token in milliseconds.
token_idThe token ID.
hmacThe token HMAC.
renewersThose who are able to renew this token before it expires.
principal_typeThe renewer principal type.
principal_nameThe renewer principal name.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
_tagged_fieldsThe tagged fields
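
The renew, expire and describe APIs above cover the rest of the delegation token lifecycle. A minimal sketch with the Java Admin client follows; it assumes an authenticated connection to localhost:9092 and simply renews every token visible to the caller.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.security.token.delegation.DelegationToken;

import java.util.List;
import java.util.Properties;

public class TokenLifecycleExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // DescribeDelegationToken: list the tokens visible to the authenticated principal.
            List<DelegationToken> tokens = admin.describeDelegationToken()
                                                .delegationTokens()
                                                .get();
            for (DelegationToken token : tokens) {
                System.out.printf("token %s owned by %s expires at %d%n",
                        token.tokenInfo().tokenId(),
                        token.tokenInfo().owner(),
                        token.tokenInfo().expiryTimestamp());
                // RenewDelegationToken: push the expiry forward using the broker defaults.
                long newExpiry = admin.renewDelegationToken(token.hmac())
                                      .expiryTimestamp()
                                      .get();
                System.out.println("renewed until " + newExpiry);
            }
        }
    }
}
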
DeleteGroups API (Key: 42):
Requests:
DeleteGroups Request (Version: 0) => [groups_names] 
  groups_names => STRING

Request header version: 1

FieldDescription
groups_namesThe group names to delete.
DeleteGroups Request (Version: 1) => [groups_names] 
  groups_names => STRING

Request header version: 1

FieldDescription
groups_namesThe group names to delete.
DeleteGroups Request (Version: 2) => [groups_names] _tagged_fields 
  groups_names => COMPACT_STRING

Request header version: 2

FieldDescription
groups_namesThe group names to delete.
_tagged_fieldsThe tagged fields
Responses:
DeleteGroups Response (Version: 0) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => group_id error_code 
    group_id => STRING
    error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
resultsThe deletion results.
group_idThe group id.
error_codeThe deletion error, or 0 if the deletion succeeded.
DeleteGroups Response (Version: 1) => throttle_time_ms [results] 
  throttle_time_ms => INT32
  results => group_id error_code 
    group_id => STRING
    error_code => INT16

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
resultsThe deletion results.
group_idThe group id.
error_codeThe deletion error, or 0 if the deletion succeeded.
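
A client-level sketch of the DeleteGroups API via the Java Admin client is shown below; the broker address and the group ids are placeholder assumptions.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.List;
import java.util.Properties;

public class DeleteGroupsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // The Admin client sends this as a DeleteGroups request. Deletion only
            // succeeds for groups that no longer have active members.
            admin.deleteConsumerGroups(List.of("orders-archiver", "stale-group"))
                 .all()
                 .get();
        }
    }
}
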
ElectLeaders API (Key: 43):
Requests:
ElectLeaders Request (Version: 0) => [topic_partitions] timeout_ms 
  topic_partitions => topic [partitions] 
    topic => STRING
    partitions => INT32
  timeout_ms => INT32

Request header version: 1

FieldDescription
topic_partitionsThe topic partitions to elect leaders.
topicThe name of a topic.
partitionsThe partitions of this topic whose leader should be elected.
timeout_msThe time in ms to wait for the election to complete.
ElectLeaders Request (Version: 1) => election_type [topic_partitions] timeout_ms 
  election_type => INT8
  topic_partitions => topic [partitions] 
    topic => STRING
    partitions => INT32
  timeout_ms => INT32

Request header version: 1

FieldDescription
election_typeType of elections to conduct for the partition. A value of '0' elects the preferred replica. A value of '1' elects the first live replica if there are no in-sync replicas.
topic_partitionsThe topic partitions to elect leaders.
topicThe name of a topic.
partitionsThe partitions of this topic whose leader should be elected.
timeout_msThe time in ms to wait for the election to complete.
ElectLeaders Request (Version: 2) => election_type [topic_partitions] timeout_ms _tagged_fields 
  election_type => INT8
  topic_partitions => topic [partitions] _tagged_fields 
    topic => COMPACT_STRING
    partitions => INT32
  timeout_ms => INT32

Request header version: 2

FieldDescription
election_typeType of elections to conduct for the partition. A value of '0' elects the preferred replica. A value of '1' elects the first live replica if there are no in-sync replicas.
topic_partitionsThe topic partitions to elect leaders.
topicThe name of a topic.
partitionsThe partitions of this topic whose leader should be elected.
_tagged_fieldsThe tagged fields
timeout_msThe time in ms to wait for the election to complete.
_tagged_fieldsThe tagged fields
Responses:
ElectLeaders Response (Version: 0) => throttle_time_ms [replica_election_results] 
  throttle_time_ms => INT32
  replica_election_results => topic [partition_result] 
    topic => STRING
    partition_result => partition_id error_code error_message 
      partition_id => INT32
      error_code => INT16
      error_message => NULLABLE_STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
replica_election_resultsThe election results, or an empty array if the requester did not have permission and the request asks for all partitions.
topicThe topic name.
partition_resultThe results for each partition.
partition_idThe partition id.
error_codeThe result error, or zero if there was no error.
error_messageThe result message, or null if there was no error.
ElectLeaders Response (Version: 1) => throttle_time_ms error_code [replica_election_results] 
  throttle_time_ms => INT32
  error_code => INT16
  replica_election_results => topic [partition_result] 
    topic => STRING
    partition_result => partition_id error_code error_message 
      partition_id => INT32
      error_code => INT16
      error_message => NULLABLE_STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top level response error code.
replica_election_resultsThe election results, or an empty array if the requester did not have permission and the request asks for all partitions.
topicThe topic name.
partition_resultThe results for each partition.
partition_idThe partition id.
error_codeThe result error, or zero if there was no error.
error_messageThe result message, or null if there was no error.
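
The sketch below shows the ElectLeaders API driven from the Java Admin client: it triggers a preferred-replica election (election_type 0) for two partitions. The broker address and the topic "orders" are placeholder assumptions.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

import java.util.Properties;
import java.util.Set;

public class PreferredLeaderElectionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Trigger a preferred-replica election (election_type = 0) for two
            // partitions of the assumed topic "orders".
            Set<TopicPartition> partitions = Set.of(
                    new TopicPartition("orders", 0),
                    new TopicPartition("orders", 1));
            // The per-partition result map holds an error, if any, for each partition.
            admin.electLeaders(ElectionType.PREFERRED, partitions)
                 .partitions()
                 .get()
                 .forEach((tp, error) -> System.out.println(tp + " -> " + error));
        }
    }
}
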
IncrementalAlterConfigs API (Key: 44):
Requests:
IncrementalAlterConfigs Request (Version: 0) => [resources] validate_only 
  resources => resource_type resource_name [configs] 
    resource_type => INT8
    resource_name => STRING
    configs => name config_operation value 
      name => STRING
      config_operation => INT8
      value => NULLABLE_STRING
  validate_only => BOOLEAN

Request header version: 1

FieldDescription
resourcesThe incremental updates for each resource.
resource_typeThe resource type.
resource_nameThe resource name.
configsThe configurations.
nameThe configuration key name.
config_operationThe type of operation (Set, Delete, Append, Subtract).
valueThe value to set for the configuration key.
validate_onlyTrue if we should validate the request, but not change the configurations.
IncrementalAlterConfigs Request (Version: 1) => [resources] validate_only _tagged_fields 
  resources => resource_type resource_name [configs] _tagged_fields 
    resource_type => INT8
    resource_name => COMPACT_STRING
    configs => name config_operation value _tagged_fields 
      name => COMPACT_STRING
      config_operation => INT8
      value => COMPACT_NULLABLE_STRING
  validate_only => BOOLEAN

Request header version: 2

FieldDescription
resourcesThe incremental updates for each resource.
resource_typeThe resource type.
resource_nameThe resource name.
configsThe configurations.
nameThe configuration key name.
config_operationThe type of operation (Set, Delete, Append, Subtract).
valueThe value to set for the configuration key.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
validate_onlyTrue if we should validate the request, but not change the configurations.
_tagged_fieldsThe tagged fields
Responses:
IncrementalAlterConfigs Response (Version: 0) => throttle_time_ms [responses] 
  throttle_time_ms => INT32
  responses => error_code error_message resource_type resource_name 
    error_code => INT16
    error_message => NULLABLE_STRING
    resource_type => INT8
    resource_name => STRING

Response header version: 0

FieldDescription
throttle_time_msDuration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
responsesThe responses for each resource.
error_codeThe resource error code.
error_messageThe resource error message, or null if there was no error.
resource_typeThe resource type.
resource_nameThe resource name.
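
As an illustration of the IncrementalAlterConfigs API, the sketch below applies a single SET operation (config_operation 0) to one topic config via the Java Admin client. The broker address, topic name and retention value are placeholder assumptions.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class TuneTopicRetentionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "orders");
            // SET a single key without touching the topic's other config overrides,
            // which is the point of the incremental variant of this API.
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "604800000"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
        }
    }
}
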
AlterPartitionReassignments API (Key: 45):
Requests:
AlterPartitionReassignments Request (Version: 0) => timeout_ms [topics] _tagged_fields 
  timeout_ms => INT32
  topics => name [partitions] _tagged_fields 
    name => COMPACT_STRING
    partitions => partition_index [replicas] _tagged_fields 
      partition_index => INT32
      replicas => INT32

Request header version: 2

FieldDescription
timeout_msThe time in ms to wait for the request to complete.
topicsThe topics to reassign.
nameThe topic name.
partitionsThe partitions to reassign.
partition_indexThe partition index.
replicasThe replicas to place the partitions on, or null to cancel a pending reassignment for this partition.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
ListPartitionReassignments API (Key: 46):
Requests:
ListPartitionReassignments Request (Version: 0) => timeout_ms [topics] _tagged_fields 
  timeout_ms => INT32
  topics => name [partition_indexes] _tagged_fields 
    name => COMPACT_STRING
    partition_indexes => INT32

Request header version: 2

FieldDescription
timeout_msThe time in ms to wait for the request to complete.
topicsThe topics to list partition reassignments for, or null to list everything.
nameThe topic name.
partition_indexesThe partitions to list partition reassignments for.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
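
The two reassignment APIs above are what the Java Admin client uses for alterPartitionReassignments() and listPartitionReassignments(). A minimal usage sketch follows (it is not a response schema); the broker address, topic and target broker ids are placeholder assumptions.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;

public class MoveReplicasExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // AlterPartitionReassignments: move orders-0 onto brokers 1, 2 and 3.
            // Passing Optional.empty() instead would cancel a pending reassignment.
            admin.alterPartitionReassignments(Map.of(
                    new TopicPartition("orders", 0),
                    Optional.of(new NewPartitionReassignment(List.of(1, 2, 3)))))
                 .all()
                 .get();

            // ListPartitionReassignments: inspect the reassignments still in flight.
            System.out.println(admin.listPartitionReassignments().reassignments().get());
        }
    }
}
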
OffsetDelete API (Key: 47):
Requests:
OffsetDelete Request (Version: 0) => group_id [topics] 
  group_id => STRING
  topics => name [partitions] 
    name => STRING
    partitions => partition_index 
      partition_index => INT32

Request header version: 1

FieldDescription
group_idThe unique group identifier.
topicsThe topics to delete offsets for.
nameThe topic name.
partitionsEach partition to delete offsets for.
partition_indexThe partition index.
Responses:
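
As a usage sketch for the OffsetDelete API (not a response schema), the Java Admin client encodes deleteConsumerGroupOffsets() as an OffsetDelete request. The group id, topic and broker address below are placeholder assumptions, and the group must not be actively consuming the partition.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.TopicPartition;

import java.util.Properties;
import java.util.Set;

public class DropCommittedOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Drop the committed offsets of group "orders-archiver" for orders-0.
            admin.deleteConsumerGroupOffsets("orders-archiver",
                    Set.of(new TopicPartition("orders", 0)))
                 .all()
                 .get();
        }
    }
}
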
DescribeClientQuotas API (Key: 48):
Requests:
DescribeClientQuotas Request (Version: 0) => [components] strict 
  components => entity_type match_type match 
    entity_type => STRING
    match_type => INT8
    match => NULLABLE_STRING
  strict => BOOLEAN

Request header version: 1

FieldDescription
componentsFilter components to apply to quota entities.
entity_typeThe entity type that the filter component applies to.
match_typeHow to match the entity {0 = exact name, 1 = default name, 2 = any specified name}.
matchThe string to match against, or null if unused for the match type.
strictWhether the match is strict, i.e. should exclude entities with unspecified entity types.
DescribeClientQuotas Request (Version: 1) => [components] strict _tagged_fields 
  components => entity_type match_type match _tagged_fields 
    entity_type => COMPACT_STRING
    match_type => INT8
    match => COMPACT_NULLABLE_STRING
  strict => BOOLEAN

Request header version: 2

FieldDescription
componentsFilter components to apply to quota entities.
entity_typeThe entity type that the filter component applies to.
match_typeHow to match the entity {0 = exact name, 1 = default name, 2 = any specified name}.
matchThe string to match against, or null if unused for the match type.
_tagged_fieldsThe tagged fields
strictWhether the match is strict, i.e. should exclude entities with unspecified entity types.
_tagged_fieldsThe tagged fields
Responses:
DescribeClientQuotas Response (Version: 0) => throttle_time_ms error_code error_message [entries] 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => NULLABLE_STRING
  entries => [entity] [values] 
    entity => entity_type entity_name 
      entity_type => STRING
      entity_name => NULLABLE_STRING
    values => key value 
      key => STRING
      value => FLOAT64

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or `0` if the quota description succeeded.
error_messageThe error message, or `null` if the quota description succeeded.
entriesA result entry.
entityThe quota entity description.
entity_typeThe entity type.
entity_nameThe entity name, or null if the default.
valuesThe quota values for the entity.
keyThe quota configuration key.
valueThe quota configuration value.
AlterClientQuotas API (Key: 49):
Requests:
AlterClientQuotas Request (Version: 0) => [entries] validate_only 
  entries => [entity] [ops] 
    entity => entity_type entity_name 
      entity_type => STRING
      entity_name => NULLABLE_STRING
    ops => key value remove 
      key => STRING
      value => FLOAT64
      remove => BOOLEAN
  validate_only => BOOLEAN

Request header version: 1

FieldDescription
entriesThe quota configuration entries to alter.
entityThe quota entity to alter.
entity_typeThe entity type.
entity_nameThe name of the entity, or null if the default.
opsAn individual quota configuration entry to alter.
keyThe quota configuration key.
valueThe value to set; ignored if the value is to be removed.
removeWhether the quota configuration value should be removed, otherwise set.
validate_onlyWhether the alteration should be validated, but not performed.
AlterClientQuotas Request (Version: 1) => [entries] validate_only _tagged_fields 
  entries => [entity] [ops] _tagged_fields 
    entity => entity_type entity_name _tagged_fields 
      entity_type => COMPACT_STRING
      entity_name => COMPACT_NULLABLE_STRING
    ops => key value remove _tagged_fields 
      key => COMPACT_STRING
      value => FLOAT64
      remove => BOOLEAN
  validate_only => BOOLEAN

Request header version: 2

FieldDescription
entriesThe quota configuration entries to alter.
entityThe quota entity to alter.
entity_typeThe entity type.
entity_nameThe name of the entity, or null if the default.
_tagged_fieldsThe tagged fields
opsAn individual quota configuration entry to alter.
keyThe quota configuration key.
valueThe value to set; ignored if the value is to be removed.
removeWhether the quota configuration value should be removed, otherwise set.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
validate_onlyWhether the alteration should be validated, but not performed.
_tagged_fieldsThe tagged fields
Responses:
AlterClientQuotas Response (Version: 0) => throttle_time_ms [entries] 
  throttle_time_ms => INT32
  entries => error_code error_message [entity] 
    error_code => INT16
    error_message => NULLABLE_STRING
    entity => entity_type entity_name 
      entity_type => STRING
      entity_name => NULLABLE_STRING

Response header version: 0

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
entriesThe quota configuration entries to alter.
error_codeThe error code, or `0` if the quota alteration succeeded.
error_messageThe error message, or `null` if the quota alteration succeeded.
entityThe quota entity to alter.
entity_typeThe entity type.
entity_nameThe name of the entity, or null if the default.
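
The sketch below exercises both quota APIs through the Java Admin client: it sets a produce byte-rate quota for an assumed user "alice" (AlterClientQuotas) and then reads it back with an exact-name filter (DescribeClientQuotas). The broker address and quota value are placeholder assumptions.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;
import org.apache.kafka.common.quota.ClientQuotaFilter;
import org.apache.kafka.common.quota.ClientQuotaFilterComponent;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class UserQuotaExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // AlterClientQuotas: cap produce throughput for the assumed user "alice".
            ClientQuotaEntity alice =
                    new ClientQuotaEntity(Map.of(ClientQuotaEntity.USER, "alice"));
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(alice,
                    List.of(new ClientQuotaAlteration.Op("producer_byte_rate", 1_048_576.0)));
            admin.alterClientQuotas(List.of(alteration)).all().get();

            // DescribeClientQuotas: read the quota back with an exact-name filter
            // (match_type = 0 in the request).
            ClientQuotaFilter filter = ClientQuotaFilter.containsOnly(
                    List.of(ClientQuotaFilterComponent.ofEntity(ClientQuotaEntity.USER, "alice")));
            System.out.println(admin.describeClientQuotas(filter).entities().get());
        }
    }
}
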
DescribeUserScramCredentials API (Key: 50):
Requests:
DescribeUserScramCredentials Request (Version: 0) => [users] _tagged_fields 
  users => name _tagged_fields 
    name => COMPACT_STRING

Request header version: 2

FieldDescription
usersThe users to describe, or null/empty to describe all users.
nameThe user name.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
AlterUserScramCredentials API (Key: 51):
Requests:
AlterUserScramCredentials Request (Version: 0) => [deletions] [upsertions] _tagged_fields 
  deletions => name mechanism _tagged_fields 
    name => COMPACT_STRING
    mechanism => INT8
  upsertions => name mechanism iterations salt salted_password _tagged_fields 
    name => COMPACT_STRING
    mechanism => INT8
    iterations => INT32
    salt => COMPACT_BYTES
    salted_password => COMPACT_BYTES

Request header version: 2

FieldDescription
deletionsThe SCRAM credentials to remove.
nameThe user name.
mechanismThe SCRAM mechanism.
_tagged_fieldsThe tagged fields
upsertionsThe SCRAM credentials to update/insert.
nameThe user name.
mechanismThe SCRAM mechanism.
iterationsThe number of iterations.
saltA random salt generated by the client.
salted_passwordThe salted password.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
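
As a usage sketch for the SCRAM credential APIs above, the Java Admin client call below upserts a SCRAM-SHA-256 credential and deletes a SCRAM-SHA-512 one, then lists the configured credentials. The user names, password and broker address are placeholder assumptions; the client derives the salt and salted_password from the plain password before sending the request.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialDeletion;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

import java.util.List;
import java.util.Properties;

public class ManageScramUsersExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Upsert a SCRAM-SHA-256 credential for "alice" and delete the
            // SCRAM-SHA-512 credential of "bob" in a single AlterUserScramCredentials call.
            admin.alterUserScramCredentials(List.of(
                    new UserScramCredentialUpsertion("alice",
                            new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192),
                            "alice-secret"),
                    new UserScramCredentialDeletion("bob", ScramMechanism.SCRAM_SHA_512)))
                 .all()
                 .get();

            // DescribeUserScramCredentials: list which mechanisms each user has configured.
            System.out.println(admin.describeUserScramCredentials().all().get());
        }
    }
}
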
DescribeQuorum API (Key: 55):
Requests:
DescribeQuorum Request (Version: 0) => [topics] _tagged_fields 
  topics => topic_name [partitions] _tagged_fields 
    topic_name => COMPACT_STRING
    partitions => partition_index _tagged_fields 
      partition_index => INT32

Request header version: 2

FieldDescription
topicsThe topics to describe.
topic_nameThe topic name.
partitionsThe partitions to describe.
partition_indexThe partition index.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
DescribeQuorum Request (Version: 1) => [topics] _tagged_fields 
  topics => topic_name [partitions] _tagged_fields 
    topic_name => COMPACT_STRING
    partitions => partition_index _tagged_fields 
      partition_index => INT32

Request header version: 2

FieldDescription
topicsThe topics to describe.
topic_nameThe topic name.
partitionsThe partitions to describe.
partition_indexThe partition index.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
DescribeQuorum Request (Version: 2) => [topics] _tagged_fields 
  topics => topic_name [partitions] _tagged_fields 
    topic_name => COMPACT_STRING
    partitions => partition_index _tagged_fields 
      partition_index => INT32

Request header version: 2

FieldDescription
topicsThe topics to describe.
topic_nameThe topic name.
partitionsThe partitions to describe.
partition_indexThe partition index.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
DescribeQuorum Response (Version: 0) => error_code [topics] _tagged_fields 
  error_code => INT16
  topics => topic_name [partitions] _tagged_fields 
    topic_name => COMPACT_STRING
    partitions => partition_index error_code leader_id leader_epoch high_watermark [current_voters] [observers] _tagged_fields 
      partition_index => INT32
      error_code => INT16
      leader_id => INT32
      leader_epoch => INT32
      high_watermark => INT64
      current_voters => replica_id log_end_offset _tagged_fields 
        replica_id => INT32
        log_end_offset => INT64
      observers => replica_id log_end_offset _tagged_fields 
        replica_id => INT32
        log_end_offset => INT64

Response header version: 1

FieldDescription
error_codeThe top level error code.
topicsThe results for each topic.
topic_nameThe topic name.
partitionsThe partition data.
partition_indexThe partition index.
error_codeThe partition error code.
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
high_watermarkThe high water mark.
current_votersThe current voters of the partition.
replica_idThe ID of the replica.
log_end_offsetThe last known log end offset of the follower or -1 if it is unknown.
_tagged_fieldsThe tagged fields
observersThe observers of the partition.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
DescribeQuorum Response (Version: 1) => error_code [topics] _tagged_fields 
  error_code => INT16
  topics => topic_name [partitions] _tagged_fields 
    topic_name => COMPACT_STRING
    partitions => partition_index error_code leader_id leader_epoch high_watermark [current_voters] [observers] _tagged_fields 
      partition_index => INT32
      error_code => INT16
      leader_id => INT32
      leader_epoch => INT32
      high_watermark => INT64
      current_voters => replica_id log_end_offset last_fetch_timestamp last_caught_up_timestamp _tagged_fields 
        replica_id => INT32
        log_end_offset => INT64
        last_fetch_timestamp => INT64
        last_caught_up_timestamp => INT64
      observers => replica_id log_end_offset last_fetch_timestamp last_caught_up_timestamp _tagged_fields 
        replica_id => INT32
        log_end_offset => INT64
        last_fetch_timestamp => INT64
        last_caught_up_timestamp => INT64

Response header version: 1

FieldDescription
error_codeThe top level error code.
topicsThe results for each topic.
topic_nameThe topic name.
partitionsThe partition data.
partition_indexThe partition index.
error_codeThe partition error code.
leader_idThe ID of the current leader or -1 if the leader is unknown.
leader_epochThe latest known leader epoch.
high_watermarkThe high water mark.
current_votersThe current voters of the partition.
replica_idThe ID of the replica.
log_end_offsetThe last known log end offset of the follower or -1 if it is unknown.
last_fetch_timestampThe last known leader wall clock time when a follower fetched from the leader. This is reported as -1 both for the current leader and when it is unknown for a voter.
last_caught_up_timestampThe leader wall clock append time of the offset for which the follower made the most recent fetch request. This is reported as the current time for the leader and -1 if unknown for a voter.
_tagged_fieldsThe tagged fields
observersThe observers of the partition.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
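
DescribeQuorum backs Admin#describeMetadataQuorum() and the kafka-metadata-quorum.sh tool. A minimal Java sketch, assuming a broker at localhost:9092, prints the current metadata-log leader and the voters' log end offsets:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.QuorumInfo;

import java.util.Properties;

public class InspectRaftQuorumExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // The Admin client issues a DescribeQuorum request for the metadata log
            // and returns the leader, voters and observers.
            QuorumInfo quorum = admin.describeMetadataQuorum().quorumInfo().get();
            System.out.println("leader: " + quorum.leaderId());
            quorum.voters().forEach(v ->
                    System.out.printf("voter %d at log end offset %d%n",
                            v.replicaId(), v.logEndOffset()));
        }
    }
}
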
UpdateFeatures API (Key: 57):
Requests:
UpdateFeatures Request (Version: 0) => timeout_ms [feature_updates] _tagged_fields 
  timeout_ms => INT32
  feature_updates => feature max_version_level allow_downgrade _tagged_fields 
    feature => COMPACT_STRING
    max_version_level => INT16
    allow_downgrade => BOOLEAN

Request header version: 2

FieldDescription
timeout_msHow long to wait in milliseconds before timing out the request.
feature_updatesThe list of updates to finalized features.
featureThe name of the finalized feature to be updated.
max_version_levelThe new maximum version level for the finalized feature. A value >= 1 is valid. A value < 1 is special and can be used to request the deletion of the finalized feature.
allow_downgradeDEPRECATED in version 1 (see upgrade_type). When set to true, the finalized feature version level is allowed to be downgraded/deleted. The downgrade request will fail if the new maximum version level is not lower than the existing maximum finalized version level.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
UpdateFeatures Request (Version: 1) => timeout_ms [feature_updates] validate_only _tagged_fields 
  timeout_ms => INT32
  feature_updates => feature max_version_level upgrade_type _tagged_fields 
    feature => COMPACT_STRING
    max_version_level => INT16
    upgrade_type => INT8
  validate_only => BOOLEAN

Request header version: 2

FieldDescription
timeout_msHow long to wait in milliseconds before timing out the request.
feature_updatesThe list of updates to finalized features.
featureThe name of the finalized feature to be updated.
max_version_levelThe new maximum version level for the finalized feature. A value >= 1 is valid. A value < 1 is special and can be used to request the deletion of the finalized feature.
upgrade_typeDetermines which type of upgrade will be performed: 1 performs an upgrade only (the default), 2 allows safe downgrades only (lossless), 3 allows unsafe downgrades (lossy).
_tagged_fieldsThe tagged fields
validate_onlyTrue if we should validate the request, but not perform the upgrade or downgrade.
_tagged_fieldsThe tagged fields
UpdateFeatures Request (Version: 2) => timeout_ms [feature_updates] validate_only _tagged_fields 
  timeout_ms => INT32
  feature_updates => feature max_version_level upgrade_type _tagged_fields 
    feature => COMPACT_STRING
    max_version_level => INT16
    upgrade_type => INT8
  validate_only => BOOLEAN

Request header version: 2

FieldDescription
timeout_msHow long to wait in milliseconds before timing out the request.
feature_updatesThe list of updates to finalized features.
featureThe name of the finalized feature to be updated.
max_version_levelThe new maximum version level for the finalized feature. A value >= 1 is valid. A value < 1 is special and can be used to request the deletion of the finalized feature.
upgrade_typeDetermines which type of upgrade will be performed: 1 performs an upgrade only (the default), 2 allows safe downgrades only (lossless), 3 allows unsafe downgrades (lossy).
_tagged_fieldsThe tagged fields
validate_onlyTrue if we should validate the request, but not perform the upgrade or downgrade.
_tagged_fieldsThe tagged fields
Responses:
UpdateFeatures Response (Version: 0) => throttle_time_ms error_code error_message [results] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => COMPACT_NULLABLE_STRING
  results => feature error_code error_message _tagged_fields 
    feature => COMPACT_STRING
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top-level error code, or `0` if there was no top-level error.
error_messageThe top-level error message, or `null` if there was no top-level error.
resultsResults for each feature update.
featureThe name of the finalized feature.
error_codeThe feature update error code or `0` if the feature update succeeded.
error_messageThe feature update error, or `null` if the feature update succeeded.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
UpdateFeatures Response (Version: 1) => throttle_time_ms error_code error_message [results] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => COMPACT_NULLABLE_STRING
  results => feature error_code error_message _tagged_fields 
    feature => COMPACT_STRING
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top-level error code, or `0` if there was no top-level error.
error_messageThe top-level error message, or `null` if there was no top-level error.
resultsResults for each feature update.
featureThe name of the finalized feature.
error_codeThe feature update error code or `0` if the feature update succeeded.
error_messageThe feature update error, or `null` if the feature update succeeded.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
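
The sketch below shows how finalized feature levels can be inspected and updated from the Java Admin client; updateFeatures() is encoded as an UpdateFeatures request. The broker address, feature name and target level are placeholder examples only.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.FeatureUpdate;
import org.apache.kafka.clients.admin.UpdateFeaturesOptions;

import java.util.Map;
import java.util.Properties;

public class UpdateFeaturesExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Read the currently finalized feature levels first.
            System.out.println(admin.describeFeatures().featureMetadata().get().finalizedFeatures());

            // UpdateFeatures: raise "metadata.version" to an example level.
            // UpgradeType.UPGRADE corresponds to upgrade_type = 1 (no downgrades allowed).
            FeatureUpdate update = new FeatureUpdate((short) 21, FeatureUpdate.UpgradeType.UPGRADE);
            admin.updateFeatures(Map.of("metadata.version", update), new UpdateFeaturesOptions())
                 .all()
                 .get();
        }
    }
}
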
DescribeCluster API (Key: 60):
Requests:
DescribeCluster Request (Version: 0) => include_cluster_authorized_operations _tagged_fields 
  include_cluster_authorized_operations => BOOLEAN

Request header version: 2

FieldDescription
include_cluster_authorized_operationsWhether to include cluster authorized operations.
_tagged_fieldsThe tagged fields
DescribeCluster Request (Version: 1) => include_cluster_authorized_operations endpoint_type _tagged_fields 
  include_cluster_authorized_operations => BOOLEAN
  endpoint_type => INT8

Request header version: 2

FieldDescription
include_cluster_authorized_operationsWhether to include cluster authorized operations.
endpoint_typeThe endpoint type to describe. 1=brokers, 2=controllers.
_tagged_fieldsThe tagged fields
DescribeCluster Request (Version: 2) => include_cluster_authorized_operations endpoint_type include_fenced_brokers _tagged_fields 
  include_cluster_authorized_operations => BOOLEAN
  endpoint_type => INT8
  include_fenced_brokers => BOOLEAN

Request header version: 2

FieldDescription
include_cluster_authorized_operationsWhether to include cluster authorized operations.
endpoint_typeThe endpoint type to describe. 1=brokers, 2=controllers.
include_fenced_brokersWhether to include fenced brokers when listing brokers.
_tagged_fieldsThe tagged fields
Responses:
DescribeCluster Response (Version: 0) => throttle_time_ms error_code error_message cluster_id controller_id [brokers] cluster_authorized_operations _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => COMPACT_NULLABLE_STRING
  cluster_id => COMPACT_STRING
  controller_id => INT32
  brokers => broker_id host port rack _tagged_fields 
    broker_id => INT32
    host => COMPACT_STRING
    port => INT32
    rack => COMPACT_NULLABLE_STRING
  cluster_authorized_operations => INT32

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top-level error code, or 0 if there was no error.
error_messageThe top-level error message, or null if there was no error.
cluster_idThe cluster ID that the responding broker belongs to.
controller_idThe ID of the controller broker.
brokersEach broker in the response.
broker_idThe broker ID.
hostThe broker hostname.
portThe broker port.
rackThe rack of the broker, or null if it has not been assigned to a rack.
_tagged_fieldsThe tagged fields
cluster_authorized_operations32-bit bitfield to represent authorized operations for this cluster.
_tagged_fieldsThe tagged fields
DescribeCluster Response (Version: 1) => throttle_time_ms error_code error_message endpoint_type cluster_id controller_id [brokers] cluster_authorized_operations _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => COMPACT_NULLABLE_STRING
  endpoint_type => INT8
  cluster_id => COMPACT_STRING
  controller_id => INT32
  brokers => broker_id host port rack _tagged_fields 
    broker_id => INT32
    host => COMPACT_STRING
    port => INT32
    rack => COMPACT_NULLABLE_STRING
  cluster_authorized_operations => INT32

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top-level error code, or 0 if there was no error.
error_messageThe top-level error message, or null if there was no error.
endpoint_typeThe endpoint type that was described. 1=brokers, 2=controllers.
cluster_idThe cluster ID that the responding broker belongs to.
controller_idThe ID of the controller broker.
brokersEach broker in the response.
broker_idThe broker ID.
hostThe broker hostname.
portThe broker port.
rackThe rack of the broker, or null if it has not been assigned to a rack.
_tagged_fieldsThe tagged fields
cluster_authorized_operations32-bit bitfield to represent authorized operations for this cluster.
_tagged_fieldsThe tagged fields
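
A short sketch of this API from the Java Admin client follows (on brokers that do not support DescribeCluster, the client may fall back to a Metadata request); localhost:9092 is an assumed address.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

import java.util.Properties;

public class DescribeClusterExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Expose the cluster id, the controller and the broker list
            // returned by the DescribeCluster response.
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("cluster id: " + cluster.clusterId().get());
            System.out.println("controller: " + cluster.controller().get());
            cluster.nodes().get().forEach(node ->
                    System.out.printf("broker %d at %s:%d%n", node.id(), node.host(), node.port()));
        }
    }
}
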
DescribeProducers API (Key: 61):
Requests:
DescribeProducers Request (Version: 0) => [topics] _tagged_fields 
  topics => name [partition_indexes] _tagged_fields 
    name => COMPACT_STRING
    partition_indexes => INT32

Request header version: 2

FieldDescription
topicsThe topics to list producers for.
nameThe topic name.
partition_indexesThe indexes of the partitions to list producers for.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
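
As a usage sketch (not a response schema), the Java Admin client maps describeProducers() onto this API; it reports the producer state currently tracked by the partition leader, which is useful when investigating hanging transactions. The broker address and partition are placeholder assumptions.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.TopicPartition;

import java.util.List;
import java.util.Properties;

public class DescribeProducersExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Print the active producer state (producer id, epoch, open transaction offset, ...)
            // known to the leader of the assumed partition orders-0.
            System.out.println(admin.describeProducers(List.of(new TopicPartition("orders", 0)))
                                    .all()
                                    .get());
        }
    }
}
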
UnregisterBroker API (Key: 64):
Requests:
UnregisterBroker Request (Version: 0) => broker_id _tagged_fields 
  broker_id => INT32

Request header version: 2

FieldDescription
broker_idThe broker ID to unregister.
_tagged_fieldsThe tagged fields
Responses:
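
UnregisterBroker is exposed through Admin#unregisterBroker() and removes a decommissioned broker's registration from a KRaft cluster. A minimal sketch, with the broker address and id as placeholder assumptions:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;

public class UnregisterBrokerExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Remove the registration of broker 7, which should already be shut down
            // and typically has had its partition replicas reassigned elsewhere.
            admin.unregisterBroker(7).all().get();
        }
    }
}
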
DescribeTransactions API (Key: 65):
Requests:
DescribeTransactions Request (Version: 0) => [transactional_ids] _tagged_fields 
  transactional_ids => COMPACT_STRING

Request header version: 2

FieldDescription
transactional_idsArray of transactionalIds to include in describe results. If empty, then no results will be returned.
_tagged_fieldsThe tagged fields
Responses:
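
DescribeTransactions backs Admin#describeTransactions(); given a transactional id it returns the coordinator's view of that transaction (state, producer id, participating partitions). A minimal sketch with a placeholder id and address:

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.List;
import java.util.Properties;

public class DescribeTransactionsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // Look up the transaction state kept by the coordinator for this transactional id.
            System.out.println(admin.describeTransactions(List.of("orders-producer-1"))
                                    .description("orders-producer-1")
                                    .get());
        }
    }
}
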
ListTransactions API (Key: 66):
Requests:
ListTransactions Request (Version: 0) => [state_filters] [producer_id_filters] _tagged_fields 
  state_filters => COMPACT_STRING
  producer_id_filters => INT64

Request header version: 2

FieldDescription
state_filtersThe transaction states to filter by: if empty, all transactions are returned; if non-empty, then only transactions matching one of the filtered states will be returned.
producer_id_filtersThe producerIds to filter by: if empty, all transactions will be returned; if non-empty, only transactions which match one of the filtered producerIds will be returned.
_tagged_fieldsThe tagged fields
ListTransactions Request (Version: 1) => [state_filters] [producer_id_filters] duration_filter _tagged_fields 
  state_filters => COMPACT_STRING
  producer_id_filters => INT64
  duration_filter => INT64

Request header version: 2

FieldDescription
state_filtersThe transaction states to filter by: if empty, all transactions are returned; if non-empty, then only transactions matching one of the filtered states will be returned.
producer_id_filtersThe producerIds to filter by: if empty, all transactions will be returned; if non-empty, only transactions which match one of the filtered producerIds will be returned.
duration_filterDuration (in millis) to filter by: if < 0, all transactions will be returned; otherwise, only transactions running longer than this duration will be returned.
_tagged_fieldsThe tagged fields
Responses:
ListTransactions Response (Version: 0) => throttle_time_ms error_code [unknown_state_filters] [transaction_states] _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  unknown_state_filters => COMPACT_STRING
  transaction_states => transactional_id producer_id transaction_state _tagged_fields 
    transactional_id => COMPACT_STRING
    producer_id => INT64
    transaction_state => COMPACT_STRING

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe error code, or 0 if there was no error.
unknown_state_filtersSet of state filters provided in the request which were unknown to the transaction coordinator.
transaction_statesThe current state of the transaction for the transactional id.
transactional_idThe transactional id.
producer_idThe producer id.
transaction_stateThe current transaction state of the producer.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
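
ListTransactions backs Admin#listTransactions(). The sketch below, assuming a broker at localhost:9092, prints every transaction known to the transaction coordinators; state, producer-id and duration filters can be supplied through ListTransactionsOptions instead of leaving the filter arrays empty.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TransactionListing;

import java.util.Collection;
import java.util.Properties;

public class ListTransactionsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // With no filters set, all transactions are returned (empty state_filters
            // and producer_id_filters in the request above).
            Collection<TransactionListing> listings = admin.listTransactions().all().get();
            for (TransactionListing listing : listings) {
                System.out.printf("%s (producer %d) is %s%n",
                        listing.transactionalId(), listing.producerId(), listing.state());
            }
        }
    }
}
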
ConsumerGroupHeartbeat API (Key: 68):
Requests:
ConsumerGroupHeartbeat Request (Version: 0) => group_id member_id member_epoch instance_id rack_id rebalance_timeout_ms [subscribed_topic_names] server_assignor [topic_partitions] _tagged_fields 
  group_id => COMPACT_STRING
  member_id => COMPACT_STRING
  member_epoch => INT32
  instance_id => COMPACT_NULLABLE_STRING
  rack_id => COMPACT_NULLABLE_STRING
  rebalance_timeout_ms => INT32
  subscribed_topic_names => COMPACT_STRING
  server_assignor => COMPACT_NULLABLE_STRING
  topic_partitions => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32

Request header version: 2

FieldDescription
group_idThe group identifier.
member_idThe member id generated by the consumer. The member id must be kept during the entire lifetime of the consumer process.
member_epochThe current member epoch; 0 to join the group; -1 to leave the group; -2 to indicate that the static member will rejoin.
instance_idnull if not provided or if it didn't change since the last heartbeat; the instance Id otherwise.
rack_idnull if not provided or if it didn't change since the last heartbeat; the rack ID of consumer otherwise.
rebalance_timeout_ms-1 if it didn't change since the last heartbeat; the maximum time in milliseconds that the coordinator will wait on the member to revoke its partitions otherwise.
subscribed_topic_namesnull if it didn't change since the last heartbeat; the subscribed topic names otherwise.
server_assignornull if not used or if it didn't change since the last heartbeat; the server side assignor to use otherwise.
topic_partitionsnull if it didn't change since the last heartbeat; the partitions owned by the member.
topic_idThe topic ID.
partitionsThe partitions.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
ConsumerGroupHeartbeat Request (Version: 1) => group_id member_id member_epoch instance_id rack_id rebalance_timeout_ms [subscribed_topic_names] subscribed_topic_regex server_assignor [topic_partitions] _tagged_fields 
  group_id => COMPACT_STRING
  member_id => COMPACT_STRING
  member_epoch => INT32
  instance_id => COMPACT_NULLABLE_STRING
  rack_id => COMPACT_NULLABLE_STRING
  rebalance_timeout_ms => INT32
  subscribed_topic_names => COMPACT_STRING
  subscribed_topic_regex => COMPACT_NULLABLE_STRING
  server_assignor => COMPACT_NULLABLE_STRING
  topic_partitions => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32

Request header version: 2

FieldDescription
group_idThe group identifier.
member_idThe member id generated by the consumer. The member id must be kept during the entire lifetime of the consumer process.
member_epochThe current member epoch; 0 to join the group; -1 to leave the group; -2 to indicate that the static member will rejoin.
instance_idnull if not provided or if it didn't change since the last heartbeat; the instance Id otherwise.
rack_idnull if not provided or if it didn't change since the last heartbeat; the rack ID of consumer otherwise.
rebalance_timeout_ms-1 if it didn't change since the last heartbeat; the maximum time in milliseconds that the coordinator will wait on the member to revoke its partitions otherwise.
subscribed_topic_namesnull if it didn't change since the last heartbeat; the subscribed topic names otherwise.
subscribed_topic_regexnull if it didn't change since the last heartbeat; the subscribed topic regex otherwise.
server_assignornull if not used or if it didn't change since the last heartbeat; the server side assignor to use otherwise.
topic_partitionsnull if it didn't change since the last heartbeat; the partitions owned by the member.
topic_idThe topic ID.
partitionsThe partitions.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
ConsumerGroupHeartbeat Response (Version: 0) => throttle_time_ms error_code error_message member_id member_epoch heartbeat_interval_ms assignment _tagged_fields 
  throttle_time_ms => INT32
  error_code => INT16
  error_message => COMPACT_NULLABLE_STRING
  member_id => COMPACT_NULLABLE_STRING
  member_epoch => INT32
  heartbeat_interval_ms => INT32
  assignment => [topic_partitions] _tagged_fields 
    topic_partitions => topic_id [partitions] _tagged_fields 
      topic_id => UUID
      partitions => INT32

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
error_codeThe top-level error code, or 0 if there was no error.
error_messageThe top-level error message, or null if there was no error.
member_idThe member id is generated by the consumer starting from version 1, while in version 0, it can be provided by users or generated by the group coordinator.
member_epochThe member epoch.
heartbeat_interval_msThe heartbeat interval in milliseconds.
assignmentnull if not provided; the assignment otherwise.
topic_partitionsThe partitions assigned to the member that can be used immediately.
topic_idThe topic ID.
partitionsThe partitions.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
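
Applications never send ConsumerGroupHeartbeat themselves; the Java consumer does so in the background once it is configured for the new consumer rebalance protocol. A minimal sketch, assuming a broker at localhost:9092 and a topic named "orders":

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class NewProtocolConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-processors");        // assumed group
        props.put("group.protocol", "consumer"); // opt in to the heartbeat-driven protocol
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // assumed topic
            // The consumer joins the group and keeps its membership and assignment
            // current by sending ConsumerGroupHeartbeat requests in the background.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
        }
    }
}
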
ConsumerGroupDescribe API (Key: 69):
Requests:
ConsumerGroupDescribe Request (Version: 0) => [group_ids] include_authorized_operations _tagged_fields 
  group_ids => COMPACT_STRING
  include_authorized_operations => BOOLEAN

Request header version: 2

FieldDescription
group_idsThe ids of the groups to describe.
include_authorized_operationsWhether to include authorized operations.
_tagged_fieldsThe tagged fields
ConsumerGroupDescribe Request (Version: 1) => [group_ids] include_authorized_operations _tagged_fields 
  group_ids => COMPACT_STRING
  include_authorized_operations => BOOLEAN

Request header version: 2

FieldDescription
group_idsThe ids of the groups to describe.
include_authorized_operationsWhether to include authorized operations.
_tagged_fieldsThe tagged fields
Responses:
ConsumerGroupDescribe Response (Version: 0) => throttle_time_ms [groups] _tagged_fields 
  throttle_time_ms => INT32
  groups => error_code error_message group_id group_state group_epoch assignment_epoch assignor_name [members] authorized_operations _tagged_fields 
    error_code => INT16
    error_message => COMPACT_NULLABLE_STRING
    group_id => COMPACT_STRING
    group_state => COMPACT_STRING
    group_epoch => INT32
    assignment_epoch => INT32
    assignor_name => COMPACT_STRING
    members => member_id instance_id rack_id member_epoch client_id client_host [subscribed_topic_names] subscribed_topic_regex assignment target_assignment _tagged_fields 
      member_id => COMPACT_STRING
      instance_id => COMPACT_NULLABLE_STRING
      rack_id => COMPACT_NULLABLE_STRING
      member_epoch => INT32
      client_id => COMPACT_STRING
      client_host => COMPACT_STRING
      subscribed_topic_names => COMPACT_STRING
      subscribed_topic_regex => COMPACT_NULLABLE_STRING
      assignment => [topic_partitions] _tagged_fields 
        topic_partitions => topic_id topic_name [partitions] _tagged_fields 
          topic_id => UUID
          topic_name => COMPACT_STRING
          partitions => INT32
      target_assignment => [topic_partitions] _tagged_fields 
        topic_partitions => topic_id topic_name [partitions] _tagged_fields 
          topic_id => UUID
          topic_name => COMPACT_STRING
          partitions => INT32
    authorized_operations => INT32

Response header version: 1

FieldDescription
throttle_time_msThe duration in milliseconds for which the request was throttled due to a quota violation, or zero if the request did not violate any quota.
groupsEach described group.
error_codeThe describe error, or 0 if there was no error.
error_messageThe top-level error message, or null if there was no error.
group_idThe group ID string.
group_stateThe group state string, or the empty string.
group_epochThe group epoch.
assignment_epochThe assignment epoch.
assignor_nameThe selected assignor.
membersThe members.
member_idThe member ID.
instance_idThe member instance ID.
rack_idThe member rack ID.
member_epochThe current member epoch.
client_idThe client ID.
client_hostThe client host.
subscribed_topic_namesThe subscribed topic names.
subscribed_topic_regexThe subscribed topic regex, or null if not provided.
assignmentThe current assignment.
topic_partitionsThe topic-partitions assigned to the member.
topic_idThe topic ID.
topic_nameThe topic name.
partitionsThe partitions.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
target_assignmentThe target assignment.
_tagged_fieldsThe tagged fields
authorized_operations32-bit bitfield to represent authorized operations for this group.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
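
Applications normally reach this API through Admin#describeConsumerGroups(), which may use ConsumerGroupDescribe for groups on the new consumer rebalance protocol. A minimal sketch, assuming a broker at localhost:9092 and a group named "orders-processors":

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;

import java.util.List;
import java.util.Properties;

public class DescribeGroupExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            ConsumerGroupDescription group = admin.describeConsumerGroups(List.of("orders-processors"))
                    .describedGroups()
                    .get("orders-processors")
                    .get();
            System.out.println(group); // group id, state, coordinator, members, ...
            group.members().forEach(member ->
                    System.out.printf("%s on %s owns %s%n",
                            member.consumerId(),
                            member.host(),
                            member.assignment().topicPartitions()));
        }
    }
}
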
GetTelemetrySubscriptions API (Key: 71):
Requests:
GetTelemetrySubscriptions Request (Version: 0) => client_instance_id _tagged_fields 
  client_instance_id => UUID

Request header version: 2

FieldDescription
client_instance_idUnique id for this client instance; must be set to the zero UUID on the first request.
_tagged_fieldsThe tagged fields
Responses:
PushTelemetry API (Key: 72):
Requests:
PushTelemetry Request (Version: 0) => client_instance_id subscription_id terminating compression_type metrics _tagged_fields 
  client_instance_id => UUID
  subscription_id => INT32
  terminating => BOOLEAN
  compression_type => INT8
  metrics => COMPACT_BYTES

Request header version: 2

FieldDescription
client_instance_idUnique id for this client instance.
subscription_idUnique identifier for the current subscription.
terminatingClient is terminating the connection.
compression_typeCompression codec used to compress the metrics.
metricsMetrics encoded in OpenTelemetry MetricsData v1 protobuf format.
_tagged_fieldsThe tagged fields
Responses:
ListClientMetricsResources API (Key: 74):
Requests:
ListClientMetricsResources Request (Version: 0) => _tagged_fields 

Request header version: 2

FieldDescription
_tagged_fieldsThe tagged fields
Responses:
DescribeTopicPartitions API (Key: 75):
Requests:
DescribeTopicPartitions Request (Version: 0) => [topics] response_partition_limit cursor _tagged_fields 
  topics => name _tagged_fields 
    name => COMPACT_STRING
  response_partition_limit => INT32
  cursor => topic_name partition_index _tagged_fields 
    topic_name => COMPACT_STRING
    partition_index => INT32

Request header version: 2

FieldDescription
topicsThe topics to fetch details for.
nameThe topic name.
_tagged_fieldsThe tagged fields
response_partition_limitThe maximum number of partitions included in the response.
cursorThe first topic and partition index to fetch details for.
topic_nameThe name for the first topic to process.
partition_indexThe partition index to start with.
_tagged_fieldsThe tagged fields
_tagged_fieldsThe tagged fields
Responses:
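
DescribeTopicPartitions lets clients page through topics with very many partitions, using the response_partition_limit and cursor fields above. From the Java Admin client, Admin#describeTopics() is the entry point and may use this API on brokers that support it, handling the paging cursor internally. A minimal sketch, assuming localhost:9092 and a topic named "orders":

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.List;
import java.util.Properties;

public class DescribeTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            TopicDescription description = admin.describeTopics(List.of("orders"))
                    .allTopicNames()
                    .get()
                    .get("orders");
            // Print the leader and replica set of each partition.
            description.partitions().forEach(p ->
                    System.out.printf("partition %d, leader %s, replicas %s%n",
                            p.partition(), p.leader(), p.replicas()));
        }
    }
}
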
ShareGroupHeartbeat API (Key: 76):
Requests:
ShareGroupHeartbeat Request (Version: 0) => group_id member_id member_epoch rack_id [subscribed_topic_names] _tagged_fields 
  group_id => COMPACT_STRING
  member_id => COMPACT_STRING
  member_epoch => INT32
  rack_id => COMPACT_NULLABLE_STRING
  subscribed_topic_names => COMPACT_STRING

This version of the request is unstable.

Request header version: 2

FieldDescription
group_idThe group identifier.
member_idThe member id.
member_epochThe current member epoch; 0 to join the group; -1 to leave the group.
rack_idnull if not provided or if it didn't change since the last heartbeat; the rack ID of consumer otherwise.
subscribed_topic_namesnull if it didn't change since the last heartbeat; the subscribed topic names otherwise.
_tagged_fieldsThe tagged fields
Responses:
ShareGroupDescribe API (Key: 77):
Requests:
ShareGroupDescribe Request (Version: 0) => [group_ids] include_authorized_operations _tagged_fields 
  group_ids => COMPACT_STRING
  include_authorized_operations => BOOLEAN

This version of the request is unstable.

Request header version: 2

FieldDescription
group_idsThe ids of the groups to describe.
include_authorized_operationsWhether to include authorized operations.
_tagged_fieldsThe tagged fields
Responses:
ShareFetch API (Key: 78):
Requests:
ShareFetch Request (Version: 0) => group_id member_id share_session_epoch max_wait_ms min_bytes max_bytes [topics] [forgotten_topics_data] _tagged_fields 
  group_id => COMPACT_NULLABLE_STRING
  member_id => COMPACT_NULLABLE_STRING
  share_session_epoch => INT32
  max_wait_ms => INT32
  min_bytes => INT32
  max_bytes => INT32
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition_index partition_max_bytes [acknowledgement_batches] _tagged_fields 
      partition_index => INT32
      partition_max_bytes => INT32
      acknowledgement_batches => first_offset last_offset [acknowledge_types] _tagged_fields 
        first_offset => INT64
        last_offset => INT64
        acknowledge_types => INT8
  forgotten_topics_data => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => INT32

This version of the request is unstable.

Request header version: 2

Field | Description
group_id | The group identifier.
member_id | The member ID.
share_session_epoch | The current share session epoch: 0 to open a share session; -1 to close it; otherwise increments for consecutive requests.
max_wait_ms | The maximum time in milliseconds to wait for the response.
min_bytes | The minimum bytes to accumulate in the response.
max_bytes | The maximum bytes to fetch. See KIP-74 for cases where this limit may not be honored.
topics | The topics to fetch.
topic_id | The unique topic ID.
partitions | The partitions to fetch.
partition_index | The partition index.
partition_max_bytes | The maximum bytes to fetch from this partition. 0 when only acknowledgement with no fetching is required. See KIP-74 for cases where this limit may not be honored.
acknowledgement_batches | Record batches to acknowledge.
first_offset | First offset of batch of records to acknowledge.
last_offset | Last offset (inclusive) of batch of records to acknowledge.
acknowledge_types | Array of acknowledge types - 0:Gap,1:Accept,2:Release,3:Reject.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
forgotten_topics_data | The partitions to remove from this share session.
topic_id | The unique topic ID.
partitions | The partitions indexes to forget.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
ShareAcknowledge API (Key: 79):
Requests:
ShareAcknowledge Request (Version: 0) => group_id member_id share_session_epoch [topics] _tagged_fields 
  group_id => COMPACT_NULLABLE_STRING
  member_id => COMPACT_NULLABLE_STRING
  share_session_epoch => INT32
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition_index [acknowledgement_batches] _tagged_fields 
      partition_index => INT32
      acknowledgement_batches => first_offset last_offset [acknowledge_types] _tagged_fields 
        first_offset => INT64
        last_offset => INT64
        acknowledge_types => INT8

This version of the request is unstable.

Request header version: 2

Field | Description
group_id | The group identifier.
member_id | The member ID.
share_session_epoch | The current share session epoch: 0 to open a share session; -1 to close it; otherwise increments for consecutive requests.
topics | The topics containing records to acknowledge.
topic_id | The unique topic ID.
partitions | The partitions containing records to acknowledge.
partition_index | The partition index.
acknowledgement_batches | Record batches to acknowledge.
first_offset | First offset of batch of records to acknowledge.
last_offset | Last offset (inclusive) of batch of records to acknowledge.
acknowledge_types | Array of acknowledge types - 0:Gap,1:Accept,2:Release,3:Reject.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
AddRaftVoter API (Key: 80):
Requests:
AddRaftVoter Request (Version: 0) => cluster_id timeout_ms voter_id voter_directory_id [listeners] _tagged_fields 
  cluster_id => COMPACT_NULLABLE_STRING
  timeout_ms => INT32
  voter_id => INT32
  voter_directory_id => UUID
  listeners => name host port _tagged_fields 
    name => COMPACT_STRING
    host => COMPACT_STRING
    port => UINT16

Request header version: 2

Field | Description
cluster_id | The cluster id.
timeout_ms | The maximum time to wait for the request to complete before returning.
voter_id | The replica id of the voter getting added to the topic partition.
voter_directory_id | The directory id of the voter getting added to the topic partition.
listeners | The endpoints that can be used to communicate with the voter.
name | The name of the endpoint.
host | The hostname.
port | The port.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
RemoveRaftVoter API (Key: 81):
Requests:
RemoveRaftVoter Request (Version: 0) => cluster_id voter_id voter_directory_id _tagged_fields 
  cluster_id => COMPACT_NULLABLE_STRING
  voter_id => INT32
  voter_directory_id => UUID

Request header version: 2

Field | Description
cluster_id | The cluster id of the request.
voter_id | The replica id of the voter getting removed from the topic partition.
voter_directory_id | The directory id of the voter getting removed from the topic partition.
_tagged_fields | The tagged fields
Responses:
InitializeShareGroupState API (Key: 83):
Requests:
InitializeShareGroupState Request (Version: 0) => group_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition state_epoch start_offset _tagged_fields 
      partition => INT32
      state_epoch => INT32
      start_offset => INT64

This version of the request is unstable.

Request header version: 2

Field | Description
group_id | The group identifier.
topics | The data for the topics.
topic_id | The topic identifier.
partitions | The data for the partitions.
partition | The partition index.
state_epoch | The state epoch for this share-partition.
start_offset | The share-partition start offset, or -1 if the start offset is not being initialized.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
ReadShareGroupState API (Key: 84):
Requests:
ReadShareGroupState Request (Version: 0) => group_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition leader_epoch _tagged_fields 
      partition => INT32
      leader_epoch => INT32

This version of the request is unstable.

Request header version: 2

Field | Description
group_id | The group identifier.
topics | The data for the topics.
topic_id | The topic identifier.
partitions | The data for the partitions.
partition | The partition index.
leader_epoch | The leader epoch of the share-partition.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
WriteShareGroupState API (Key: 85):
Requests:
WriteShareGroupState Request (Version: 0) => group_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition state_epoch leader_epoch start_offset [state_batches] _tagged_fields 
      partition => INT32
      state_epoch => INT32
      leader_epoch => INT32
      start_offset => INT64
      state_batches => first_offset last_offset delivery_state delivery_count _tagged_fields 
        first_offset => INT64
        last_offset => INT64
        delivery_state => INT8
        delivery_count => INT16

This version of the request is unstable.

Request header version: 2

Field | Description
group_id | The group identifier.
topics | The data for the topics.
topic_id | The topic identifier.
partitions | The data for the partitions.
partition | The partition index.
state_epoch | The state epoch for this share-partition.
leader_epoch | The leader epoch of the share-partition.
start_offset | The share-partition start offset, or -1 if the start offset is not being written.
state_batches | The state batches for the share-partition.
first_offset | The base offset of this state batch.
last_offset | The last offset of this state batch.
delivery_state | The state - 0:Available,2:Acked,4:Archived.
delivery_count | The delivery count.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
DeleteShareGroupState API (Key: 86):
Requests:
DeleteShareGroupState Request (Version: 0) => group_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition _tagged_fields 
      partition => INT32

This version of the request is unstable.

Request header version: 2

Field | Description
group_id | The group identifier.
topics | The data for the topics.
topic_id | The topic identifier.
partitions | The data for the partitions.
partition | The partition index.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:
ReadShareGroupStateSummary API (Key: 87):
Requests:
ReadShareGroupStateSummary Request (Version: 0) => group_id [topics] _tagged_fields 
  group_id => COMPACT_STRING
  topics => topic_id [partitions] _tagged_fields 
    topic_id => UUID
    partitions => partition leader_epoch _tagged_fields 
      partition => INT32
      leader_epoch => INT32

This version of the request is unstable.

Request header version: 2

Field | Description
group_id | The group identifier.
topics | The data for the topics.
topic_id | The topic identifier.
partitions | The data for the partitions.
partition | The partition index.
leader_epoch | The leader epoch of the share-partition.
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
_tagged_fields | The tagged fields
Responses:

Some Common Philosophical Questions

Some people have asked why we don’t use HTTP. There are a number of reasons; the best is that client implementors can make use of some of the more advanced TCP features: the ability to multiplex requests, the ability to simultaneously poll many connections, etc. We have also found HTTP libraries in many languages to be surprisingly shabby.

Others have asked if maybe we shouldn’t support many different protocols. Prior experience with this was that it makes it very hard to add and test new features if they have to be ported across many protocol implementations. Our feeling is that most users don’t really see multiple protocols as a feature; they just want a good reliable client in the language of their choice.

Another question is why we don’t adopt XMPP, STOMP, AMQP or an existing protocol. The answer to this varies by protocol, but in general the problem is that the protocol does determine large parts of the implementation and we couldn’t do what we are doing if we didn’t have control over the protocol. Our belief is that it is possible to do better than existing messaging systems have in providing a truly distributed messaging system, and to do this we need to build something that works differently.

A final question is why we don’t use a system like Protocol Buffers or Thrift to define our request messages. These packages excel at helping you manage lots and lots of serialized messages. However we have only a few messages. Support across languages is somewhat spotty (depending on the package). The mapping between the binary log format and the wire protocol is also something we manage somewhat carefully, and this would not be possible with these systems. Finally, we prefer the style of versioning APIs explicitly and checking compatibility against those versions, rather than inferring new values as nulls, as it allows more nuanced control of compatibility.

5 - Implementation

5.1 - Network Layer


The network layer is a fairly straight-forward NIO server, and will not be described in great detail. The sendfile implementation is done by giving the TransferableRecords interface a writeTo method. This allows the file-backed message set to use the more efficient transferTo implementation instead of an in-process buffered write. The threading model is a single acceptor thread and N processor threads which handle a fixed number of connections each. This design has been pretty thoroughly tested elsewhere and found to be simple to implement and fast. The protocol is kept quite simple to allow for future implementation of clients in other languages.

5.2 - Messages


Messages consist of a variable-length header, a variable-length opaque key byte array and a variable-length opaque value byte array. The format of the header is described in the following section. Leaving the key and value opaque is the right decision: there is a great deal of progress being made on serialization libraries right now, and any particular choice is unlikely to be right for all uses. Needless to say a particular application using Kafka would likely mandate a particular serialization type as part of its usage. The RecordBatch interface is simply an iterator over messages with specialized methods for bulk reading and writing to an NIO Channel.
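
To make the key/value opacity concrete, here is a minimal, hypothetical producer snippet using the Java client that treats both as raw byte arrays and leaves the choice of serialization entirely to the application; the topic name and payload are placeholders, not anything mandated by Kafka.

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OpaqueBytesProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        try (Producer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // The application decides what these bytes mean; the broker never inspects them.
            byte[] key = "user-42".getBytes(StandardCharsets.UTF_8);
            byte[] value = "{\"event\":\"login\"}".getBytes(StandardCharsets.UTF_8);
            producer.send(new ProducerRecord<>("my-topic", key, value));
        }
    }
}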

5.3 - Message Format


Messages (aka Records) are always written in batches. The technical term for a batch of messages is a record batch, and a record batch contains one or more records. In the degenerate case, we could have a record batch containing a single record. Record batches and records have their own headers. The format of each is described below.

Record Batch

The following is the on-disk format of a RecordBatch.

baseOffset: int64
batchLength: int32
partitionLeaderEpoch: int32
magic: int8 (current magic value is 2)
crc: uint32
attributes: int16
    bit 0~2:
        0: no compression
        1: gzip
        2: snappy
        3: lz4
        4: zstd
    bit 3: timestampType
    bit 4: isTransactional (0 means not transactional)
    bit 5: isControlBatch (0 means not a control batch)
    bit 6: hasDeleteHorizonMs (0 means baseTimestamp is not set as the delete horizon for compaction)
    bit 7~15: unused
lastOffsetDelta: int32
baseTimestamp: int64
maxTimestamp: int64
producerId: int64
producerEpoch: int16
baseSequence: int32
recordsCount: int32
records: [Record]

Note that when compression is enabled, the compressed record data is serialized directly following the count of the number of records.

The CRC covers the data from the attributes to the end of the batch (i.e. all the bytes that follow the CRC). It is located after the magic byte, which means that clients must parse the magic byte before deciding how to interpret the bytes between the batch length and the magic byte. The partition leader epoch field is not included in the CRC computation to avoid the need to recompute the CRC when this field is assigned for every batch that is received by the broker. The CRC-32C (Castagnoli) polynomial is used for the computation.
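
As an illustration of the CRC coverage described above, the following sketch (an illustrative helper, not Kafka's own code) computes the checksum over everything that follows the crc field, assuming a ByteBuffer positioned at the start of a v2 record batch and using the JDK's built-in CRC-32C implementation:

import java.nio.ByteBuffer;
import java.util.zip.CRC32C;

// Relative to the start of the batch, the checksummed region begins after
// baseOffset(8) + batchLength(4) + partitionLeaderEpoch(4) + magic(1) + crc(4) = 21 bytes,
// i.e. at the attributes field.
static long computeBatchCrc(ByteBuffer batch) {
    ByteBuffer region = batch.duplicate();
    region.position(batch.position() + 21);
    CRC32C crc = new CRC32C();
    crc.update(region);   // consumes the remaining bytes of the duplicate only
    return crc.getValue();
}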

On compaction, we preserve the first and last offset/sequence numbers from the original batch when the log is cleaned. This is required in order to be able to restore the producer’s state when the log is reloaded. If we did not retain the last sequence number, for example, then after a partition leader failure, the producer might see an OutOfSequence error. The base sequence number must be preserved for duplicate checking (the broker checks incoming Produce requests for duplicates by verifying that the first and last sequence numbers of the incoming batch match the last from that producer). As a result, it is possible to have empty batches in the log when all the records in the batch are cleaned but batch is still retained in order to preserve a producer’s last sequence number. One oddity here is that the baseTimestamp field is not preserved during compaction, so it will change if the first record in the batch is compacted away.

Compaction may also modify the baseTimestamp if the record batch contains records with a null payload or aborted transaction markers. The baseTimestamp will be set to the timestamp of when those records should be deleted with the delete horizon attribute bit also set.

Control Batches

A control batch contains a single record called the control record. Control records should not be passed on to applications. Instead, they are used by consumers to filter out aborted transactional messages.

The key of a control record conforms to the following schema:

version: int16 (current version is 0)
type: int16 (0 indicates an abort marker, 1 indicates a commit)

The schema for the value of a control record is dependent on the type. The value is opaque to clients.
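
For example, a client-side helper that interprets a control record key according to this schema might look like the following sketch (the method name is illustrative; the buffer is read big-endian, ByteBuffer's default, which matches the protocol's network byte order):

import java.nio.ByteBuffer;

// Returns true for a commit marker, false for an abort marker.
static boolean isCommitMarker(ByteBuffer key) {
    short version = key.getShort();   // current version is 0
    short type = key.getShort();      // 0 indicates an abort marker, 1 indicates a commit
    if (version != 0) {
        throw new IllegalArgumentException("unknown control record key version: " + version);
    }
    return type == 1;
}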

Record

The on-disk format of each record is delineated below.

length: varint
attributes: int8
    bit 0~7: unused
timestampDelta: varlong
offsetDelta: varint
keyLength: varint
key: byte[]
valueLength: varint
value: byte[]
headersCount: varint
Headers => [Header]

Record Header

headerKeyLength: varint
headerKey: String
headerValueLength: varint
Value: byte[]

We use the same varint encoding as Protobuf. More information on the latter can be found here. The count of headers in a record is also encoded as a varint.
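
For reference, here is a minimal sketch of this varint scheme for a signed 32-bit value, written for illustration rather than taken from the Kafka code base: the value is zig-zag encoded and then emitted 7 bits at a time, least-significant group first, with the high bit of each byte marking that more bytes follow.

import java.io.ByteArrayOutputStream;

static byte[] writeVarint(int value) {
    int v = (value << 1) ^ (value >> 31);    // zig-zag: small magnitudes stay small
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    while ((v & 0xFFFFFF80) != 0) {
        out.write((v & 0x7F) | 0x80);        // 7 payload bits plus a continuation bit
        v >>>= 7;
    }
    out.write(v);
    return out.toByteArray();
}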

Old Message Format

Prior to Kafka 0.11, messages were transferred and stored in message sets. See Old Message Format for more details.

5.4 - Log


A log for a topic named “my-topic” with two partitions consists of two directories (namely my-topic-0 and my-topic-1) populated with data files containing the messages for that topic. The format of the log files is a sequence of “log entries”; each log entry is a 4 byte integer N storing the message length which is followed by the N message bytes. Each message is uniquely identified by a 64-bit integer offset giving the byte position of the start of this message in the stream of all messages ever sent to that topic on that partition. The on-disk format of each message is given below. Each log file is named with the offset of the first message it contains. So the first file created will be 00000000000000000000.log, and each additional file will have an integer name roughly S bytes from the previous file where S is the max log file size given in the configuration.
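
For example, assuming the zero-padded naming convention described above, the segment file name for a given base offset could be derived like this (an illustrative helper, not Kafka's actual code):

// 0      -> 00000000000000000000.log
// 368769 -> 00000000000000368769.log
static String segmentFileName(long baseOffset) {
    return String.format("%020d.log", baseOffset);
}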

The exact binary format for records is versioned and maintained as a standard interface so record batches can be transferred between producer, broker, and client without recopying or conversion when desirable. The previous section included details about the on-disk format of records.

The use of the message offset as the message id is unusual. Our original idea was to use a GUID generated by the producer, and maintain a mapping from GUID to offset on each broker. But since a consumer must maintain an ID for each server, the global uniqueness of the GUID provides no value. Furthermore, the complexity of maintaining the mapping from a random id to an offset requires a heavyweight index structure which must be synchronized with disk, essentially requiring a full persistent random-access data structure. Thus to simplify the lookup structure we decided to use a simple per-partition atomic counter which could be coupled with the partition id and node id to uniquely identify a message; this makes the lookup structure simpler, though multiple seeks per consumer request are still likely. However, once we settled on a counter, the jump to directly using the offset seemed natural: both, after all, are monotonically increasing integers unique to a partition. Since the offset is hidden from the consumer API this decision is ultimately an implementation detail and we went with the more efficient approach.

Writes

The log allows serial appends which always go to the last file. This file is rolled over to a fresh file when it reaches a configurable size (say 1GB). The log takes two configuration parameters: M, which gives the number of messages to write before forcing the OS to flush the file to disk, and S, which gives a number of seconds after which a flush is forced. This gives a durability guarantee of losing at most M messages or S seconds of data in the event of a system crash.
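
As a sketch of the flush policy just described (class and field names are hypothetical; the real broker wires this into its append path), the decision reduces to a counter and a timer:

class FlushPolicy {
    private final int maxMessages;      // M
    private final long maxIntervalMs;   // S, converted to milliseconds
    private int messagesSinceFlush = 0;
    private long lastFlushMs = System.currentTimeMillis();

    FlushPolicy(int maxMessages, long maxIntervalMs) {
        this.maxMessages = maxMessages;
        this.maxIntervalMs = maxIntervalMs;
    }

    // Called after each append; returns true when the caller should force a flush to disk.
    boolean shouldFlushAfterAppend() {
        messagesSinceFlush++;
        long now = System.currentTimeMillis();
        if (messagesSinceFlush >= maxMessages || now - lastFlushMs >= maxIntervalMs) {
            messagesSinceFlush = 0;
            lastFlushMs = now;
            return true;
        }
        return false;
    }
}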

Reads

Reads are done by giving the 64-bit logical offset of a message and an S-byte max chunk size. This will return an iterator over the messages contained in the S-byte buffer. S is intended to be larger than any single message, but in the event of an abnormally large message, the read can be retried multiple times, each time doubling the buffer size, until the message is read successfully. A maximum message and buffer size can be specified to make the server reject messages larger than some size, and to give a bound to the client on the maximum it needs to ever read to get a complete message. It is likely that the read buffer ends with a partial message; this is easily detected by the size delimiting.
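
The retry-with-a-larger-buffer behaviour can be sketched as follows (the ReadAt interface and the size limits are illustrative assumptions, not Kafka APIs):

import java.nio.ByteBuffer;

// A reader that fills the buffer from a given logical offset and reports whether
// at least one complete message fit.
interface ReadAt {
    boolean read(long offset, ByteBuffer into);
}

// Retry the read with a doubled buffer until a complete message fits, bounded by a
// configured maximum message size.
static ByteBuffer readComplete(ReadAt reader, long offset, int initialSize, int maxSize) {
    int size = initialSize;
    while (true) {
        ByteBuffer buf = ByteBuffer.allocate(size);
        if (reader.read(offset, buf)) {
            buf.flip();
            return buf;
        }
        if (size >= maxSize) {
            throw new IllegalStateException("message larger than max buffer size " + maxSize);
        }
        size = Math.min(size * 2, maxSize);
    }
}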

The actual process of reading from an offset requires first locating the log segment file in which the data is stored, calculating the file-specific offset from the global offset value, and then reading from that file offset. The search is done as a simple binary search variation against an in-memory range maintained for each file.
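
A minimal sketch of that lookup, assuming the per-file ranges are represented simply by a sorted array of segment base offsets (illustrative only):

import java.util.Arrays;

// Find the segment whose range contains targetOffset, given the sorted base offsets
// of all segment files.
static int segmentIndexFor(long[] sortedBaseOffsets, long targetOffset) {
    int idx = Arrays.binarySearch(sortedBaseOffsets, targetOffset);
    if (idx >= 0) {
        return idx;        // the target is exactly a segment's first offset
    }
    return -idx - 2;       // insertion point minus one: largest base offset <= target
}

The file-specific position is then targetOffset minus that segment's base offset, resolved against that segment's own index.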

The log provides the capability of getting the most recently written message to allow clients to start subscribing as of “right now”. This is also useful in the case the consumer fails to consume its data within its SLA-specified number of days. In this case when the client attempts to consume a non-existent offset it is given an OutOfRangeException and can either reset itself or fail as appropriate to the use case.

The following is the format of the results sent to the consumer.

MessageSetSend (fetch result)

total length     : 4 bytes
error code       : 2 bytes
message 1        : x bytes
...
message n        : x bytes


MultiMessageSetSend (multiFetch result)

total length       : 4 bytes
error code         : 2 bytes
messageSetSend 1
...
messageSetSend n

Deletes

Data is deleted one log segment at a time. The log manager applies two metrics to identify segments which are eligible for deletion: time and size. For time-based policies, the record timestamps are considered, with the largest timestamp in a segment file (order of records is not relevant) defining the retention time for the entire segment. Size-based retention is disabled by default. When enabled the log manager keeps deleting the oldest segment file until the overall size of the partition is within the configured limit again. If both policies are enabled at the same time, a segment that is eligible for deletion due to either policy will be deleted. To avoid locking reads while still allowing deletes that modify the segment list we use a copy-on-write style segment list implementation that provides consistent views to allow a binary search to proceed on an immutable static snapshot view of the log segments while deletes are progressing.

Guarantees

The log provides a configuration parameter M which controls the maximum number of messages that are written before forcing a flush to disk. On startup a log recovery process is run that iterates over all messages in the newest log segment and verifies that each message entry is valid. A message entry is valid if the sum of its size and offset is less than the length of the file AND the CRC32 of the message payload matches the CRC stored with the message. In the event corruption is detected the log is truncated to the last valid offset.

Note that two kinds of corruption must be handled: truncation in which an unwritten block is lost due to a crash, and corruption in which a nonsense block is ADDED to the file. The reason for this is that in general the OS makes no guarantee of the write order between the file inode and the actual block data so in addition to losing written data the file can gain nonsense data if the inode is updated with a new size but a crash occurs before the block containing that data is written. The CRC detects this corner case, and prevents it from corrupting the log (though the unwritten messages are, of course, lost).

5.5 - Distribution


Consumer Offset Tracking

The Kafka consumer tracks the maximum offset it has consumed in each partition and has the capability to commit offsets so that it can resume from those offsets in the event of a restart. Kafka provides the option to store all the offsets for a given consumer group in a designated broker (for that group) called the group coordinator, i.e., any consumer instance in that consumer group should send its offset commits and fetches to that group coordinator (broker). Consumer groups are assigned to coordinators based on their group names. A consumer can look up its coordinator by issuing a FindCoordinatorRequest to any Kafka broker and reading the FindCoordinatorResponse, which will contain the coordinator details. The consumer can then proceed to commit or fetch offsets from the coordinator broker. In case the coordinator moves, the consumer will need to rediscover the coordinator. Offset commits can be done automatically or manually by the consumer instance.

When the group coordinator receives an OffsetCommitRequest, it appends the request to a special compacted Kafka topic named __consumer_offsets. The broker sends a successful offset commit response to the consumer only after all the replicas of the offsets topic receive the offsets. In case the offsets fail to replicate within a configurable timeout, the offset commit will fail and the consumer may retry the commit after backing off. The brokers periodically compact the offsets topic since it only needs to maintain the most recent offset commit per partition. The coordinator also caches the offsets in an in-memory table in order to serve offset fetches quickly.

When the coordinator receives an offset fetch request, it simply returns the last committed offset vector from the offsets cache. In case the coordinator was just started or if it just became the coordinator for a new set of consumer groups (by becoming a leader for a partition of the offsets topic), it may need to load the offsets topic partition into the cache. In this case, the offset fetch will fail with a CoordinatorLoadInProgressException and the consumer may retry the OffsetFetchRequest after backing off.
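
From the client's point of view this machinery is hidden behind the consumer API; a minimal Java consumer that commits offsets manually, and therefore sends OffsetCommitRequests to its group coordinator, looks roughly like this (topic and group names are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("enable.auto.commit", "false");   // commit manually instead of automatically
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // process the record ...
                }
                consumer.commitSync();   // offset commit goes to the group coordinator
            }
        }
    }
}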

6 - Operations

6.1 - Basic Kafka Operations


This section will review the most common operations you will perform on your Kafka cluster. All of the tools reviewed in this section are available under the bin/ directory of the Kafka distribution and each tool will print details on all possible commandline options if it is run with no arguments.

Adding and removing topics

You have the option of either adding topics manually or having them be created automatically when data is first published to a non-existent topic. If topics are auto-created then you may want to tune the default topic configurations used for auto-created topics.

Topics are added and modified using the topic tool:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my_topic_name \
    --partitions 20 --replication-factor 3 --config x=y

The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3 then up to 2 servers can fail before you will lose access to your data. We recommend you use a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption.

The partition count controls how many logs the topic will be sharded into. There are several impacts of the partition count. First each partition must fit entirely on a single server. So if you have 20 partitions the full data set (and read and write load) will be handled by no more than 20 servers (not counting replicas). Finally the partition count impacts the maximum parallelism of your consumers. This is discussed in greater detail in the concepts section.

Each sharded partition log is placed into its own folder under the Kafka log directory. The name of such folders consists of the topic name, appended by a dash (-) and the partition id. Since a typical folder name can not be over 255 characters long, there will be a limitation on the length of topic names. We assume the number of partitions will not ever be above 100,000. Therefore, topic names cannot be longer than 249 characters. This leaves just enough room in the folder name for a dash and a potentially 5 digit long partition id.

The configurations added on the command line override the default settings the server has for things like the length of time data should be retained. The complete set of per-topic configurations is documented here.

Modifying topics

You can change the configuration or partitioning of a topic using the same topic tool.

To add partitions you can do

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic my_topic_name \
    --partitions 40

Be aware that one use case for partitions is to semantically partition data, and adding partitions doesn’t change the partitioning of existing data, so this may disturb consumers if they rely on that partition. That is, if data is partitioned by hash(key) % number_of_partitions, then this partitioning will potentially be shuffled by adding partitions, but Kafka will not attempt to automatically redistribute data in any way. A small illustration of why the placement changes follows below.
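
To see why, consider a simplified stand-in for hash(key) % number_of_partitions (the real default partitioner hashes the key bytes rather than using hashCode, but any hash-mod scheme behaves the same way):

// Illustrative only: not Kafka's partitioner.
static int partitionFor(String key, int numPartitions) {
    return (key.hashCode() & 0x7fffffff) % numPartitions;
}

// partitionFor("user-42", 20) and partitionFor("user-42", 40) will generally differ,
// so records for the same key written before and after the partition increase can
// end up in different partitions.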

To add configs:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my_topic_name --alter --add-config x=y

To remove a config:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --entity-name my_topic_name --alter --delete-config x

And finally deleting a topic:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic my_topic_name

Kafka does not currently support reducing the number of partitions for a topic.

Instructions for changing the replication factor of a topic can be found here.

Graceful shutdown

The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter cases Kafka supports a more graceful mechanism for stopping a server than just killing it. When a server is stopped gracefully it has two optimizations it will take advantage of:

  1. It will sync all its logs to disk to avoid needing to do any log recovery when it restarts (i.e. validating the checksum for all messages in the tail of the log). Log recovery takes time so this speeds up intentional restarts.
  2. It will migrate any partitions the server is the leader for to other replicas prior to shutting down. This will make the leadership transfer faster and minimize the time each partition is unavailable to a few milliseconds. Syncing the logs will happen automatically whenever the server is stopped other than by a hard kill, but the controlled leadership migration requires using a special setting:
controlled.shutdown.enable=true

Note that controlled shutdown will only succeed if all the partitions hosted on the broker have replicas (i.e. the replication factor is greater than 1 and at least one of these replicas is alive). This is generally what you want since shutting down the last replica would make that topic partition unavailable.

Balancing leadership

Whenever a broker stops or crashes, leadership for that broker’s partitions transfers to other replicas. When the broker is restarted it will only be a follower for all its partitions, meaning it will not be used for client reads and writes.

To avoid this imbalance, Kafka has a notion of preferred replicas. If the list of replicas for a partition is 1,5,9 then node 1 is preferred as the leader to either node 5 or 9 because it is earlier in the replica list. By default the Kafka cluster will try to restore leadership to the preferred replicas. This behaviour is configured with:

auto.leader.rebalance.enable=true

You can also set this to false, but you will then need to manually restore leadership to the restored replicas by running the command:

$ bin/kafka-leader-election.sh --bootstrap-server localhost:9092 --election-type preferred --all-topic-partitions

Balancing Replicas Across Racks

The rack awareness feature spreads replicas of the same partition across different racks. This extends the guarantees Kafka provides for broker-failure to cover rack-failure, limiting the risk of data loss should all the brokers on a rack fail at once. The feature can also be applied to other broker groupings such as availability zones in EC2.

You can specify that a broker belongs to a particular rack by adding a property to the broker config:

broker.rack=my-rack-id

When a topic is created, modified or replicas are redistributed, the rack constraint will be honoured, ensuring replicas span as many racks as they can (a partition will span min(#racks, replication-factor) different racks).

The algorithm used to assign replicas to brokers ensures that the number of leaders per broker will be constant, regardless of how brokers are distributed across racks. This ensures balanced throughput.

However if racks are assigned different numbers of brokers, the assignment of replicas will not be even. Racks with fewer brokers will get more replicas, meaning they will use more storage and put more resources into replication. Hence it is sensible to configure an equal number of brokers per rack.

Mirroring data between clusters & Geo-replication

Kafka administrators can define data flows that cross the boundaries of individual Kafka clusters, data centers, or geographical regions. Please refer to the section on Geo-Replication for further information.

Checking consumer position

Sometimes it’s useful to see the position of your consumers. We have a tool that will show the position of all consumers in a consumer group as well as how far behind the end of the log they are. Running this tool on a consumer group named my-group consuming a topic named my-topic looks like this:

$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG        CONSUMER-ID                                       HOST                           CLIENT-ID
my-topic                       0          2               4               2          consumer-1-029af89c-873c-4751-a720-cefd41a669d6   /127.0.0.1                     consumer-1
my-topic                       1          2               3               1          consumer-1-029af89c-873c-4751-a720-cefd41a669d6   /127.0.0.1                     consumer-1
my-topic                       2          2               3               1          consumer-2-42c1abd4-e3b2-425d-a8bb-e1ea49b29bb2   /127.0.0.1                     consumer-2

Managing Consumer Groups

With the ConsumerGroupCommand tool, we can list, describe, or delete the consumer groups. The consumer group can be deleted manually, or automatically when the last committed offset for that group expires. Manual deletion works only if the group does not have any active members. For example, to list all consumer groups across all topics:

$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
test-consumer-group

To view offsets, as mentioned earlier, we “describe” the consumer group like this:

$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                    HOST            CLIENT-ID
topic3          0          241019          395308          154289          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
topic2          1          520678          803288          282610          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
topic3          1          241018          398817          157799          consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1      consumer2
topic1          0          854144          855809          1665            consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1
topic2          0          460537          803290          342753          consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1      consumer1
topic3          2          243655          398812          155157          consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1      consumer4

Note that if the consumer group uses the consumer protocol, the admin client needs DESCRIBE access to all the topics used in the group (topics the members are subscribed to). In contrast, the classic protocol does not require DESCRIBE authorization on all topics. There are a number of additional “describe” options that can be used to provide more detailed information about a consumer group:

  • --members: This option provides the list of all active members in the consumer group.

    $ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members
    

    CONSUMER-ID                                     HOST         CLIENT-ID    #PARTITIONS
    consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1  /127.0.0.1   consumer1    2
    consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0  /127.0.0.1   consumer4    1
    consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4  /127.0.0.1   consumer2    3
    consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee  /127.0.0.1   consumer3    0

  • --members --verbose: On top of the information reported by the “--members” option above, this option also provides the partitions assigned to each member.

    $ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --members --verbose
    

    CONSUMER-ID                                     HOST         CLIENT-ID    #PARTITIONS   ASSIGNMENT
    consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1  /127.0.0.1   consumer1    2             topic1(0), topic2(0)
    consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0  /127.0.0.1   consumer4    1             topic3(2)
    consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4  /127.0.0.1   consumer2    3             topic2(1), topic3(0,1)
    consumer3-ecea43e4-1f01-479f-8349-f9130b75d8ee  /127.0.0.1   consumer3    0             -

  • --offsets: This is the default describe option and provides the same output as the “--describe” option.

  • --state: This option provides useful group-level information.

    $ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group --state
    

    COORDINATOR (ID)     ASSIGNMENT-STRATEGY   STATE    #MEMBERS
    localhost:9092 (0)   range                 Stable   4

To manually delete one or multiple consumer groups, the “–delete” option can be used:

$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group my-group --group my-other-group
Deletion of requested consumer groups ('my-group', 'my-other-group') was successful.

To reset offsets of a consumer group, the “--reset-offsets” option can be used. This option supports one consumer group at a time. It requires defining the following scopes: --all-topics or --topic. One scope must be selected, unless you use the ‘--from-file’ scenario. Also, first make sure that the consumer instances are inactive. See KIP-122 for more details.

It has 3 execution options:

  • (default) to display which offsets to reset.
  • --execute : to execute the --reset-offsets process.
  • --export : to export the results to a CSV format.

--reset-offsets also has the following scenarios to choose from (at least one scenario must be selected):

  • --to-datetime <String: datetime> : Reset offsets to offsets from datetime. Format: ‘YYYY-MM-DDTHH:mm:SS.sss’
  • --to-earliest : Reset offsets to earliest offset.
  • --to-latest : Reset offsets to latest offset.
  • --shift-by <Long: number-of-offsets> : Reset offsets shifting current offset by ’n’, where ’n’ can be positive or negative.
  • --from-file : Reset offsets to values defined in CSV file.
  • --to-current : Resets offsets to current offset.
  • --by-duration <String: duration> : Reset offsets to offset by duration from current timestamp. Format: ‘PnDTnHnMnS’
  • --to-offset : Reset offsets to a specific offset.

Please note that out-of-range offsets will be adjusted to the available offset end. For example, if the offset end is at 10 and an offset shift request is for 15, then the offset at 10 will actually be selected.

For example, to reset offsets of a consumer group to the latest offset:

$ bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --reset-offsets --group consumergroup1 --topic topic1 --to-latest
TOPIC                          PARTITION  NEW-OFFSET
topic1                         0          0

Expanding your cluster

Adding servers to a Kafka cluster is easy: just assign them a unique broker id and start up Kafka on your new servers. However, these new servers will not automatically be assigned any data partitions, so unless partitions are moved to them they won’t be doing any work until new topics are created. So usually when you add machines to your cluster you will want to migrate some existing data to these machines.

The process of migrating data is manually initiated but fully automated. Under the covers what happens is that Kafka will add the new server as a follower of the partition it is migrating and allow it to fully replicate the existing data in that partition. When the new server has fully replicated the contents of this partition and joined the in-sync replica set, one of the existing replicas will delete its partition’s data.

The partition reassignment tool can be used to move partitions across brokers. An ideal partition distribution would ensure even data load and partition sizes across all brokers. The partition reassignment tool does not have the capability to automatically study the data distribution in a Kafka cluster and move partitions around to attain an even load distribution. As such, the admin has to figure out which topics or partitions should be moved around.

The partition reassignment tool can run in 3 mutually exclusive modes:

  • --generate: In this mode, given a list of topics and a list of brokers, the tool generates a candidate reassignment to move all partitions of the specified topics to the new brokers. This option merely provides a convenient way to generate a partition reassignment plan given a list of topics and target brokers.
  • --execute: In this mode, the tool kicks off the reassignment of partitions based on the user-provided reassignment plan (using the --reassignment-json-file option). This can either be a custom reassignment plan hand-crafted by the admin or provided by using the --generate option.
  • --verify: In this mode, the tool verifies the status of the reassignment for all partitions listed during the last --execute. The status can be successfully completed, failed, or in progress.

Automatically migrating data to new machines

The partition reassignment tool can be used to move some topics off of the current set of brokers to the newly added brokers. This is typically useful while expanding an existing cluster since it is easier to move entire topics to the new set of brokers, than moving one partition at a time. When used to do this, the user should provide a list of topics that should be moved to the new set of brokers and a target list of new brokers. The tool then evenly distributes all partitions for the given list of topics across the new set of brokers. During this move, the replication factor of the topic is kept constant. Effectively the replicas for all partitions for the input list of topics are moved from the old set of brokers to the newly added brokers.

For instance, the following example will move all partitions for topics foo1,foo2 to the new set of brokers 5,6. At the end of this move, all partitions for topics foo1 and foo2 will only exist on brokers 5,6.

Since the tool accepts the input list of topics as a json file, you first need to identify the topics you want to move and create the json file as follows:

$ cat topics-to-move.json
{
  "topics": [
    { "topic": "foo1" },
    { "topic": "foo2" }
  ],
  "version": 1
}

Once the json file is ready, use the partition reassignment tool to generate a candidate assignment:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file topics-to-move.json --broker-list "5,6" --generate
Current partition replica assignment
{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[2,1],"log_dirs":["any"]},
               {"topic":"foo1","partition":1,"replicas":[1,3],"log_dirs":["any"]},
               {"topic":"foo1","partition":2,"replicas":[3,4],"log_dirs":["any"]},
               {"topic":"foo2","partition":0,"replicas":[4,2],"log_dirs":["any"]},
               {"topic":"foo2","partition":1,"replicas":[2,1],"log_dirs":["any"]},
               {"topic":"foo2","partition":2,"replicas":[1,3],"log_dirs":["any"]}]
}

Proposed partition reassignment configuration
{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[6,5],"log_dirs":["any"]},
               {"topic":"foo1","partition":1,"replicas":[5,6],"log_dirs":["any"]},
               {"topic":"foo1","partition":2,"replicas":[6,5],"log_dirs":["any"]},
               {"topic":"foo2","partition":0,"replicas":[5,6],"log_dirs":["any"]},
               {"topic":"foo2","partition":1,"replicas":[6,5],"log_dirs":["any"]},
               {"topic":"foo2","partition":2,"replicas":[5,6],"log_dirs":["any"]}]
}

The tool generates a candidate assignment that will move all partitions from topics foo1,foo2 to brokers 5,6. Note, however, that at this point, the partition movement has not started; it merely tells you the current assignment and the proposed new assignment. The current assignment should be saved in case you want to roll back to it. The new assignment should be saved in a json file (e.g. expand-cluster-reassignment.json) to be input to the tool with the --execute option as follows:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --execute
Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[2,1],"log_dirs":["any"]},
               {"topic":"foo1","partition":1,"replicas":[1,3],"log_dirs":["any"]},
               {"topic":"foo1","partition":2,"replicas":[3,4],"log_dirs":["any"]},
               {"topic":"foo2","partition":0,"replicas":[4,2],"log_dirs":["any"]},
               {"topic":"foo2","partition":1,"replicas":[2,1],"log_dirs":["any"]},
               {"topic":"foo2","partition":2,"replicas":[1,3],"log_dirs":["any"]}]
}

Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for foo1-0,foo1-1,foo1-2,foo2-0,foo2-1,foo2-2

Finally, the --verify option can be used with the tool to check the status of the partition reassignment. Note that the same expand-cluster-reassignment.json (used with the --execute option) should be used with the --verify option:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file expand-cluster-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [foo1,0] is completed
Reassignment of partition [foo1,1] is still in progress
Reassignment of partition [foo1,2] is still in progress
Reassignment of partition [foo2,0] is completed
Reassignment of partition [foo2,1] is completed
Reassignment of partition [foo2,2] is completed

Custom partition assignment and migration

The partition reassignment tool can also be used to selectively move replicas of a partition to a specific set of brokers. When used in this manner, it is assumed that the user knows the reassignment plan and does not require the tool to generate a candidate reassignment, effectively skipping the --generate step and moving straight to the --execute step.

For instance, the following example moves partition 0 of topic foo1 to brokers 5,6 and partition 1 of topic foo2 to brokers 2,3:

The first step is to hand craft the custom reassignment plan in a json file:

$ cat custom-reassignment.json
{"version":1,"partitions":[{"topic":"foo1","partition":0,"replicas":[5,6]},{"topic":"foo2","partition":1,"replicas":[2,3]}]}

Then, use the json file with the --execute option to start the reassignment process:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --execute
Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo1","partition":0,"replicas":[1,2],"log_dirs":["any"]},
               {"topic":"foo2","partition":1,"replicas":[3,4],"log_dirs":["any"]}]
}

Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignments for foo1-0,foo2-1

The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same custom-reassignment.json (used with the --execute option) should be used with the --verify option:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file custom-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [foo1,0] is completed
Reassignment of partition [foo2,1] is completed

Decommissioning brokers

The partition reassignment tool does not have the ability to automatically generate a reassignment plan for decommissioning brokers yet. As such, the admin has to come up with a reassignment plan to move the replica for all partitions hosted on the broker to be decommissioned, to the rest of the brokers. This can be relatively tedious as the reassignment needs to ensure that all the replicas are not moved from the decommissioned broker to only one other broker. To make this process effortless, we plan to add tooling support for decommissioning brokers in the future.

Increasing replication factor

Increasing the replication factor of an existing partition is easy. Just specify the extra replicas in the custom reassignment json file and use it with the --execute option to increase the replication factor of the specified partitions.

For instance, the following example increases the replication factor of partition 0 of topic foo from 1 to 3. Before increasing the replication factor, the partition’s only replica existed on broker 5. As part of increasing the replication factor, we will add more replicas on brokers 6 and 7.

The first step is to hand craft the custom reassignment plan in a json file:

$ cat increase-replication-factor.json
{"version":1,
 "partitions":[{"topic":"foo","partition":0,"replicas":[5,6,7]}]}

Then, use the json file with the --execute option to start the reassignment process:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --execute
Current partition replica assignment

{"version":1,
 "partitions":[{"topic":"foo","partition":0,"replicas":[5],"log_dirs":["any"]}]}

Save this to use as the --reassignment-json-file option during rollback
Successfully started partition reassignment for foo-0

The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same increase-replication-factor.json (used with the --execute option) should be used with the --verify option:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file increase-replication-factor.json --verify
Status of partition reassignment:
Reassignment of partition [foo,0] is completed

You can also verify the increase in replication factor with the kafka-topics.sh tool:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic foo --describe
Topic:foo	PartitionCount:1	ReplicationFactor:3	Configs:
  Topic: foo	Partition: 0	Leader: 5	Replicas: 5,6,7	Isr: 5,6,7

Limiting Bandwidth Usage during Data Migration

Kafka lets you apply a throttle to replication traffic, setting an upper bound on the bandwidth used to move replicas from machine to machine and from disk to disk. This is useful when rebalancing a cluster, adding or removing brokers or adding or removing disks, as it limits the impact these data-intensive operations will have on users.

There are two interfaces that can be used to engage a throttle. The simplest, and safest, is to apply a throttle when invoking the kafka-reassign-partitions.sh, but kafka-configs.sh can also be used to view and alter the throttle values directly.

So, for example, if you were to execute a rebalance with the command below, it would move partitions at no more than 50MB/s between brokers, and at no more than 100MB/s between disks on a broker.

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --execute --reassignment-json-file bigger-cluster.json --throttle 50000000 --replica-alter-log-dirs-throttle 100000000

When you execute this script you will see the throttle engage:

The inter-broker throttle limit was set to 50000000 B/s
The replica-alter-dir throttle limit was set to 100000000 B/s
Successfully started partition reassignment for foo1-0

Should you wish to alter the throttle during a rebalance, say to increase the inter-broker throughput so it completes quicker, you can do this by re-running the execute command with the --additional option, passing the same reassignment-json-file:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --additional --execute --reassignment-json-file bigger-cluster.json --throttle 700000000
The inter-broker throttle limit was set to 700000000 B/s

Once the rebalance completes, the administrator can check the status of the rebalance using the --verify option. If the rebalance has completed, the throttle will be removed via the --verify command. It is important that administrators remove the throttle in a timely manner once rebalancing completes by running the command with the --verify option. Failure to do so could cause regular replication traffic to be throttled.

When the --verify option is executed, and the reassignment has completed, the script will confirm that the throttle was removed:

$ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --verify --reassignment-json-file bigger-cluster.json
Status of partition reassignment:
Reassignment of partition [my-topic,1] is completed
Reassignment of partition [my-topic,0] is completed

Clearing broker-level throttles on brokers 1,2,3
Clearing topic-level throttles on topic my-topic

The administrator can also validate the assigned configs using kafka-configs.sh. There are two sets of throttle configurations used to manage the throttling process. The first set refers to the throttle value itself. This is configured, at a broker level, using the dynamic properties:

leader.replication.throttled.rate
follower.replication.throttled.rate
replica.alter.log.dirs.io.max.bytes.per.second

Then there is the configuration pair of enumerated sets of throttled replicas:

leader.replication.throttled.replicas
follower.replication.throttled.replicas

These are configured per topic.

All five config values are automatically assigned by kafka-reassign-partitions.sh (discussed below).

To view the throttle limit configuration:

$ bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers
Configs for brokers '2' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000,replica.alter.log.dirs.io.max.bytes.per.second=1000000000
Configs for brokers '1' are leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000,replica.alter.log.dirs.io.max.bytes.per.second=1000000000

This shows the throttle applied to both leader and follower side of the replication protocol (by default both sides are assigned the same throttled throughput value), as well as the disk throttle.

To view the list of throttled replicas:

$ bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type topics
Configs for topic 'my-topic' are leader.replication.throttled.replicas=1:102,0:101,
    follower.replication.throttled.replicas=1:101,0:102

Here we see the leader throttle is applied to partition 1 on broker 102 and partition 0 on broker 101. Likewise the follower throttle is applied to partition 1 on broker 101 and partition 0 on broker 102.

By default kafka-reassign-partitions.sh will apply the leader throttle to all replicas that exist before the rebalance, any one of which might be leader. It will apply the follower throttle to all move destinations. So if there is a partition with replicas on brokers 101,102, being reassigned to 102,103, a leader throttle, for that partition, would be applied to 101,102 and a follower throttle would be applied to 103 only.

If required, you can also use the --alter switch on kafka-configs.sh to alter the throttle configurations manually.

Safe usage of throttled replication

Some care should be taken when using throttled replication. In particular:

(1) Throttle Removal:

The throttle should be removed in a timely manner once reassignment completes (by running bin/kafka-reassign-partitions.sh --verify).

(2) Ensuring Progress:

If the throttle is set too low, in comparison to the incoming write rate, it is possible for replication to not make progress. This occurs when:

max(BytesInPerSec) > throttle

Where BytesInPerSec is the metric that monitors the write throughput of producers into each broker.

The administrator can monitor whether replication is making progress, during the rebalance, using the metric:

kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)

The lag should constantly decrease during replication. If the metric does not decrease the administrator should increase the throttle throughput as described above.

Setting quotas

Quota overrides and defaults may be configured at (user, client-id), user or client-id levels as described here. By default, clients receive an unlimited quota. It is possible to set custom quotas for each (user, client-id), user or client-id group.

Configure custom quota for (user=user1, client-id=clientA):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
Updated config for entity: user-principal 'user1', client-id 'clientA'.

Configure custom quota for user=user1:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1
Updated config for entity: user-principal 'user1'.

Configure custom quota for client-id=clientA:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-name clientA
Updated config for entity: client-id 'clientA'.

It is possible to set default quotas for each (user, client-id), user or client-id group by specifying --entity-default option instead of --entity-name.

Configure default client-id quota for user=user1:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-name user1 --entity-type clients --entity-default
Updated config for entity: user-principal 'user1', default client-id.

Configure default quota for user:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type users --entity-default
Updated config for entity: default user-principal.

Configure default quota for client-id:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200' --entity-type clients --entity-default
Updated config for entity: default client-id.

Here’s how to describe the quota for a given (user, client-id):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1 --entity-type clients --entity-name clientA
Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200

Describe quota for a given user:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name user1
Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200

Describe quota for a given client-id:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type clients --entity-name clientA
Configs for client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200

If entity name is not specified, all entities of the specified type are described. For example, describe all users:

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users
Configs for user-principal 'user1' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
Configs for default user-principal are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200

Similarly for (user, client):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-type clients
Configs for user-principal 'user1', default client-id are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200
Configs for user-principal 'user1', client-id 'clientA' are producer_byte_rate=1024,consumer_byte_rate=2048,request_percentage=200

6.2 - Datacenters

Some deployments will need to manage a data pipeline that spans multiple datacenters. Our recommended approach to this is to deploy a local Kafka cluster in each datacenter, with application instances in each datacenter interacting only with their local cluster and mirroring data between clusters (see the documentation on Geo-Replication for how to do this).

This deployment pattern allows datacenters to act as independent entities and allows us to manage and tune inter-datacenter replication centrally. This allows each facility to stand alone and operate even if the inter-datacenter links are unavailable: when this occurs the mirroring falls behind until the link is restored at which time it catches up.

For applications that need a global view of all data you can use mirroring to provide clusters which have aggregate data mirrored from the local clusters in all datacenters. These aggregate clusters are used for reads by applications that require the full data set.

This is not the only possible deployment pattern. It is possible to read from or write to a remote Kafka cluster over the WAN, though obviously this will add whatever latency is required to get to the remote cluster.

Kafka naturally batches data in both the producer and consumer so it can achieve high-throughput even over a high-latency connection. To allow this though it may be necessary to increase the TCP socket buffer sizes for the producer, consumer, and broker using the socket.send.buffer.bytes and socket.receive.buffer.bytes configurations. The appropriate way to set this is documented here.
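
For example, a sketch of the corresponding broker settings in server.properties (the 1 MB values are illustrative; size them according to the link's bandwidth-delay product):

socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576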

It is generally not advisable to run a single Kafka cluster that spans multiple datacenters over a high-latency link. This will incur very high replication latency for Kafka writes, and Kafka will not remain available in all locations if the network between locations is unavailable.

6.3 - Geo-Replication (Cross-Cluster Data Mirroring)

Geo-Replication Overview

Kafka administrators can define data flows that cross the boundaries of individual Kafka clusters, data centers, or geo-regions. Such event streaming setups are often needed for organizational, technical, or legal requirements. Common scenarios include:

  • Geo-replication
  • Disaster recovery
  • Feeding edge clusters into a central, aggregate cluster
  • Physical isolation of clusters (such as production vs. testing)
  • Cloud migration or hybrid cloud deployments
  • Legal and compliance requirements

Administrators can set up such inter-cluster data flows with Kafka’s MirrorMaker (version 2), a tool to replicate data between different Kafka environments in a streaming manner. MirrorMaker is built on top of the Kafka Connect framework and supports features such as:

  • Replicates topics (data plus configurations)
  • Replicates consumer groups including offsets to migrate applications between clusters
  • Replicates ACLs
  • Preserves partitioning
  • Automatically detects new topics and partitions
  • Provides a wide range of metrics, such as end-to-end replication latency across multiple data centers/clusters
  • Fault-tolerant and horizontally scalable operations

Note: Geo-replication with MirrorMaker replicates data across Kafka clusters. This inter-cluster replication is different from Kafka’s intra-cluster replication, which replicates data within the same Kafka cluster.

What Are Replication Flows

With MirrorMaker, Kafka administrators can replicate topics, topic configurations, consumer groups and their offsets, and ACLs from one or more source Kafka clusters to one or more target Kafka clusters, i.e., across cluster environments. In a nutshell, MirrorMaker uses Connectors to consume from source clusters and produce to target clusters.

These directional flows from source to target clusters are called replication flows. They are defined with the format {source_cluster}->{target_cluster} in the MirrorMaker configuration file as described later. Administrators can create complex replication topologies based on these flows.

Here are some example patterns:

  • Active/Active high availability deployments: A->B, B->A
  • Active/Passive or Active/Standby high availability deployments: A->B
  • Aggregation (e.g., from many clusters to one): A->K, B->K, C->K
  • Fan-out (e.g., from one to many clusters): K->A, K->B, K->C
  • Forwarding: A->B, B->C, C->D

By default, a flow replicates all topics and consumer groups (except excluded ones). However, each replication flow can be configured independently. For instance, you can define that only specific topics or consumer groups are replicated from the source cluster to the target cluster.

Here is a first example of how to configure data replication from a primary cluster to a secondary cluster (an active/passive setup):

# Basic settings
clusters = primary, secondary
primary.bootstrap.servers = broker3-primary:9092
secondary.bootstrap.servers = broker5-secondary:9092

# Define replication flows
primary->secondary.enabled = true
primary->secondary.topics = foobar-topic, quux-.*

Configuring Geo-Replication

The following sections describe how to configure and run a dedicated MirrorMaker cluster. If you want to run MirrorMaker within an existing Kafka Connect cluster or other supported deployment setups, please refer to KIP-382: MirrorMaker 2.0 and be aware that the names of configuration settings may vary between deployment modes.

Beyond what’s covered in the following sections, further examples and information on configuration settings are available in the MirrorMaker configuration reference (see MirrorMaker Configs).

Configuration File Syntax

The MirrorMaker configuration file is typically named connect-mirror-maker.properties. You can configure a variety of components in this file:

  • MirrorMaker settings: global settings including cluster definitions (aliases), plus custom settings per replication flow
  • Kafka Connect and connector settings
  • Kafka producer, consumer, and admin client settings

Example: Define MirrorMaker settings (explained in more detail later).

# Global settings
clusters = us-west, us-east   # defines cluster aliases
us-west.bootstrap.servers = broker3-west:9092
us-east.bootstrap.servers = broker5-east:9092

topics = .*   # all topics to be replicated by default

# Specific replication flow settings (here: flow from us-west to us-east)
us-west->us-east.enabled = true
us-west->us-east.topics = foo.*, bar.*  # override the default above

MirrorMaker is based on the Kafka Connect framework. Any Kafka Connect, source connector, and sink connector settings as described in the documentation chapter on Kafka Connect can be used directly in the MirrorMaker configuration, without having to change or prefix the name of the configuration setting.

Example: Define custom Kafka Connect settings to be used by MirrorMaker.

# Setting Kafka Connect defaults for MirrorMaker
tasks.max = 5

Most of the default Kafka Connect settings work well for MirrorMaker out-of-the-box, with the exception of tasks.max. In order to evenly distribute the workload across more than one MirrorMaker process, it is recommended to set tasks.max to at least 2 (preferably higher) depending on the available hardware resources and the total number of topic-partitions to be replicated.

You can further customize MirrorMaker’s Kafka Connect settings per source or target cluster (more precisely, you can specify Kafka Connect worker-level configuration settings “per connector”). Use the format of {cluster}.{config_name} in the MirrorMaker configuration file.

Example: Define custom connector settings for the us-west cluster.

# us-west custom settings
us-west.offset.storage.topic = my-mirrormaker-offsets

MirrorMaker internally uses the Kafka producer, consumer, and admin clients. Custom settings for these clients are often needed. To override the defaults, use the following format in the MirrorMaker configuration file:

  • {source}.consumer.{consumer_config_name}
  • {target}.producer.{producer_config_name}
  • {source_or_target}.admin.{admin_config_name}

Example: Define custom producer, consumer, admin client settings.

# us-west cluster (from which to consume)
us-west.consumer.isolation.level = read_committed
us-west.admin.bootstrap.servers = broker57-primary:9092

# us-east cluster (to which to produce)
us-east.producer.compression.type = gzip
us-east.producer.buffer.memory = 32768
us-east.admin.bootstrap.servers = broker8-secondary:9092

Exactly once

Exactly-once semantics are supported for dedicated MirrorMaker clusters as of version 3.5.0.

For new MirrorMaker clusters, set the exactly.once.source.support property to enabled for all targeted Kafka clusters that should be written to with exactly-once semantics. For example, to enable exactly-once for writes to cluster us-east, the following configuration can be used:

us-east.exactly.once.source.support = enabled

For existing MirrorMaker clusters, a two-step upgrade is necessary. Instead of immediately setting the exactly.once.source.support property to enabled, first set it to preparing on all nodes in the cluster. Once this is complete, it can be set to enabled on all nodes in the cluster, in a second round of restarts.
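
A sketch of this two-step rollout for an existing cluster that writes to us-east:

# Round 1 of rolling restarts: prepare all nodes first
us-east.exactly.once.source.support = preparing

# Round 2 of rolling restarts: enable once every node is running with 'preparing'
us-east.exactly.once.source.support = enabled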

In either case, it is also necessary to enable intra-cluster communication between the MirrorMaker nodes, as described in KIP-710. To do this, the dedicated.mode.enable.internal.rest property must be set to true. In addition, many of the REST-related configuration properties available for Kafka Connect can be specified in the MirrorMaker config. For example, to enable intra-cluster communication in a MirrorMaker cluster with each node listening on port 8080 of its local machine, the following should be added to the MirrorMaker config file:

dedicated.mode.enable.internal.rest = true
listeners = http://localhost:8080

Note that, if intra-cluster communication is enabled in production environments, it is highly recommended to secure the REST servers brought up by each MirrorMaker node. See the configuration properties for Kafka Connect for information on how this can be accomplished.

It is also recommended to filter records from aborted transactions out from replicated data when running MirrorMaker. To do this, ensure that the consumer used to read from source clusters is configured with isolation.level set to read_committed. If replicating data from cluster us-west, this can be done for all replication flows that read from that cluster by adding the following to the MirrorMaker config file:

us-west.consumer.isolation.level = read_committed

As a final note, under the hood, MirrorMaker uses Kafka Connect source connectors to replicate data. For more information on exactly-once support for these kinds of connectors, see the relevant docs page.

Creating and Enabling Replication Flows

To define a replication flow, you must first define the respective source and target Kafka clusters in the MirrorMaker configuration file.

  • clusters (required): comma-separated list of Kafka cluster “aliases”
  • {clusterAlias}.bootstrap.servers (required): connection information for the specific cluster; comma-separated list of “bootstrap” Kafka brokers

Example: Define two cluster aliases primary and secondary, including their connection information.

clusters = primary, secondary
primary.bootstrap.servers = broker10-primary:9092,broker-11-primary:9092
secondary.bootstrap.servers = broker5-secondary:9092,broker6-secondary:9092

Secondly, you must explicitly enable individual replication flows with {source}->{target}.enabled = true as needed. Remember that flows are directional: if you need two-way (bidirectional) replication, you must enable flows in both directions.

# Enable replication from primary to secondary
primary->secondary.enabled = true

By default, a replication flow will replicate all but a few special topics and consumer groups from the source cluster to the target cluster, and automatically detect any newly created topics and groups. The names of replicated topics in the target cluster will be prefixed with the name of the source cluster (see section further below). For example, the topic foo in the source cluster us-west would be replicated to a topic named us-west.foo in the target cluster us-east.

The subsequent sections explain how to customize this basic setup according to your needs.

Configuring Replication Flows

The configuration of a replication flow is a combination of top-level default settings (e.g., topics), on top of which flow-specific settings, if any, are applied (e.g., us-west->us-east.topics). To change the top-level defaults, add the respective top-level setting to the MirrorMaker configuration file. To override the defaults for a specific replication flow only, use the syntax format {source}->{target}.{config.name}.

The most important settings are:

  • topics: list of topics or a regular expression that defines which topics in the source cluster to replicate (default: topics = .*)
  • topics.exclude: list of topics or a regular expression to subsequently exclude topics that were matched by the topics setting (default: topics.exclude = .*[\-\.]internal, .*\.replica, __.*)
  • groups: list of consumer groups or a regular expression that defines which consumer groups in the source cluster to replicate (default: groups = .*)
  • groups.exclude: list of consumer groups or a regular expression to subsequently exclude consumer groups that were matched by the groups setting (default: groups.exclude = console-consumer-.*, connect-.*, __.*)
  • {source}->{target}.enabled: set to true to enable the replication flow (default: false)

Example:

# Custom top-level defaults that apply to all replication flows
topics = .*
groups = consumer-group1, consumer-group2

# Don't forget to enable a flow!
us-west->us-east.enabled = true

# Custom settings for specific replication flows
us-west->us-east.topics = foo.*
us-west->us-east.groups = bar.*
us-west->us-east.emit.heartbeats = false

Additional configuration settings are supported which can be left with their default values in most cases. See MirrorMaker Configs.

Securing Replication Flows

MirrorMaker supports the same security settings as Kafka Connect, so please refer to the linked section for further information.

Example: Encrypt communication between MirrorMaker and the us-east cluster.

us-east.security.protocol=SSL
us-east.ssl.truststore.location=/path/to/truststore.jks
us-east.ssl.truststore.password=my-secret-password
us-east.ssl.keystore.location=/path/to/keystore.jks
us-east.ssl.keystore.password=my-secret-password
us-east.ssl.key.password=my-secret-password

Custom Naming of Replicated Topics in Target Clusters

Replicated topics in a target cluster—sometimes called remote topics—are renamed according to a replication policy. MirrorMaker uses this policy to ensure that events (aka records, messages) from different clusters are not written to the same topic-partition. By default as per DefaultReplicationPolicy, the names of replicated topics in the target clusters have the format {source}.{source_topic_name}:

us-west         us-east
=========       =================
                bar-topic
foo-topic  -->  us-west.foo-topic

You can customize the separator (default: .) with the replication.policy.separator setting:

# Defining a custom separator
us-west->us-east.replication.policy.separator = _

If you need further control over how replicated topics are named, you can implement a custom ReplicationPolicy and override replication.policy.class (default is DefaultReplicationPolicy) in the MirrorMaker configuration.
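
For example, a hypothetical override, where com.acme.CustomReplicationPolicy stands in for your own implementation on MirrorMaker's classpath:

replication.policy.class = com.acme.CustomReplicationPolicy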

Preventing Configuration Conflicts

MirrorMaker processes share configuration via their target Kafka clusters. This behavior may cause conflicts when configurations differ among MirrorMaker processes that operate against the same target cluster.

For example, the following two MirrorMaker processes would be racy:

# Configuration of process 1
A->B.enabled = true
A->B.topics = foo

# Configuration of process 2
A->B.enabled = true
A->B.topics = bar

In this case, the two processes will share configuration via cluster B, which causes a conflict. Depending on which of the two processes is the elected “leader”, the result will be that either the topic foo or the topic bar is replicated, but not both.

It is therefore important to keep the MirrorMaker configuration consistent across replication flows to the same target cluster. This can be achieved, for example, through automation tooling or by using a single, shared MirrorMaker configuration file for your entire organization.

Best Practice: Consume from Remote, Produce to Local

To minimize latency (“producer lag”), it is recommended to locate MirrorMaker processes as close as possible to their target clusters, i.e., the clusters that they produce data to. That’s because Kafka producers typically struggle more with unreliable or high-latency network connections than Kafka consumers.

First DC          Second DC
==========        =========================
primary --------- MirrorMaker --> secondary
(remote)                           (local)

To run such a “consume from remote, produce to local” setup, run the MirrorMaker processes close to and preferably in the same location as the target clusters, and explicitly set these “local” clusters in the --clusters command line parameter (blank-separated list of cluster aliases):

# Run in secondary's data center, reading from the remote `primary` cluster
$ bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters secondary

The --clusters secondary tells the MirrorMaker process that the given cluster(s) are nearby, and prevents it from replicating data or sending configuration to clusters at other, remote locations.

Example: Active/Passive High Availability Deployment

The following example shows the basic settings to replicate topics from a primary to a secondary Kafka environment, but not from the secondary back to the primary. Please be aware that most production setups will need further configuration, such as security settings.

# Unidirectional flow (one-way) from primary to secondary cluster
primary.bootstrap.servers = broker1-primary:9092
secondary.bootstrap.servers = broker2-secondary:9092

primary->secondary.enabled = true
secondary->primary.enabled = false

primary->secondary.topics = foo.*  # only replicate some topics

Example: Active/Active High Availability Deployment

The following example shows the basic settings to replicate topics between two clusters in both ways. Please be aware that most production setups will need further configuration, such as security settings.

# Bidirectional flow (two-way) between us-west and us-east clusters
clusters = us-west, us-east
us-west.bootstrap.servers = broker1-west:9092,broker2-west:9092
us-east.bootstrap.servers = broker3-east:9092,broker4-east:9092

us-west->us-east.enabled = true
us-east->us-west.enabled = true

Note on preventing replication “loops” (where topics are originally replicated from A to B, and the replicated topics are then replicated yet again from B to A, and so forth): As long as you define the above flows in the same MirrorMaker configuration file, you do not need to explicitly add topics.exclude settings to prevent replication loops between the two clusters.

Example: Multi-Cluster Geo-Replication

Let’s put all the information from the previous sections together in a larger example. Imagine there are three data centers (west, east, north), with two Kafka clusters in each data center (e.g., west-1, west-2). The example in this section shows how to configure MirrorMaker (1) for Active/Active replication within each data center, as well as (2) for Cross Data Center Replication (XDCR).

First, define the source and target clusters along with their replication flows in the configuration:

# Basic settings
clusters: west-1, west-2, east-1, east-2, north-1, north-2
west-1.bootstrap.servers = ...
west-2.bootstrap.servers = ...
east-1.bootstrap.servers = ...
east-2.bootstrap.servers = ...
north-1.bootstrap.servers = ...
north-2.bootstrap.servers = ...

# Replication flows for Active/Active in West DC
west-1->west-2.enabled = true
west-2->west-1.enabled = true

# Replication flows for Active/Active in East DC
east-1->east-2.enabled = true
east-2->east-1.enabled = true

# Replication flows for Active/Active in North DC
north-1->north-2.enabled = true
north-2->north-1.enabled = true

# Replication flows for XDCR via west-1, east-1, north-1
west-1->east-1.enabled  = true
west-1->north-1.enabled = true
east-1->west-1.enabled  = true
east-1->north-1.enabled = true
north-1->west-1.enabled = true
north-1->east-1.enabled = true

Then, in each data center, launch one or more MirrorMaker processes as follows:

# In West DC:
$ bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters west-1 west-2

# In East DC:
$ bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters east-1 east-2

# In North DC:
$ bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters north-1 north-2

With this configuration, records produced to any cluster will be replicated within the data center, as well as across to other data centers. By providing the --clusters parameter, we ensure that each MirrorMaker process produces data to nearby clusters only.

Note: The --clusters parameter is, technically, not required here. MirrorMaker will work fine without it. However, throughput may suffer from “producer lag” between data centers, and you may incur unnecessary data transfer costs.

Starting Geo-Replication

You can run as few or as many MirrorMaker processes (think: nodes, servers) as needed. Because MirrorMaker is based on Kafka Connect, MirrorMaker processes that are configured to replicate the same Kafka clusters run in a distributed setup: They will find each other, share configuration (see the section on preventing configuration conflicts above), load balance their work, and so on. If, for example, you want to increase the throughput of replication flows, one option is to run additional MirrorMaker processes in parallel.

To start a MirrorMaker process, run the command:

$ bin/connect-mirror-maker.sh connect-mirror-maker.properties

After startup, it may take a few minutes until a MirrorMaker process first begins to replicate data.

Optionally, as described previously, you can set the parameter --clusters to ensure that the MirrorMaker process produces data to nearby clusters only.

# Note: The cluster alias us-west must be defined in the configuration file
$ bin/connect-mirror-maker.sh connect-mirror-maker.properties \
    --clusters us-west

Note when testing replication of consumer groups: By default, MirrorMaker does not replicate consumer groups created by the kafka-console-consumer.sh tool, which you might use to test your MirrorMaker setup on the command line. If you do want to replicate these consumer groups as well, set the groups.exclude configuration accordingly (default: groups.exclude = console-consumer-.*, connect-.*, __.*). Remember to update the configuration again once you have completed your testing.
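
For example, a sketch that temporarily drops the console-consumer exclusion while testing (revert it afterwards):

groups.exclude = connect-.*, __.*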

Stopping Geo-Replication

You can stop a running MirrorMaker process by sending a SIGTERM signal with the command:

$ kill <MirrorMaker pid>

Applying Configuration Changes

To make configuration changes take effect, the MirrorMaker process(es) must be restarted.

Monitoring Geo-Replication

It is recommended to monitor MirrorMaker processes to ensure all defined replication flows are up and running correctly. MirrorMaker is built on the Connect framework and inherits all of Connect’s metrics, such as source-record-poll-rate. In addition, MirrorMaker produces its own metrics under the kafka.connect.mirror metric group. Metrics are tagged with the following properties:

  • source: alias of source cluster (e.g., primary)
  • target: alias of target cluster (e.g., secondary)
  • topic: replicated topic on target cluster
  • partition: partition being replicated

Metrics are tracked for each replicated topic. The source cluster can be inferred from the topic name. For example, replicating topic1 from primary->secondary will yield metrics like:

  • target=secondary
  • topic=primary.topic1
  • partition=1

The following metrics are emitted:

# MBean: kafka.connect.mirror:type=MirrorSourceConnector,target=([-.\w]+),topic=([-.\w]+),partition=([0-9]+)
record-count            # number of records replicated source -> target
record-age-ms           # age of records when they are replicated
record-age-ms-min
record-age-ms-max
record-age-ms-avg
replication-latency-ms  # time it takes records to propagate source->target
replication-latency-ms-min
replication-latency-ms-max
replication-latency-ms-avg
byte-rate               # average number of bytes/sec in replicated records

# MBean: kafka.connect.mirror:type=MirrorCheckpointConnector,source=([-.\w]+),target=([-.\w]+)

checkpoint-latency-ms   # time it takes to replicate consumer offsets
checkpoint-latency-ms-min
checkpoint-latency-ms-max
checkpoint-latency-ms-avg

These metrics do not differentiate between created-at and log-append timestamps.

6.4 - Multi-Tenancy

Multi-Tenancy Overview

As a highly scalable event streaming platform, Kafka is used by many users as their central nervous system, connecting in real-time a wide range of different systems and applications from various teams and lines of business. Such multi-tenant cluster environments command proper control and management to ensure the peaceful coexistence of these different needs. This section highlights features and best practices to set up such shared environments, which should help you operate clusters that meet SLAs/OLAs and that minimize potential collateral damage caused by “noisy neighbors”.

Multi-tenancy is a many-sided subject, including but not limited to:

  • Creating user spaces for tenants (sometimes called namespaces)
  • Configuring topics with data retention policies and more
  • Securing topics and clusters with encryption, authentication, and authorization
  • Isolating tenants with quotas and rate limits
  • Monitoring and metering
  • Inter-cluster data sharing (cf. geo-replication)

Creating User Spaces (Namespaces) For Tenants With Topic Naming

Kafka administrators operating a multi-tenant cluster typically need to define user spaces for each tenant. For the purpose of this section, “user spaces” are a collection of topics, which are grouped together under the management of a single entity or user.

In Kafka, the main unit of data is the topic. Users can create and name each topic. They can also delete them, but it is not possible to rename a topic directly. Instead, to rename a topic, the user must create a new topic, move the messages from the original topic to the new one, and then delete the original. With this in mind, it is recommended to define logical spaces, based on a hierarchical topic naming structure. This setup can then be combined with security features, such as prefixed ACLs, to isolate different spaces and tenants, while also minimizing the administrative overhead for securing the data in the cluster.

These logical user spaces can be grouped in different ways, and the concrete choice depends on how your organization prefers to use your Kafka clusters. The most common groupings are as follows.

By team or organizational unit: Here, the team is the main aggregator. In an organization where teams are the main user of the Kafka infrastructure, this might be the best grouping.

Example topic naming structure:

  • <organization>.<team>.<dataset>.<event-name>
    (e.g., “acme.infosec.telemetry.logins”)

By project or product: Here, a team manages more than one project. Their credentials will be different for each project, so all the controls and settings will always be project related.

Example topic naming structure:

  • <project>.<product>.<event-name>
    (e.g., “mobility.payments.suspicious”)

Certain information should normally not be put in a topic name, such as information that is likely to change over time (e.g., the name of the intended consumer) or that is a technical detail or metadata that is available elsewhere (e.g., the topic’s partition count and other configuration settings).

To enforce a topic naming structure, several options are available:

  • Use prefix ACLs (cf. KIP-290) to enforce a common prefix for topic names. For example, team A may only be permitted to create topics whose names start with payments.teamA..
  • Define a custom CreateTopicPolicy (cf. KIP-108 and the setting create.topic.policy.class.name) to enforce strict naming patterns. These policies provide the most flexibility and can cover complex patterns and rules to match an organization’s needs.
  • Disable topic creation for normal users by denying it with an ACL, and then rely on an external process to create topics on behalf of users (e.g., scripting or your favorite automation toolkit).
  • It may also be useful to disable the Kafka feature to auto-create topics on demand by setting auto.create.topics.enable=false in the broker configuration. Note that you should not rely solely on this option.

Configuring Topics: Data Retention And More

Kafka’s configuration is very flexible due to its fine granularity, and it supports a plethora of per-topic configuration settings to help administrators set up multi-tenant clusters. For example, administrators often need to define data retention policies to control how much and/or for how long data will be stored in a topic, with settings such as retention.bytes (size) and retention.ms (time). This limits storage consumption within the cluster, and helps comply with legal requirements such as GDPR.
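
For example, a sketch of applying a per-topic retention policy with kafka-configs.sh (the topic name and the values, roughly 7 days and 1 GiB, are illustrative):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
    --entity-type topics --entity-name acme.infosec.telemetry.logins \
    --add-config 'retention.ms=604800000,retention.bytes=1073741824'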

Securing Clusters and Topics: Authentication, Authorization, Encryption

Because the documentation has a dedicated chapter on security that applies to any Kafka deployment, this section focuses on additional considerations for multi-tenant environments.

Security settings for Kafka fall into three main categories, which are similar to how administrators would secure other client-server data systems, like relational databases and traditional messaging systems.

  1. Encryption of data transferred between Kafka brokers and Kafka clients, between brokers, and between brokers and other optional tools.
  2. Authentication of connections from Kafka clients and applications to Kafka brokers, as well as connections between Kafka brokers.
  3. Authorization of client operations such as creating, deleting, and altering the configuration of topics; writing events to or reading events from a topic; creating and deleting ACLs. Administrators can also define custom policies to put in place additional restrictions, such as a CreateTopicPolicy and AlterConfigPolicy (see KIP-108 and the settings create.topic.policy.class.name, alter.config.policy.class.name).

When securing a multi-tenant Kafka environment, the most common administrative task is the third category (authorization), i.e., managing the user/client permissions that grant or deny access to certain topics and thus to the data stored by users within a cluster. This task is performed predominantly through the setting of access control lists (ACLs). Here, administrators of multi-tenant environments in particular benefit from putting a hierarchical topic naming structure in place as described in a previous section, because they can conveniently control access to topics through prefixed ACLs (--resource-pattern-type Prefixed). This significantly minimizes the administrative overhead of securing topics in multi-tenant environments: administrators can make their own trade-offs between higher developer convenience (more lenient permissions, using fewer and broader ACLs) vs. tighter security (more stringent permissions, using more and narrower ACLs).

In the following example, user Alice—a new member of ACME corporation’s InfoSec team—is granted write permissions to all topics whose names start with “acme.infosec.”, such as “acme.infosec.telemetry.logins” and “acme.infosec.syslogs.events”.

# Grant permissions to user Alice
$ bin/kafka-acls.sh \
    --bootstrap-server localhost:9092 \
    --add --allow-principal User:Alice \
    --producer \
    --resource-pattern-type prefixed --topic acme.infosec.

You can similarly use this approach to isolate different customers on the same shared cluster.

Isolating Tenants: Quotas, Rate Limiting, Throttling

Multi-tenant clusters should generally be configured with quotas, which protect against users (tenants) eating up too many cluster resources, such as when they attempt to write or read very high volumes of data, or create requests to brokers at an excessively high rate. This may cause network saturation, monopolize broker resources, and impact other clients—all of which you want to avoid in a shared environment.

Client quotas: Kafka supports different types of (per-user principal) client quotas. Because a client’s quotas apply irrespective of which topics the client is writing to or reading from, they are a convenient and effective tool to allocate resources in a multi-tenant cluster. Request rate quotas, for example, help to limit a user’s impact on broker CPU usage by limiting the time a broker spends on the request handling path for that user, after which throttling kicks in. In many situations, isolating users with request rate quotas has a bigger impact in multi-tenant clusters than setting incoming/outgoing network bandwidth quotas, because excessive broker CPU usage for processing requests reduces the effective bandwidth the broker can serve. Furthermore, administrators can also define quotas on topic operations—such as create, delete, and alter—to prevent Kafka clusters from being overwhelmed by highly concurrent topic operations (see KIP-599 and the quota type controller_mutation_rate).
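
For example, a sketch of capping the rate of topic mutations for a single user (the quota value is illustrative):

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
    --add-config 'controller_mutation_rate=10' \
    --entity-type users --entity-name user1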

Server quotas: Kafka also supports different types of broker-side quotas. For example, administrators can set a limit on the rate with which the broker accepts new connections, set the maximum number of connections per broker, or set the maximum number of connections allowed from a specific IP address.
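
For example, a sketch of broker-side connection limits in server.properties (values are illustrative):

max.connections=1000
max.connections.per.ip=100
max.connection.creation.rate=20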

For more information, please refer to the quota overview and how to set quotas.

Monitoring and Metering

Monitoring is a broader subject that is covered elsewhere in the documentation. Administrators of any Kafka environment, but especially multi-tenant ones, should set up monitoring according to these instructions. Kafka supports a wide range of metrics, such as the rate of failed authentication attempts, request latency, consumer lag, total number of consumer groups, metrics on the quotas described in the previous section, and many more.

For example, monitoring can be configured to track the size of topic-partitions (with the JMX metric kafka.log.Log.Size.<TOPIC-NAME>), and thus the total size of data stored in a topic. You can then define alerts when tenants on shared clusters are getting close to using too much storage space.

Multi-Tenancy and Geo-Replication

Kafka lets you share data across different clusters, which may be located in different geographical regions, data centers, and so on. Apart from use cases such as disaster recovery, this functionality is useful when a multi-tenant setup requires inter-cluster data sharing. See the section Geo-Replication (Cross-Cluster Data Mirroring) for more information.

Further considerations

Data contracts: You may need to define data contracts between the producers and the consumers of data in a cluster, using event schemas. This ensures that events written to Kafka can always be read properly again, and prevents malformed or corrupt events being written. The best way to achieve this is to deploy a so-called schema registry alongside the cluster. (Kafka does not include a schema registry, but there are third-party implementations available.) A schema registry manages the event schemas and maps the schemas to topics, so that producers know which topics are accepting which types (schemas) of events, and consumers know how to read and parse events in a topic. Some registry implementations provide further functionality, such as schema evolution, storing a history of all schemas, and schema compatibility settings.

6.5 - Java Version

Java 17 and Java 21 are fully supported while Java 11 is supported for a subset of modules (clients, streams and related). Support for versions newer than the most recent LTS version is best-effort, and the project typically only tests with the most recent non-LTS version.

We generally recommend running Apache Kafka with the most recent LTS release (Java 21 at the time of writing) for performance, efficiency and support reasons. From a security perspective, we recommend the latest released patch version as older versions typically have disclosed security vulnerabilities.

Typical arguments for running Kafka with OpenJDK-based Java implementations (including Oracle JDK) are:

-Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
-XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
-XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent

For reference, here are the stats for one of LinkedIn’s busiest clusters (at peak) that uses said Java arguments:

  • 60 brokers
  • 50k partitions (replication factor 2)
  • 800k messages/sec in
  • 300 MB/sec inbound, 1 GB/sec+ outbound

All of the brokers in that cluster have a 90% GC pause time of about 21ms with less than 1 young GC per second.

6.6 - Hardware and OS

We are using dual quad-core Intel Xeon machines with 24GB of memory.

You need sufficient memory to buffer active readers and writers. You can do a back-of-the-envelope estimate of memory needs by assuming you want to be able to buffer for 30 seconds and compute your memory need as write_throughput*30.
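
For example, at a sustained write throughput of 50 MB/sec, this rule of thumb suggests roughly 50 MB/sec * 30 sec = 1.5 GB of memory available for buffering, on top of what the JVM heap and the rest of the OS require.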

The disk throughput is important. We have 8x7200 rpm SATA drives. In general disk throughput is the performance bottleneck, and more disks is better. Depending on how you configure flush behavior you may or may not benefit from more expensive disks (if you force flush often then higher RPM SAS drives may be better).

OS

Kafka should run well on any unix system and has been tested on Linux and Solaris.

We have seen a few issues running on Windows, and Windows is not currently a well-supported platform, though we would be happy to change that.

It is unlikely to require much OS-level tuning, but there are three potentially important OS-level configurations:

  • File descriptor limits: Kafka uses file descriptors for log segments and open connections. If a broker hosts many partitions, consider that the broker needs at least (number_of_partitions)*(partition_size/segment_size) file descriptors to track all log segments, in addition to the number of connections the broker makes. We recommend at least 100000 allowed file descriptors for the broker processes as a starting point. Note: The mmap() function adds an extra reference to the file associated with the file descriptor fildes which is not removed by a subsequent close() on that file descriptor. This reference is removed when there are no more mappings to the file.
  • Max socket buffer size: can be increased to enable high-performance data transfer between data centers as described here.
  • Maximum number of memory map areas a process may have (aka vm.max_map_count). See the Linux kernel documentation. You should keep an eye on this OS-level property when considering the maximum number of partitions a broker may have. By default, on a number of Linux systems, the value of vm.max_map_count is somewhere around 65535. Each log segment, allocated per partition, requires a pair of index/timeindex files, and each of these files consumes 1 map area. In other words, each log segment uses 2 map areas. Thus, each partition requires a minimum of 2 map areas, as long as it hosts a single log segment. That is to say, creating 50000 partitions on a broker will result in the allocation of 100000 map areas and will likely cause the broker to crash with OutOfMemoryError (Map failed) on a system with the default vm.max_map_count. Keep in mind that the number of log segments per partition varies depending on the segment size, load intensity, retention policy and, generally, tends to be more than one. (See the sketch after this list.)
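
A minimal sketch of raising the file descriptor limit and vm.max_map_count on a Linux host (values are illustrative; in practice, persist them via limits.conf and sysctl.conf or your configuration management tooling):

# Raise the open file descriptor limit for the shell that starts the broker
$ ulimit -n 100000

# Raise the memory map area limit (requires root)
$ sysctl -w vm.max_map_count=262144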

Disks and Filesystem

We recommend using multiple drives to get good throughput and not sharing the same drives used for Kafka data with application logs or other OS filesystem activity to ensure good latency. You can either RAID these drives together into a single volume or format and mount each drive as its own directory. Since Kafka has replication the redundancy provided by RAID can also be provided at the application level. This choice has several tradeoffs.

If you configure multiple data directories partitions will be assigned round-robin to data directories. Each partition will be entirely in one of the data directories. If data is not well balanced among partitions this can lead to load imbalance between disks.

RAID can potentially do better at balancing load between disks (although it doesn’t always seem to) because it balances load at a lower level. The primary downside of RAID is that it is usually a big performance hit for write throughput and reduces the available disk space.

Another potential benefit of RAID is the ability to tolerate disk failures. However our experience has been that rebuilding the RAID array is so I/O intensive that it effectively disables the server, so this does not provide much real availability improvement.

Application vs. OS Flush Management

Kafka always immediately writes all data to the filesystem and supports the ability to configure the flush policy that controls when data is forced out of the OS cache and onto disk. This flush policy can be configured to force data to disk after a period of time or after a certain number of messages has been written. There are several choices in this configuration.

Kafka must eventually call fsync to know that data was flushed. When recovering from a crash for any log segment not known to be fsync’d Kafka will check the integrity of each message by checking its CRC and also rebuild the accompanying offset index file as part of the recovery process executed on startup.

Note that durability in Kafka does not require syncing data to disk, as a failed node will always recover from its replicas.

We recommend using the default flush settings which disable application fsync entirely. This means relying on the background flush done by the OS and Kafka’s own background flush. This provides the best of all worlds for most uses: no knobs to tune, great throughput and latency, and full recovery guarantees. We generally feel that the guarantees provided by replication are stronger than sync to local disk, however the paranoid still may prefer having both and application level fsync policies are still supported.

The drawback of using application level flush settings is that it is less efficient in its disk usage pattern (it gives the OS less leeway to re-order writes) and it can introduce latency as fsync in most Linux filesystems blocks writes to the file whereas the background flushing does much more granular page-level locking.

In general you don’t need to do any low-level tuning of the filesystem, but in the next few sections we will go over some of this in case it is useful.

Understanding Linux OS Flush Behavior

In Linux, data written to the filesystem is maintained in pagecache until it must be written out to disk (due to an application-level fsync or the OS’s own flush policy). The flushing of data is done by a set of background threads called pdflush (or in post 2.6.32 kernels “flusher threads”).

Pdflush has a configurable policy that controls how much dirty data can be maintained in cache and for how long before it must be written back to disk. This policy is described here. When Pdflush cannot keep up with the rate of data being written, it will eventually cause the writing process to block, incurring latency in the writes and slowing the accumulation of data.

You can see the current state of OS memory usage by doing

$ cat /proc/meminfo

The meaning of these values is described in the link above.

Using pagecache has several advantages over an in-process cache for storing data that will be written out to disk:

  • The I/O scheduler will batch together consecutive small writes into bigger physical writes which improves throughput.
  • The I/O scheduler will attempt to re-sequence writes to minimize movement of the disk head which improves throughput.
  • It automatically uses all the free memory on the machine.

Filesystem Selection

Kafka uses regular files on disk, and as such it has no hard dependency on a specific filesystem. The two filesystems which have the most usage, however, are EXT4 and XFS. Historically, EXT4 has had more usage, but recent improvements to the XFS filesystem have shown it to have better performance characteristics for Kafka’s workload with no compromise in stability.

Comparison testing was performed on a cluster with significant message loads, using a variety of filesystem creation and mount options. The primary metric in Kafka that was monitored was the “Request Local Time”, indicating the amount of time append operations were taking. XFS resulted in much better local times (160ms vs. 250ms+ for the best EXT4 configuration), as well as lower average wait times. The XFS performance also showed less variability in disk performance.

General Filesystem Notes

For any filesystem used for data directories, on Linux systems, the following options are recommended to be used at mount time:

  • noatime: This option disables updating of a file’s atime (last access time) attribute when the file is read. This can eliminate a significant number of filesystem writes, especially in the case of bootstrapping consumers. Kafka does not rely on the atime attributes at all, so it is safe to disable this.
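
For example, a hypothetical /etc/fstab entry for a dedicated XFS data volume mounted with noatime (the device and mount point are placeholders):

/dev/sdb1  /var/lib/kafka-data  xfs  defaults,noatime  0 0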

XFS Notes

The XFS filesystem has a significant amount of auto-tuning in place, so it does not require any change in the default settings, either at filesystem creation time or at mount. The only tuning parameters worth considering are:

  • largeio: This affects the preferred I/O size reported by the stat call. While this can allow for higher performance on larger disk writes, in practice it had minimal or no effect on performance.
  • nobarrier: For underlying devices that have battery-backed cache, this option can provide a little more performance by disabling periodic write flushes. However, if the underlying device is well-behaved, it will report to the filesystem that it does not require flushes, and this option will have no effect.

EXT4 Notes

EXT4 is a serviceable choice of filesystem for the Kafka data directories, however getting the most performance out of it will require adjusting several mount options. In addition, these options are generally unsafe in a failure scenario, and will result in much more data loss and corruption. For a single broker failure, this is not much of a concern as the disk can be wiped and the replicas rebuilt from the cluster. In a multiple-failure scenario, such as a power outage, this can mean underlying filesystem (and therefore data) corruption that is not easily recoverable. The following options can be adjusted:

  • data=writeback: Ext4 defaults to data=ordered which puts a strong order on some writes. Kafka does not require this ordering as it does very paranoid data recovery on all unflushed log segments. This setting removes the ordering constraint and seems to significantly reduce latency.
  • Disabling journaling: Journaling is a tradeoff: it makes reboots faster after server crashes but it introduces a great deal of additional locking which adds variance to write performance. Those who don’t care about reboot time and want to reduce a major source of write latency spikes can turn off journaling entirely.
  • commit=num_secs: This tunes the frequency with which ext4 commits to its metadata journal. Setting this to a lower value reduces the loss of unflushed data during a crash. Setting this to a higher value will improve throughput.
  • nobh: This setting controls additional ordering guarantees when using data=writeback mode. This should be safe with Kafka as we do not depend on write ordering and improves throughput and latency.
  • delalloc: Delayed allocation means that the filesystem avoids allocating any blocks until the physical write occurs. This allows ext4 to allocate a large extent instead of smaller pages and helps ensure the data is written sequentially. This feature is great for throughput. It does seem to involve some locking in the filesystem which adds a bit of latency variance.
  • fast_commit: Added in Linux 5.10, fast_commit is a lighter-weight journaling method which can be used with data=ordered journaling mode. Enabling it seems to significantly reduce latency.

Replace KRaft Controller Disk

When Kafka is configured to use KRaft, the controllers store the cluster metadata in the directory specified in metadata.log.dir -- or the first log directory, if metadata.log.dir is not configured. See the documentation for metadata.log.dir for details.

If the data in the cluster metadata directory is lost either because of hardware failure or the hardware needs to be replaced, care should be taken when provisioning the new controller node. The new controller node should not be formatted and started until the majority of the controllers have all of the committed data. To determine if the majority of the controllers have the committed data, run the kafka-metadata-quorum.sh tool to describe the replication status:

$ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication
NodeId	DirectoryId           	LogEndOffset	Lag	LastFetchTimestamp	LastCaughtUpTimestamp	Status
1     	dDo1k_pRSD-VmReEpu383g	966         	0  	1732367153528     	1732367153528        	Leader
2     	wQWaQMJYpcifUPMBGeRHqg	966         	0  	1732367153304     	1732367153304        	Observer
...     ...             ...     ...                     ...                     ...

Check and wait until the Lag is small for a majority of the controllers. If the leader’s end offset is not increasing, you can wait until the lag is 0 for a majority; otherwise, you can pick the latest leader end offset and wait until all replicas have reached it. Check and wait until the LastFetchTimestamp and LastCaughtUpTimestamp are close to each other for the majority of the controllers. At this point it is safer to format the controller’s metadata log directory. This can be done by running the kafka-storage.sh command.

$ bin/kafka-storage.sh format --cluster-id uuid --config config/server.properties

It is possible for the bin/kafka-storage.sh format command above to fail with a message like Log directory ... is already formatted. This can happen when combined mode is used and only the metadata log directory was lost but not the others. In that case, and only in that case, you can run the bin/kafka-storage.sh format command with the --ignore-formatted option.

Start the KRaft controller after formatting the log directories.

$ bin/kafka-server-start.sh config/server.properties

6.7 - Monitoring

Kafka uses Yammer Metrics for metrics reporting in the server. The Java clients use Kafka Metrics, a built-in metrics registry that minimizes transitive dependencies pulled into client applications. Both expose metrics via JMX and can be configured to report stats using pluggable stats reporters to hook up to your monitoring system.

All Kafka rate metrics have a corresponding cumulative count metric with suffix -total. For example, records-consumed-rate has a corresponding metric named records-consumed-total.

The easiest way to see the available metrics is to fire up jconsole and point it at a running kafka client or server; this will allow browsing all metrics with JMX.

Security Considerations for Remote Monitoring using JMX

Apache Kafka disables remote JMX by default. You can enable remote monitoring using JMX by setting the environment variable JMX_PORT for processes started using the CLI or standard Java system properties to enable remote JMX programmatically. You must enable security when enabling remote JMX in production scenarios to ensure that unauthorized users cannot monitor or control your broker or application as well as the platform on which these are running. Note that authentication is disabled for JMX by default in Kafka and security configs must be overridden for production deployments by setting the environment variable KAFKA_JMX_OPTS for processes started using the CLI or by setting appropriate Java system properties. See Monitoring and Management Using JMX Technology for details on securing JMX.
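
For example, a sketch of starting a broker with remote JMX enabled on port 9999 for a quick, non-production test (secure JMX before doing anything similar in production):

$ JMX_PORT=9999 bin/kafka-server-start.sh config/server.properties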

We do graphing and alerting on the following metrics:

Description | Mbean name | Normal value
---|---|---
Message in rate | kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=([-.\w]+) | Incoming message rate per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Byte in rate from clients | kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=([-.\w]+) | Byte in (from the clients) rate per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Byte in rate from other brokers | kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesInPerSec | Byte in (from the other brokers) rate across all topics.
Controller Request rate from Broker | kafka.controller:type=ControllerChannelManager,name=RequestRateAndQueueTimeMs,brokerId=([0-9]+) | The rate (requests per second) at which the ControllerChannelManager takes requests from the queue of the given broker, and the time a request stays in this queue before it is taken from the queue.
Controller Event queue size | kafka.controller:type=ControllerEventManager,name=EventQueueSize | Size of the ControllerEventManager’s queue.
Controller Event queue time | kafka.controller:type=ControllerEventManager,name=EventQueueTimeMs | Time that any event (except the Idle event) waits in the ControllerEventManager’s queue before being processed
Request rate | kafka.network:type=RequestMetrics,name=RequestsPerSec,request={Produce|FetchConsumer|FetchFollower},version=([0-9]+) |
Error rate | kafka.network:type=RequestMetrics,name=ErrorsPerSec,request=([-.\w]+),error=([-.\w]+) | Number of errors in responses counted per-request-type, per-error-code. If a response contains multiple errors, all are counted. error=NONE indicates successful responses.
Produce request rate | kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec,topic=([-.\w]+) | Produce request rate per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Fetch request rate | kafka.server:type=BrokerTopicMetrics,name=TotalFetchRequestsPerSec,topic=([-.\w]+) | Fetch request (from clients or followers) rate per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Failed produce request rate | kafka.server:type=BrokerTopicMetrics,name=FailedProduceRequestsPerSec,topic=([-.\w]+) | Failed Produce request rate per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Failed fetch request rate | kafka.server:type=BrokerTopicMetrics,name=FailedFetchRequestsPerSec,topic=([-.\w]+) | Failed Fetch request (from clients or followers) rate per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Request size in bytes | kafka.network:type=RequestMetrics,name=RequestBytes,request=([-.\w]+) | Size of requests for each request type.
Temporary memory size in bytes | kafka.network:type=RequestMetrics,name=TemporaryMemoryBytes,request={Produce|Fetch} | Temporary memory used for message format conversions and decompression.
Message conversion time | kafka.network:type=RequestMetrics,name=MessageConversionsTimeMs,request={Produce|Fetch} | Time in milliseconds spent on message format conversions.
Message conversion rate | kafka.server:type=BrokerTopicMetrics,name={Produce|Fetch}MessageConversionsPerSec,topic=([-.\w]+) | Message format conversion rate, for Produce or Fetch requests, per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Request Queue Size | kafka.network:type=RequestChannel,name=RequestQueueSize | Size of the request queue.
Byte out rate to clients | kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec,topic=([-.\w]+) | Byte out (to the clients) rate per topic. Omitting ’topic=(…)’ will yield the all-topic rate.
Byte out rate to other brokers | kafka.server:type=BrokerTopicMetrics,name=ReplicationBytesOutPerSec | Byte out (to the other brokers) rate across all topics
Rejected byte rate | kafka.server:type=BrokerTopicMetrics,name=BytesRejectedPerSec,topic=([-.\w]+) | Rejected byte rate per topic, due to the record batch size being greater than the max.message.bytes configuration. Omitting ’topic=(…)’ will yield the all-topic rate.
Message validation failure rate due to no key specified for compacted topic | kafka.server:type=BrokerTopicMetrics,name=NoKeyCompactedTopicRecordsPerSec | 0
Message validation failure rate due to invalid magic number | kafka.server:type=BrokerTopicMetrics,name=InvalidMagicNumberRecordsPerSec | 0
Message validation failure rate due to incorrect crc checksum | kafka.server:type=BrokerTopicMetrics,name=InvalidMessageCrcRecordsPerSec | 0
Message validation failure rate due to non-continuous offset or sequence number in batch | kafka.server:type=BrokerTopicMetrics,name=InvalidOffsetOrSequenceRecordsPerSec | 0
Log flush rate and time | kafka.log:type=LogFlushStats,name=LogFlushRateAndTimeMs |
Number of offline log directories | kafka.log:type=LogManager,name=OfflineLogDirectoryCount | 0
Leader election rate | kafka.controller:type=ControllerStats,name=LeaderElectionRateAndTimeMs | non-zero when there are broker failures
Unclean leader election rate | kafka.controller:type=ControllerStats,name=UncleanLeaderElectionsPerSec | 0
Is controller active on broker | kafka.controller:type=KafkaController,name=ActiveControllerCount | only one broker in the cluster should have 1
Pending topic deletes | kafka.controller:type=KafkaController,name=TopicsToDeleteCount |
Pending replica deletes | kafka.controller:type=KafkaController,name=ReplicasToDeleteCount |
Ineligible pending topic deletes | kafka.controller:type=KafkaController,name=TopicsIneligibleToDeleteCount |
Ineligible pending replica deletes | kafka.controller:type=KafkaController,name=ReplicasIneligibleToDeleteCount |
Number of under replicated partitions (|ISR| < |all replicas|) | kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions | 0
Number of under minIsr partitions (|ISR| < min.insync.replicas) | kafka.server:type=ReplicaManager,name=UnderMinIsrPartitionCount | 0
Number of at minIsr partitions (|ISR| = min.insync.replicas) | kafka.server:type=ReplicaManager,name=AtMinIsrPartitionCount | 0
Producer Id counts | kafka.server:type=ReplicaManager,name=ProducerIdCount | Count of all producer ids created by transactional and idempotent producers in each replica on the broker
Partition counts | kafka.server:type=ReplicaManager,name=PartitionCount | mostly even across brokers
Offline Replica counts | kafka.server:type=ReplicaManager,name=OfflineReplicaCount | 0
Leader replica counts | kafka.server:type=ReplicaManager,name=LeaderCount | mostly even across brokers
ISR shrink rate | kafka.server:type=ReplicaManager,name=IsrShrinksPerSec | If a broker goes down, ISR for some of the partitions will shrink. When that broker is up again, ISR will be expanded once the replicas are fully caught up. Other than that, the expected value for both ISR shrink rate and expansion rate is 0.
ISR expansion rate | kafka.server:type=ReplicaManager,name=IsrExpandsPerSec | See above
Failed ISR update rate | kafka.server:type=ReplicaManager,name=FailedIsrUpdatesPerSec | 0
Max lag in messages between follower and leader replicas | kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica | lag should be proportional to the maximum batch size of a produce request.
Lag in messages per follower replica | kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=([-.\w]+),topic=([-.\w]+),partition=([0-9]+) | lag should be proportional to the maximum batch size of a produce request.
Requests waiting in the producer purgatory | kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Produce | non-zero if ack=-1 is used
Requests waiting in the fetch purgatory | kafka.server:type=DelayedOperationPurgatory,name=PurgatorySize,delayedOperation=Fetch | size depends on fetch.wait.max.ms in the consumer
Request total time | kafka.network:type=RequestMetrics,name=TotalTimeMs,request={Produce|FetchConsumer|FetchFollower} | broken into queue, local, remote and response send time
Time the request waits in the request queue | kafka.network:type=RequestMetrics,name=RequestQueueTimeMs,request={Produce|FetchConsumer|FetchFollower} |
Time the request is processed at the leader | kafka.network:type=RequestMetrics,name=LocalTimeMs,request={Produce|FetchConsumer|FetchFollower} |
Time the request waits for the follower | kafka.network:type=RequestMetrics,name=RemoteTimeMs,request={Produce|FetchConsumer|FetchFollower} | non-zero for produce requests when ack=-1
Time the request waits in the response queue | kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request={Produce|FetchConsumer|FetchFollower} |
Time to send the response | kafka.network:type=RequestMetrics,name=ResponseSendTimeMs,request={Produce|FetchConsumer|FetchFollower} |
Number of messages the consumer lags behind the producer by. Published by the consumer, not broker. | kafka.consumer:type=consumer-fetch-manager-metrics,client-id={client-id} Attribute: records-lag-max |
The average fraction of time the network processors are idle | kafka.network:type=SocketServer,name=NetworkProcessorAvgIdlePercent | between 0 and 1, ideally > 0.3
The number of connections disconnected on a processor due to a client not re-authenticating and then using the connection beyond its expiration time for anything other than re-authentication | kafka.server:type=socket-server-metrics,listener=[SASL_PLAINTEXT|SASL_SSL],networkProcessor=<#>,name=expired-connections-killed-count | ideally 0 when re-authentication is enabled, implying there are no longer any older, pre-2.2.0 clients connecting to this (listener, processor) combination
The total number of connections disconnected, across all processors, due to a client not re-authenticating and then using the connection beyond its expiration time for anything other than re-authentication | kafka.network:type=SocketServer,name=ExpiredConnectionsKilledCount | ideally 0 when re-authentication is enabled, implying there are no longer any older, pre-2.2.0 clients connecting to this broker
The average fraction of time the request handler threads are idle | kafka.server:type=KafkaRequestHandlerPool,name=RequestHandlerAvgIdlePercent | between 0 and 1, ideally > 0.3
Bandwidth quota metrics per (user, client-id), user or client-id | kafka.server:type={Produce|Fetch},user=([-.\w]+),client-id=([-.\w]+) | Two attributes. throttle-time indicates the amount of time in ms the client was throttled. Ideally = 0. byte-rate indicates the data produce/consume rate of the client in bytes/sec. For (user, client-id) quotas, both user and client-id are specified. If per-client-id quota is applied to the client, user is not specified. If per-user quota is applied, client-id is not specified.
Request quota metrics per (user, client-id), user or client-id | kafka.server:type=Request,user=([-.\w]+),client-id=([-.\w]+) | Two attributes. throttle-time indicates the amount of time in ms the client was throttled. Ideally = 0. request-time indicates the percentage of time spent in broker network and I/O threads to process requests from client group. For (user, client-id) quotas, both user and client-id are specified. If per-client-id quota is applied to the client, user is not specified. If per-user quota is applied, client-id is not specified.
Requests exempt from throttling | kafka.server:type=Request | exempt-throttle-time indicates the percentage of time spent in broker network and I/O threads to process requests that are exempt from throttling.
Max time to load group metadata | kafka.server:type=group-coordinator-metrics,name=partition-load-time-max | maximum time, in milliseconds, it took to load offsets and group metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Avg time to load group metadata | kafka.server:type=group-coordinator-metrics,name=partition-load-time-avg | average time, in milliseconds, it took to load offsets and group metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Max time to load transaction metadata | kafka.server:type=transaction-coordinator-metrics,name=partition-load-time-max | maximum time, in milliseconds, it took to load transaction metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Avg time to load transaction metadata | kafka.server:type=transaction-coordinator-metrics,name=partition-load-time-avg | average time, in milliseconds, it took to load transaction metadata from the consumer offset partitions loaded in the last 30 seconds (including time spent waiting for the loading task to be scheduled)
Rate of transactional verification errors | kafka.server:type=AddPartitionsToTxnManager,name=VerificationFailureRate | Rate of verifications that returned in failure either from the AddPartitionsToTxn API response or through errors in the AddPartitionsToTxnManager. In steady state 0, but transient errors are expected during rolls and reassignments of the transactional state partition.
Time to verify a transactional request | kafka.server:type=AddPartitionsToTxnManager,name=VerificationTimeMs | The amount of time queueing while a possible previous request is in-flight plus the round trip to the transaction coordinator to verify (or not verify)
Number of reassigning partitions | kafka.server:type=ReplicaManager,name=ReassigningPartitions | The number of reassigning leader partitions on a broker.
Outgoing byte rate of reassignment traffic | kafka.server:type=BrokerTopicMetrics,name=ReassignmentBytesOutPerSec | 0; non-zero when a partition reassignment is in progress.
Incoming byte rate of reassignment traffic | kafka.server:type=BrokerTopicMetrics,name=ReassignmentBytesInPerSec | 0; non-zero when a partition reassignment is in progress.
Size of a partition on disk (in bytes) | kafka.log:type=Log,name=Size,topic=([-.\w]+),partition=([0-9]+) | The size of a partition on disk, measured in bytes.
Number of log segments in a partition | kafka.log:type=Log,name=NumLogSegments,topic=([-.\w]+),partition=([0-9]+) | The number of log segments in a partition.
First offset in a partition | kafka.log:type=Log,name=LogStartOffset,topic=([-.\w]+),partition=([0-9]+) | The first offset in a partition.
Last offset in a partition | kafka.log:type=Log,name=LogEndOffset,topic=([-.\w]+),partition=([0-9]+) | The last offset in a partition.
Remaining logs to recover | kafka.log:type=LogManager,name=remainingLogsToRecover | The number of remaining logs for each log.dir to be recovered. This metric provides an overview of the recovery progress for a given log directory.
Remaining segments to recover for the current recovery thread | kafka.log:type=LogManager,name=remainingSegmentsToRecover | The number of remaining segments assigned to the currently active recovery thread.
Log directory offline status | kafka.log:type=LogManager,name=LogDirectoryOffline | Indicates if a log directory is offline (1) or online (0).

Group Coordinator Monitoring

The following set of metrics are available for monitoring the group coordinator:

Metric/Attribute name | Mbean name | Description
---|---|---
The Partition Count, per State | kafka.server:type=group-coordinator-metrics,name=partition-count,state={loading|active|failed} | The number of __consumer_offsets partitions hosted by the broker, broken down by state
Partition Maximum Loading Time | kafka.server:type=group-coordinator-metrics,name=partition-load-time-max | The maximum loading time needed to read the state from the __consumer_offsets partitions
Partition Average Loading Time | kafka.server:type=group-coordinator-metrics,name=partition-load-time-avg | The average loading time needed to read the state from the __consumer_offsets partitions
Average Thread Idle Ratio | kafka.server:type=group-coordinator-metrics,name=thread-idle-ratio-avg | The average idle ratio of the coordinator threads
Event Queue Size | kafka.server:type=group-coordinator-metrics,name=event-queue-size | The number of events waiting to be processed in the queue
Event Queue Time (Ms) | kafka.server:type=group-coordinator-metrics,name=event-queue-time-ms-[max|p50|p99|p999] | The time that an event spent waiting in the queue to be processed
Event Processing Time (Ms) | kafka.server:type=group-coordinator-metrics,name=event-processing-time-ms-[max|p50|p99|p999] | The time that an event took to be processed
Event Purgatory Time (Ms) | kafka.server:type=group-coordinator-metrics,name=event-purgatory-time-ms-[max|p50|p99|p999] | The time that an event waited in the purgatory before being completed
Batch Flush Time (Ms) | kafka.server:type=group-coordinator-metrics,name=batch-flush-time-ms-[max|p50|p99|p999] | The time that a batch took to be flushed to the local partition
Group Count, per group type | kafka.server:type=group-coordinator-metrics,name=group-count,protocol={consumer|classic} | Total number of groups per group type: Classic or Consumer
Consumer Group Count, per state | kafka.server:type=group-coordinator-metrics,name=consumer-group-count,state=[empty|assigning|reconciling|stable|dead] | Total number of Consumer Groups in each state: Empty, Assigning, Reconciling, Stable, Dead
Consumer Group Rebalance Rate | kafka.server:type=group-coordinator-metrics,name=consumer-group-rebalance-rate | The rebalance rate of consumer groups
Consumer Group Rebalance Count | kafka.server:type=group-coordinator-metrics,name=consumer-group-rebalance-count | Total number of Consumer Group Rebalances
Classic Group Count | kafka.server:type=GroupMetadataManager,name=NumGroups | Total number of Classic Groups
Classic Group Count, per State | kafka.server:type=GroupMetadataManager,name=NumGroups[PreparingRebalance,CompletingRebalance,Empty,Stable,Dead] | The number of Classic Groups in each state: PreparingRebalance, CompletingRebalance, Empty, Stable, Dead
Classic Group Completed Rebalance Rate | kafka.server:type=group-coordinator-metrics,name=group-completed-rebalance-rate | The rate of classic group completed rebalances
Classic Group Completed Rebalance Count | kafka.server:type=group-coordinator-metrics,name=group-completed-rebalance-count | The total number of classic group completed rebalances
Group Offset Count | kafka.server:type=GroupMetadataManager,name=NumOffsets | Total number of committed offsets for Classic and Consumer Groups
Offset Commit Rate | kafka.server:type=group-coordinator-metrics,name=offset-commit-rate | The rate of committed offsets
Offset Commit Count | kafka.server:type=group-coordinator-metrics,name=offset-commit-count | The total number of committed offsets
Offset Expiration Rate | kafka.server:type=group-coordinator-metrics,name=offset-expiration-rate | The rate of expired offsets
Offset Expiration Count | kafka.server:type=group-coordinator-metrics,name=offset-expiration-count | The total number of expired offsets
Offset Deletion Rate | kafka.server:type=group-coordinator-metrics,name=offset-deletion-rate | The rate of administratively deleted offsets
Offset Deletion Count | kafka.server:type=group-coordinator-metrics,name=offset-deletion-count | The total number of administratively deleted offsets

Tiered Storage Monitoring

The following set of metrics are available for monitoring of the tiered storage feature:

Metric/Attribute name | Description | Mbean name
---|---|---
Remote Fetch Bytes Per Sec | Rate of bytes read from remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteFetchBytesPerSec,topic=([-.\w]+)
Remote Fetch Requests Per Sec | Rate of read requests from remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteFetchRequestsPerSec,topic=([-.\w]+)
Remote Fetch Errors Per Sec | Rate of read errors from remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteFetchErrorsPerSec,topic=([-.\w]+)
Remote Copy Bytes Per Sec | Rate of bytes copied to remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteCopyBytesPerSec,topic=([-.\w]+)
Remote Copy Requests Per Sec | Rate of write requests to remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteCopyRequestsPerSec,topic=([-.\w]+)
Remote Copy Errors Per Sec | Rate of write errors from remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteCopyErrorsPerSec,topic=([-.\w]+)
Remote Copy Lag Bytes | Bytes which are eligible for tiering, but are not in remote storage yet. Omitting ’topic=(…)’ will yield the all-topic sum | kafka.server:type=BrokerTopicMetrics,name=RemoteCopyLagBytes,topic=([-.\w]+)
Remote Copy Lag Segments | Segments which are eligible for tiering, but are not in remote storage yet. Omitting ’topic=(…)’ will yield the all-topic count | kafka.server:type=BrokerTopicMetrics,name=RemoteCopyLagSegments,topic=([-.\w]+)
Remote Delete Requests Per Sec | Rate of delete requests to remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteRequestsPerSec,topic=([-.\w]+)
Remote Delete Errors Per Sec | Rate of delete errors from remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteErrorsPerSec,topic=([-.\w]+)
Remote Delete Lag Bytes | Tiered bytes which are eligible for deletion, but have not been deleted yet. Omitting ’topic=(…)’ will yield the all-topic sum | kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteLagBytes,topic=([-.\w]+)
Remote Delete Lag Segments | Tiered segments which are eligible for deletion, but have not been deleted yet. Omitting ’topic=(…)’ will yield the all-topic count | kafka.server:type=BrokerTopicMetrics,name=RemoteDeleteLagSegments,topic=([-.\w]+)
Build Remote Log Aux State Requests Per Sec | Rate of requests for rebuilding the auxiliary state from remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=BuildRemoteLogAuxStateRequestsPerSec,topic=([-.\w]+)
Build Remote Log Aux State Errors Per Sec | Rate of errors for rebuilding the auxiliary state from remote storage per topic. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=BrokerTopicMetrics,name=BuildRemoteLogAuxStateErrorsPerSec,topic=([-.\w]+)
Remote Log Size Computation Time | The amount of time needed to compute the size of the remote log. Omitting ’topic=(…)’ will yield the all-topic time | kafka.server:type=BrokerTopicMetrics,name=RemoteLogSizeComputationTime,topic=([-.\w]+)
Remote Log Size Bytes | The total size of a remote log in bytes. Omitting ’topic=(…)’ will yield the all-topic sum | kafka.server:type=BrokerTopicMetrics,name=RemoteLogSizeBytes,topic=([-.\w]+)
Remote Log Metadata Count | The total number of metadata entries for remote storage. Omitting ’topic=(…)’ will yield the all-topic count | kafka.server:type=BrokerTopicMetrics,name=RemoteLogMetadataCount,topic=([-.\w]+)
Delayed Remote Fetch Expires Per Sec | The number of expired remote fetches per second. Omitting ’topic=(…)’ will yield the all-topic rate | kafka.server:type=DelayedRemoteFetchMetrics,name=ExpiresPerSec,topic=([-.\w]+)
RemoteLogReader Task Queue Size | Size of the queue holding remote storage read tasks | org.apache.kafka.storage.internals.log:type=RemoteStorageThreadPool,name=RemoteLogReaderTaskQueueSize
RemoteLogReader Avg Idle Percent | Average idle percent of thread pool for processing remote storage read tasks | org.apache.kafka.storage.internals.log:type=RemoteStorageThreadPool,name=RemoteLogReaderAvgIdlePercent
RemoteLogManager Tasks Avg Idle Percent | Average idle percent of thread pool for copying data to remote storage | kafka.log.remote:type=RemoteLogManager,name=RemoteLogManagerTasksAvgIdlePercent
RemoteLogManager Avg Broker Fetch Throttle Time | The average time in millis remote fetches were throttled by a broker | kafka.server:type=RemoteLogManager, name=remote-fetch-throttle-time-avg
RemoteLogManager Max Broker Fetch Throttle Time | The max time in millis remote fetches were throttled by a broker | kafka.server:type=RemoteLogManager, name=remote-fetch-throttle-time-max
RemoteLogManager Avg Broker Copy Throttle Time | The average time in millis remote copies were throttled by a broker | kafka.server:type=RemoteLogManager, name=remote-copy-throttle-time-avg
RemoteLogManager Max Broker Copy Throttle Time | The max time in millis remote copies were throttled by a broker | kafka.server:type=RemoteLogManager, name=remote-copy-throttle-time-max

KRaft Monitoring Metrics

The set of metrics that allow monitoring of the KRaft quorum and the metadata log.
Note that some exposed metrics depend on the role of the node as defined by process.roles

KRaft Quorum Monitoring Metrics

These metrics are reported on both Controllers and Brokers in a KRaft Cluster.

Metric/Attribute name | Description | Mbean name
---|---|---
Current State | The current state of this member; possible values are leader, candidate, voted, follower, unattached, observer. | kafka.server:type=raft-metrics
Current Leader | The current quorum leader’s id; -1 indicates unknown. | kafka.server:type=raft-metrics
Current Voted | The current voted leader’s id; -1 indicates not voted for anyone. | kafka.server:type=raft-metrics
Current Epoch | The current quorum epoch. | kafka.server:type=raft-metrics
High Watermark | The high watermark maintained on this member; -1 if it is unknown. | kafka.server:type=raft-metrics
Log End Offset | The current raft log end offset. | kafka.server:type=raft-metrics
Number of Unknown Voter Connections | Number of unknown voters whose connection information is not cached. The value of this metric is always 0. | kafka.server:type=raft-metrics
Average Commit Latency | The average time in milliseconds to commit an entry in the raft log. | kafka.server:type=raft-metrics
Maximum Commit Latency | The maximum time in milliseconds to commit an entry in the raft log. | kafka.server:type=raft-metrics
Average Election Latency | The average time in milliseconds spent on electing a new leader. | kafka.server:type=raft-metrics
Maximum Election Latency | The maximum time in milliseconds spent on electing a new leader. | kafka.server:type=raft-metrics
Fetch Records Rate | The average number of records fetched from the leader of the raft quorum. | kafka.server:type=raft-metrics
Append Records Rate | The average number of records appended per sec by the leader of the raft quorum. | kafka.server:type=raft-metrics
Average Poll Idle Ratio | The average fraction of time the client’s poll() is idle as opposed to waiting for the user code to process records. | kafka.server:type=raft-metrics
Current Metadata Version | Outputs the feature level of the current effective metadata version. | kafka.server:type=MetadataLoader,name=CurrentMetadataVersion
Metadata Snapshot Load Count | The total number of times we have loaded a KRaft snapshot since the process was started. | kafka.server:type=MetadataLoader,name=HandleLoadSnapshotCount
Latest Metadata Snapshot Size | The total size in bytes of the latest snapshot that the node has generated. If none have been generated yet, this is the size of the latest snapshot that was loaded. If no snapshots have been generated or loaded, this is 0. | kafka.server:type=SnapshotEmitter,name=LatestSnapshotGeneratedBytes
Latest Metadata Snapshot Age | The interval in milliseconds since the latest snapshot that the node has generated. If none have been generated yet, this is approximately the time delta since the process was started. | kafka.server:type=SnapshotEmitter,name=LatestSnapshotGeneratedAgeMs

KRaft Controller Monitoring Metrics

Metric/Attribute name | Description | Mbean name
---|---|---
Active Controller Count | The number of Active Controllers on this node. Valid values are ‘0’ or ‘1’. | kafka.controller:type=KafkaController,name=ActiveControllerCount
Event Queue Time Ms | A Histogram of the time in milliseconds that requests spent waiting in the Controller Event Queue. | kafka.controller:type=ControllerEventManager,name=EventQueueTimeMs
Event Queue Processing Time Ms | A Histogram of the time in milliseconds that requests spent being processed in the Controller Event Queue. | kafka.controller:type=ControllerEventManager,name=EventQueueProcessingTimeMs
Fenced Broker Count | The number of fenced brokers as observed by this Controller. | kafka.controller:type=KafkaController,name=FencedBrokerCount
Active Broker Count | The number of active brokers as observed by this Controller. | kafka.controller:type=KafkaController,name=ActiveBrokerCount
Global Topic Count | The number of global topics as observed by this Controller. | kafka.controller:type=KafkaController,name=GlobalTopicCount
Global Partition Count | The number of global partitions as observed by this Controller. | kafka.controller:type=KafkaController,name=GlobalPartitionCount
Offline Partition Count | The number of offline topic partitions (non-internal) as observed by this Controller. | kafka.controller:type=KafkaController,name=OfflinePartitionsCount
Preferred Replica Imbalance Count | The count of topic partitions for which the leader is not the preferred leader. | kafka.controller:type=KafkaController,name=PreferredReplicaImbalanceCount
Metadata Error Count | The number of times this controller node has encountered an error during metadata log processing. | kafka.controller:type=KafkaController,name=MetadataErrorCount
Last Applied Record Offset | The offset of the last record from the cluster metadata partition that was applied by the Controller. | kafka.controller:type=KafkaController,name=LastAppliedRecordOffset
Last Committed Record Offset | The offset of the last record committed to this Controller. | kafka.controller:type=KafkaController,name=LastCommittedRecordOffset
Last Applied Record Timestamp | The timestamp of the last record from the cluster metadata partition that was applied by the Controller. | kafka.controller:type=KafkaController,name=LastAppliedRecordTimestamp
Last Applied Record Lag Ms | The difference between now and the timestamp of the last record from the cluster metadata partition that was applied by the controller. For active Controllers the value of this lag is always zero. | kafka.controller:type=KafkaController,name=LastAppliedRecordLagMs
Timed-out Broker Heartbeat Count | The number of broker heartbeats that timed out on this controller since the process was started. Note that only active controllers handle heartbeats, so only they will see increases in this metric. | kafka.controller:type=KafkaController,name=TimedOutBrokerHeartbeatCount
Number Of Operations Started In Event Queue | The total number of controller event queue operations that were started. This includes deferred operations. | kafka.controller:type=KafkaController,name=EventQueueOperationsStartedCount
Number of Operations Timed Out In Event Queue | The total number of controller event queue operations that timed out before they could be performed. | kafka.controller:type=KafkaController,name=EventQueueOperationsTimedOutCount
Number Of New Controller Elections | Counts the number of times this node has seen a new controller elected. A transition to the “no leader” state is not counted here. If the same controller as before becomes active, that still counts. | kafka.controller:type=KafkaController,name=NewActiveControllersCount

KRaft Broker Monitoring Metrics

Metric/Attribute name | Description | Mbean name
---|---|---
Last Applied Record Offset | The offset of the last record from the cluster metadata partition that was applied by the broker. | kafka.server:type=broker-metadata-metrics
Last Applied Record Timestamp | The timestamp of the last record from the cluster metadata partition that was applied by the broker. | kafka.server:type=broker-metadata-metrics
Last Applied Record Lag Ms | The difference between now and the timestamp of the last record from the cluster metadata partition that was applied by the broker. | kafka.server:type=broker-metadata-metrics
Metadata Load Error Count | The number of errors encountered by the BrokerMetadataListener while loading the metadata log and generating a new MetadataDelta based on it. | kafka.server:type=broker-metadata-metrics
Metadata Apply Error Count | The number of errors encountered by the BrokerMetadataPublisher while applying a new MetadataImage based on the latest MetadataDelta. | kafka.server:type=broker-metadata-metrics

Common monitoring metrics for producer/consumer/connect/streams

The following metrics are available on producer/consumer/connector/streams instances. For specific metrics, please see the following sections.

Metric/Attribute name | Description | Mbean name
---|---|---
connection-close-rate | Connections closed per second in the window. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-close-total | Total connections closed in the window. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-creation-rate | New connections established per second in the window. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-creation-total | Total new connections established in the window. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
network-io-rate | The average number of network operations (reads or writes) on all connections per second. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
network-io-total | The total number of network operations (reads or writes) on all connections. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
outgoing-byte-rate | The average number of outgoing bytes sent per second to all servers. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
outgoing-byte-total | The total number of outgoing bytes sent to all servers. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-rate | The average number of requests sent per second. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-total | The total number of requests sent. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-size-avg | The average size of all requests in the window. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
request-size-max | The maximum size of any request sent in the window. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
incoming-byte-rate | Bytes/second read off all sockets. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
incoming-byte-total | Total bytes read off all sockets. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
response-rate | Responses received per second. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
response-total | Total responses received. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
select-rate | Number of times the I/O layer checked for new I/O to perform per second. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
select-total | Total number of times the I/O layer checked for new I/O to perform. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-wait-time-ns-avg | The average length of time the I/O thread spent waiting for a socket ready for reads or writes in nanoseconds. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-wait-time-ns-total | The total time the I/O thread spent waiting in nanoseconds. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-wait-ratio | The fraction of time the I/O thread spent waiting. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-time-ns-avg | The average length of time for I/O per select call in nanoseconds. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-time-ns-total | The total time the I/O thread spent doing I/O in nanoseconds. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
io-ratio | The fraction of time the I/O thread spent doing I/O. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
connection-count | The current number of active connections. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-authentication-rate | Connections per second that were successfully authenticated using SASL or SSL. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-authentication-total | Total connections that were successfully authenticated using SASL or SSL. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-authentication-rate | Connections per second that failed authentication. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-authentication-total | Total connections that failed authentication. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-reauthentication-rate | Connections per second that were successfully re-authenticated using SASL. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-reauthentication-total | Total connections that were successfully re-authenticated using SASL. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
reauthentication-latency-max | The maximum latency in ms observed due to re-authentication. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
reauthentication-latency-avg | The average latency in ms observed due to re-authentication. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-reauthentication-rate | Connections per second that failed re-authentication. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
failed-reauthentication-total | Total connections that failed re-authentication. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)
successful-authentication-no-reauth-total | Total connections that were successfully authenticated by older, pre-2.2.0 SASL clients that do not support re-authentication. May only be non-zero. | kafka.[producer|consumer|connect]:type=[producer|consumer|connect]-metrics,client-id=([-.\w]+)

Common Per-broker metrics for producer/consumer/connect/streams

The following metrics are available on producer/consumer/connector/streams instances. For specific metrics, please see the following sections.

Metric/Attribute name | Description | Mbean name
---|---|---
outgoing-byte-rate | The average number of outgoing bytes sent per second for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
outgoing-byte-total | The total number of outgoing bytes sent for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-rate | The average number of requests sent per second for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-total | The total number of requests sent for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-size-avg | The average size of all requests in the window for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-size-max | The maximum size of any request sent in the window for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
incoming-byte-rate | The average number of bytes received per second for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
incoming-byte-total | The total number of bytes received for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-latency-avg | The average request latency in ms for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
request-latency-max | The maximum request latency in ms for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
response-rate | Responses received per second for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)
response-total | Total responses received for a node. | kafka.[producer|consumer|connect]:type=[consumer|producer|connect]-node-metrics,client-id=([-.\w]+),node-id=([0-9]+)

Producer monitoring

The following metrics are available on producer instances.

Metric/Attribute name | Description | Mbean name
---|---|---
waiting-threads | The number of user threads blocked waiting for buffer memory to enqueue their records. | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
buffer-total-bytes | The maximum amount of buffer memory the client can use (whether or not it is currently used). | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
buffer-available-bytes | The total amount of buffer memory that is not being used (either unallocated or in the free list). | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
buffer-exhausted-rate | The average per-second number of record sends that are dropped due to buffer exhaustion | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
buffer-exhausted-total | The total number of record sends that are dropped due to buffer exhaustion | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
bufferpool-wait-time | The fraction of time an appender waits for space allocation. | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
bufferpool-wait-ratio | The fraction of time an appender waits for space allocation. | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
bufferpool-wait-time-ns-total | The total time an appender waits for space allocation in nanoseconds. | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
flush-time-ns-total | The total time the Producer spent in Producer.flush in nanoseconds. | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-init-time-ns-total | The total time the Producer spent initializing transactions in nanoseconds (for EOS). | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-begin-time-ns-total | The total time the Producer spent in beginTransaction in nanoseconds (for EOS). | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-send-offsets-time-ns-total | The total time the Producer spent sending offsets to transactions in nanoseconds (for EOS). | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-commit-time-ns-total | The total time the Producer spent committing transactions in nanoseconds (for EOS). | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
txn-abort-time-ns-total | The total time the Producer spent aborting transactions in nanoseconds (for EOS). | kafka.producer:type=producer-metrics,client-id=([-.\w]+)
metadata-wait-time-ns-total | The total time in nanoseconds spent waiting for metadata from the Kafka broker | kafka.producer:type=producer-metrics,client-id=([-.\w]+)

Producer Sender Metrics

Metric/Attribute name | Description | Mbean name
---|---|---
batch-size-avg | The average number of bytes sent per partition per-request. | kafka.producer:type=producer-metrics,client-id="{client-id}"
batch-size-max | The max number of bytes sent per partition per-request. | kafka.producer:type=producer-metrics,client-id="{client-id}"
batch-split-rate | The average number of batch splits per second | kafka.producer:type=producer-metrics,client-id="{client-id}"
batch-split-total | The total number of batch splits | kafka.producer:type=producer-metrics,client-id="{client-id}"
compression-rate-avg | The average compression rate of record batches, defined as the average ratio of the compressed batch size over the uncompressed size. | kafka.producer:type=producer-metrics,client-id="{client-id}"
metadata-age | The age in seconds of the current producer metadata being used. | kafka.producer:type=producer-metrics,client-id="{client-id}"
produce-throttle-time-avg | The average time in ms a request was throttled by a broker | kafka.producer:type=producer-metrics,client-id="{client-id}"
produce-throttle-time-max | The maximum time in ms a request was throttled by a broker | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-error-rate | The average per-second number of record sends that resulted in errors | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-error-total | The total number of record sends that resulted in errors | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-queue-time-avg | The average time in ms record batches spent in the send buffer. | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-queue-time-max | The maximum time in ms record batches spent in the send buffer. | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-retry-rate | The average per-second number of retried record sends | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-retry-total | The total number of retried record sends | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-send-rate | The average number of records sent per second. | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-send-total | The total number of records sent. | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-size-avg | The average record size | kafka.producer:type=producer-metrics,client-id="{client-id}"
record-size-max | The maximum record size | kafka.producer:type=producer-metrics,client-id="{client-id}"
records-per-request-avg | The average number of records per request. | kafka.producer:type=producer-metrics,client-id="{client-id}"
request-latency-avg | The average request latency in ms | kafka.producer:type=producer-metrics,client-id="{client-id}"
request-latency-max | The maximum request latency in ms | kafka.producer:type=producer-metrics,client-id="{client-id}"
requests-in-flight | The current number of in-flight requests awaiting a response. | kafka.producer:type=producer-metrics,client-id="{client-id}"
byte-rate | The average number of bytes sent per second for a topic. | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
byte-total | The total number of bytes sent for a topic. | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
compression-rate | The average compression rate of record batches for a topic, defined as the average ratio of the compressed batch size over the uncompressed size. | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
record-error-rate | The average per-second number of record sends that resulted in errors for a topic | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
record-error-total | The total number of record sends that resulted in errors for a topic | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
record-retry-rate | The average per-second number of retried record sends for a topic | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
record-retry-total | The total number of retried record sends for a topic | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
record-send-rate | The average number of records sent per second for a topic. | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"
record-send-total | The total number of records sent for a topic. | kafka.producer:type=producer-topic-metrics,client-id="{client-id}",topic="{topic}"

Consumer monitoring

The following metrics are available on consumer instances.

Metric/Attribute name | Description | Mbean name
---|---|---
time-between-poll-avg | The average delay between invocations of poll(). | kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
time-between-poll-max | The max delay between invocations of poll(). | kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
last-poll-seconds-ago | The number of seconds since the last poll() invocation. | kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
poll-idle-ratio-avg | The average fraction of time the consumer’s poll() is idle as opposed to waiting for the user code to process records. | kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
committed-time-ns-total | The total time the Consumer spent in committed in nanoseconds. | kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)
commit-sync-time-ns-total | The total time the Consumer spent committing offsets in nanoseconds (for AOS). | kafka.consumer:type=consumer-metrics,client-id=([-.\w]+)

Consumer Group Metrics

Metric/Attribute name | Description | Mbean name
---|---|---
commit-latency-avg | The average time taken for a commit request | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
commit-latency-max | The max time taken for a commit request | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
commit-rate | The number of commit calls per second | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
commit-total | The total number of commit calls | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
assigned-partitions | The number of partitions currently assigned to this consumer | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
heartbeat-response-time-max | The max time taken to receive a response to a heartbeat request | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
heartbeat-rate | The average number of heartbeats per second | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
heartbeat-total | The total number of heartbeats | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
join-time-avg | The average time taken for a group rejoin | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
join-time-max | The max time taken for a group rejoin | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
join-rate | The number of group joins per second | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
join-total | The total number of group joins | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
sync-time-avg | The average time taken for a group sync | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
sync-time-max | The max time taken for a group sync | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
sync-rate | The number of group syncs per second | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
sync-total | The total number of group syncs | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
rebalance-latency-avg | The average time taken for a group rebalance | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
rebalance-latency-max | The max time taken for a group rebalance | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
rebalance-latency-total | The total time taken for group rebalances so far | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
rebalance-total | The total number of group rebalances participated | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
rebalance-rate-per-hour | The number of group rebalances participated in per hour | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
failed-rebalance-total | The total number of failed group rebalances | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
failed-rebalance-rate-per-hour | The number of failed group rebalance events per hour | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
last-rebalance-seconds-ago | The number of seconds since the last rebalance event | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
last-heartbeat-seconds-ago | The number of seconds since the last controller heartbeat | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-revoked-latency-avg | The average time taken by the on-partitions-revoked rebalance listener callback | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-revoked-latency-max | The max time taken by the on-partitions-revoked rebalance listener callback | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-assigned-latency-avg | The average time taken by the on-partitions-assigned rebalance listener callback | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-assigned-latency-max | The max time taken by the on-partitions-assigned rebalance listener callback | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-lost-latency-avg | The average time taken by the on-partitions-lost rebalance listener callback | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)
partitions-lost-latency-max | The max time taken by the on-partitions-lost rebalance listener callback | kafka.consumer:type=consumer-coordinator-metrics,client-id=([-.\w]+)

Consumer Fetch Metrics

Metric/Attribute name | Description | Mbean name
---|---|---
bytes-consumed-rate | The average number of bytes consumed per second | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
bytes-consumed-total | The total number of bytes consumed | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-latency-avg | The average time taken for a fetch request. | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-latency-max | The max time taken for any fetch request. | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-rate | The number of fetch requests per second. | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-size-avg | The average number of bytes fetched per request | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-size-max | The maximum number of bytes fetched per request | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-throttle-time-avg | The average throttle time in ms | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-throttle-time-max | The maximum throttle time in ms | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
fetch-total | The total number of fetch requests. | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
records-consumed-rate | The average number of records consumed per second | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
records-consumed-total | The total number of records consumed | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
records-lag-max | The maximum lag in terms of number of records for any partition in this window. NOTE: This is based on current offset and not committed offset | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
records-lead-min | The minimum lead in terms of number of records for any partition in this window | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
records-per-request-avg | The average number of records in each request | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}"
bytes-consumed-rate | The average number of bytes consumed per second for a topic | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
bytes-consumed-total | The total number of bytes consumed for a topic | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
fetch-size-avg | The average number of bytes fetched per request for a topic | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
fetch-size-max | The maximum number of bytes fetched per request for a topic | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
records-consumed-rate | The average number of records consumed per second for a topic | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
records-consumed-total | The total number of records consumed for a topic | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
records-per-request-avg | The average number of records in each request for a topic | kafka.consumer:type=consumer-fetch-manager-metrics,client-id="{client-id}",topic="{topic}"
preferred-read-replica | The current read replica for the partition, or -1 if reading from leader | kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"
records-lag | The latest lag of the partition | kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"
records-lag-avg | The average lag of the partition | kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"
records-lag-max | The max lag of the partition | kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"
records-lead | The latest lead of the partition | kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"
records-lead-avg | The average lead of the partition | kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"
records-lead-min | The min lead of the partition | kafka.consumer:type=consumer-fetch-manager-metrics,partition="{partition}",topic="{topic}",client-id="{client-id}"

Connect Monitoring

A Connect worker process contains all the producer and consumer metrics as well as metrics specific to Connect. The worker process itself has a number of metrics, while each connector and task has additional metrics.

Metric/Attribute name | Description | Mbean name
---|---|---
connector-count | The number of connectors run in this worker. | kafka.connect:type=connect-worker-metrics
connector-startup-attempts-total | The total number of connector startups that this worker has attempted. | kafka.connect:type=connect-worker-metrics
connector-startup-failure-percentage | The average percentage of this worker's connector starts that failed. | kafka.connect:type=connect-worker-metrics
connector-startup-failure-total | The total number of connector starts that failed. | kafka.connect:type=connect-worker-metrics
connector-startup-success-percentage | The average percentage of this worker's connector starts that succeeded. | kafka.connect:type=connect-worker-metrics
connector-startup-success-total | The total number of connector starts that succeeded. | kafka.connect:type=connect-worker-metrics
task-count | The number of tasks run in this worker. | kafka.connect:type=connect-worker-metrics
task-startup-attempts-total | The total number of task startups that this worker has attempted. | kafka.connect:type=connect-worker-metrics
task-startup-failure-percentage | The average percentage of this worker's task starts that failed. | kafka.connect:type=connect-worker-metrics
task-startup-failure-total | The total number of task starts that failed. | kafka.connect:type=connect-worker-metrics
task-startup-success-percentage | The average percentage of this worker's task starts that succeeded. | kafka.connect:type=connect-worker-metrics
task-startup-success-total | The total number of task starts that succeeded. | kafka.connect:type=connect-worker-metrics
connector-destroyed-task-count | The number of destroyed tasks of the connector on the worker. | kafka.connect:type=connect-worker-metrics,connector="{connector}"
connector-failed-task-count | The number of failed tasks of the connector on the worker. | kafka.connect:type=connect-worker-metrics,connector="{connector}"
connector-paused-task-count | The number of paused tasks of the connector on the worker. | kafka.connect:type=connect-worker-metrics,connector="{connector}"
connector-restarting-task-count | The number of restarting tasks of the connector on the worker. | kafka.connect:type=connect-worker-metrics,connector="{connector}"
connector-running-task-count | The number of running tasks of the connector on the worker. | kafka.connect:type=connect-worker-metrics,connector="{connector}"
connector-total-task-count | The number of tasks of the connector on the worker. | kafka.connect:type=connect-worker-metrics,connector="{connector}"
connector-unassigned-task-count | The number of unassigned tasks of the connector on the worker. | kafka.connect:type=connect-worker-metrics,connector="{connector}"
completed-rebalances-total | The total number of rebalances completed by this worker. | kafka.connect:type=connect-worker-rebalance-metrics
connect-protocol | The Connect protocol used by this cluster | kafka.connect:type=connect-worker-rebalance-metrics
epoch | The epoch or generation number of this worker. | kafka.connect:type=connect-worker-rebalance-metrics
leader-name | The name of the group leader. | kafka.connect:type=connect-worker-rebalance-metrics
rebalance-avg-time-ms | The average time in milliseconds spent by this worker to rebalance. | kafka.connect:type=connect-worker-rebalance-metrics
rebalance-max-time-ms | The maximum time in milliseconds spent by this worker to rebalance. | kafka.connect:type=connect-worker-rebalance-metrics
rebalancing | Whether this worker is currently rebalancing. | kafka.connect:type=connect-worker-rebalance-metrics
time-since-last-rebalance-ms | The time in milliseconds since this worker completed the most recent rebalance. | kafka.connect:type=connect-worker-rebalance-metrics
connector-class | The name of the connector class. | kafka.connect:type=connector-metrics,connector="{connector}"
connector-type | The type of the connector. One of 'source' or 'sink'. | kafka.connect:type=connector-metrics,connector="{connector}"
connector-version | The version of the connector class, as reported by the connector. | kafka.connect:type=connector-metrics,connector="{connector}"
status | The status of the connector. One of 'unassigned', 'running', 'paused', 'stopped', 'failed', or 'restarting'. | kafka.connect:type=connector-metrics,connector="{connector}"
batch-size-avg | The average number of records in the batches the task has processed so far. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
batch-size-max | The number of records in the largest batch the task has processed so far. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
offset-commit-avg-time-ms | The average time in milliseconds taken by this task to commit offsets. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
offset-commit-failure-percentage | The average percentage of this task's offset commit attempts that failed. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
offset-commit-max-time-ms | The maximum time in milliseconds taken by this task to commit offsets. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
offset-commit-success-percentage | The average percentage of this task's offset commit attempts that succeeded. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
pause-ratio | The fraction of time this task has spent in the pause state. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
running-ratio | The fraction of time this task has spent in the running state. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
status | The status of the connector task. One of 'unassigned', 'running', 'paused', 'failed', or 'restarting'. | kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}"
offset-commit-completion-rate | The average per-second number of offset commit completions that were completed successfully. | kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
offset-commit-completion-total | The total number of offset commit completions that were completed successfully. | kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
offset-commit-seq-no | The current sequence number for offset commits. | kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
offset-commit-skip-rate | The average per-second number of offset commit completions that were received too late and skipped/ignored. | kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
offset-commit-skip-total | The total number of offset commit completions that were received too late and skipped/ignored. | kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
partition-countThe number of topic partitions assigned to this task belonging to the named sink connector in this worker.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
put-batch-avg-time-msThe average time in milliseconds taken by this task to put a batch of sink records.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
put-batch-max-time-msThe maximum time in milliseconds taken by this task to put a batch of sink records.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-active-countThe number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-active-count-avgThe average number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-active-count-maxThe maximum number of records that have been read from Kafka but not yet completely committed/flushed/acknowledged by the sink task.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-lag-maxThe maximum lag in terms of number of records that the sink task is behind the consumer's position for any topic partitions.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-read-rateThe average per-second number of records read from Kafka for this task belonging to the named sink connector in this worker. This is before transformations are applied.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-read-totalThe total number of records read from Kafka by this task belonging to the named sink connector in this worker, since the task was last restarted.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-send-rateThe average per-second number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker. This is after transformations are applied and excludes any records filtered out by the transformations.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
sink-record-send-totalThe total number of records output from the transformations and sent/put to this task belonging to the named sink connector in this worker, since the task was last restarted.kafka.connect:type=sink-task-metrics,connector="{connector}",task="{task}"
poll-batch-avg-time-msThe average time in milliseconds taken by this task to poll for a batch of source records.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
poll-batch-max-time-msThe maximum time in milliseconds taken by this task to poll for a batch of source records.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
source-record-active-countThe number of records that have been produced by this task but not yet completely written to Kafka.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
source-record-active-count-avgThe average number of records that have been produced by this task but not yet completely written to Kafka.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
source-record-active-count-maxThe maximum number of records that have been produced by this task but not yet completely written to Kafka.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
source-record-poll-rateThe average per-second number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
source-record-poll-totalThe total number of records produced/polled (before transformation) by this task belonging to the named source connector in this worker.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
source-record-write-rateThe average per-second number of records written to Kafka for this task belonging to the named source connector in this worker, since the task was last restarted. This is after transformations are applied, and excludes any records filtered out by the transformations.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
source-record-write-totalThe total number of records written to Kafka for this task belonging to the named source connector in this worker, since the task was last restarted. This is after transformations are applied, and excludes any records filtered out by the transformations.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
transaction-size-avgThe average number of records in the transactions the task has committed so far.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
transaction-size-maxThe number of records in the largest transaction the task has committed so far.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
transaction-size-minThe number of records in the smallest transaction the task has committed so far.kafka.connect:type=source-task-metrics,connector="{connector}",task="{task}"
deadletterqueue-produce-failuresThe number of failed writes to the dead letter queue.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"
deadletterqueue-produce-requestsThe number of attempted writes to the dead letter queue.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"
last-error-timestampThe epoch timestamp when this task last encountered an error.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"
total-errors-loggedThe number of errors that were logged.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"
total-record-errorsThe number of record processing errors in this task.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"
total-record-failuresThe number of record processing failures in this task.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"
total-records-skippedThe number of records skipped due to errors.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"
total-retriesThe number of operations retried.kafka.connect:type=task-error-metrics,connector="{connector}",task="{task}"

Streams Monitoring

A Kafka Streams instance contains all the producer and consumer metrics as well as additional metrics specific to Streams. The metrics have three recording levels: info, debug, and trace.

Note that the metrics have a 4-layer hierarchy. At the top level there are client-level metrics for each started Kafka Streams client. Each client has stream threads, with their own metrics. Each stream thread has tasks, with their own metrics. Each task has a number of processor nodes, with their own metrics. Each task also has a number of state stores and record caches, all with their own metrics.

Use the following configuration option to specify which metrics you want collected:

metrics.recording.level="info"
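The same setting can also be passed programmatically when constructing the Kafka Streams client. The following minimal sketch assumes a hypothetical passthrough application; the application id, bootstrap servers, and topic names are placeholders, and the only line relevant to metrics collection is the one setting StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class MetricsRecordingLevelExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder application id and bootstrap servers
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "metrics-recording-level-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Equivalent of metrics.recording.level=debug: also collect debug-level metrics
        props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG");

        // Trivial passthrough topology; topic names are placeholders
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}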

Client Metrics

All the following metrics have a recording level of info: Metric/Attribute nameDescriptionMbean name
versionThe version of the Kafka Streams client.kafka.streams:type=stream-metrics,client-id=([-.\w]+)
commit-idThe version control commit ID of the Kafka Streams client.kafka.streams:type=stream-metrics,client-id=([-.\w]+)
application-idThe application ID of the Kafka Streams client.kafka.streams:type=stream-metrics,client-id=([-.\w]+)
topology-descriptionThe description of the topology executed in the Kafka Streams client.kafka.streams:type=stream-metrics,client-id=([-.\w]+)
stateThe state of the Kafka Streams client as a string.kafka.streams:type=stream-metrics,client-id=([-.\w]+)
client-stateThe state of the Kafka Streams client as a number (ordinal() of the corresponding enum).kafka.streams:type=stream-metrics,client-id=([-.\w]+),process-id=([-.\w]+)
alive-stream-threadsThe current number of alive stream threads that are running or participating in rebalance.kafka.streams:type=stream-metrics,client-id=([-.\w]+)
failed-stream-threadsThe number of failed stream threads since the start of the Kafka Streams client.kafka.streams:type=stream-metrics,client-id=([-.\w]+)
recording-levelThe metric recording level as a number (0 = INFO, 1 = DEBUG, 2 = TRACE).kafka.streams:type=stream-metrics,client-id=([-.\w]+),process-id=([-.\w]+)

Thread Metrics

All the following metrics have a recording level of info: Metric/Attribute nameDescriptionMbean name
stateThe state of the thread as a string.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
thread-stateThe state of the thread as a number (ordinal() of the corresponding enum).kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+),process-id=([-.\w]+)
commit-latency-avgThe average execution time in ms, for committing, across all running tasks of this thread.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
commit-latency-maxThe maximum execution time in ms, for committing, across all running tasks of this thread.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
poll-latency-avgThe average execution time in ms, for consumer polling.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
poll-latency-maxThe maximum execution time in ms, for consumer polling.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
process-latency-avgThe average execution time in ms, for processing.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
process-latency-maxThe maximum execution time in ms, for processing.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
punctuate-latency-avgThe average execution time in ms, for punctuating.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
punctuate-latency-maxThe maximum execution time in ms, for punctuating.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
commit-rateThe average number of commits per sec.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
commit-totalThe total number of commit calls.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
poll-rateThe average number of consumer poll calls per sec.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
poll-totalThe total number of consumer poll calls.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
process-rateThe average number of processed records per sec.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
process-totalThe total number of processed records.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
punctuate-rateThe average number of punctuate calls per sec.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
punctuate-totalThe total number of punctuate calls.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
task-created-rateThe average number of tasks created per sec.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
task-created-totalThe total number of tasks created.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
task-closed-rateThe average number of tasks closed per sec.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
task-closed-totalThe total number of tasks closed.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
blocked-time-ns-totalThe total time in ns the thread spent blocked on Kafka brokers.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)
thread-start-timeThe system timestamp in ms that the thread was started.kafka.streams:type=stream-thread-metrics,thread-id=([-.\w]+)

Task Metrics

All the following metrics have a recording level of debug, except for the dropped-records-* and active-process-ratio metrics which have a recording level of info: Metric/Attribute nameDescriptionMbean name
process-latency-avgThe average execution time in ns, for processing.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
process-latency-maxThe maximum execution time in ns, for processing.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
process-rateThe average number of processed records per sec across all source processor nodes of this task.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
process-totalThe total number of processed records across all source processor nodes of this task.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
record-lateness-avgThe average observed lateness in ms of records (stream time - record timestamp).kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
record-lateness-maxThe max observed lateness in ms of records (stream time - record timestamp).kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
enforced-processing-rateThe average number of enforced processings per sec.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
enforced-processing-totalThe total number enforced processings.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
dropped-records-rateThe average number of records dropped per sec within this task.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
dropped-records-totalThe total number of records dropped within this task.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
active-process-ratioThe fraction of time the stream thread spent on processing this task among all assigned active tasks.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
input-buffer-bytes-totalThe total number of bytes accumulated by this task.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)
cache-size-bytes-totalThe cache size in bytes accumulated by this task.kafka.streams:type=stream-task-metrics,thread-id=([-.\w]+),task-id=([-.\w]+)

Processor Node Metrics

The following metrics are only available on certain types of nodes, i.e., the process-* metrics are only available for source processor nodes, the suppression-emit-* metrics are only available for suppression operation nodes, emit-final-* metrics are only available for windowed aggregation nodes, and the record-e2e-latency-* metrics are only available for source processor nodes and terminal nodes (nodes without successor nodes). All the metrics have a recording level of debug, except for the record-e2e-latency-* metrics which have a recording level of info: Metric/Attribute nameDescriptionMbean name
bytes-consumed-totalThe total number of bytes consumed by a source processor node.kafka.streams:type=stream-topic-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)
bytes-produced-totalThe total number of bytes produced by a sink processor node.kafka.streams:type=stream-topic-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)
process-rateThe average number of records processed by a source processor node per sec.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
process-totalThe total number of records processed by a source processor node.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
suppression-emit-rateThe rate per sec of records that have been emitted downstream from suppression operation nodes.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
suppression-emit-totalThe total number of records that have been emitted downstream from suppression operation nodes.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
emit-final-latency-maxThe max latency in ms to emit final records when a record could be emitted.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
emit-final-latency-avgThe avg latency in ms to emit final records when a record could be emitted.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
emit-final-records-rateThe rate of records emitted per sec when records could be emitted.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
emit-final-records-totalThe total number of records emitted.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
record-e2e-latency-avgThe average end-to-end latency in ms of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
record-e2e-latency-maxThe maximum end-to-end latency in ms of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
record-e2e-latency-minThe minimum end-to-end latency in ms of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node.kafka.streams:type=stream-processor-node-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+)
records-consumed-totalThe total number of records consumed by a source processor node.kafka.streams:type=stream-topic-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)
records-produced-totalThe total number of records produced by a sink processor node.kafka.streams:type=stream-topic-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),processor-node-id=([-.\w]+),topic=([-.\w]+)

State Store Metrics

All the following metrics have a recording level of debug, except for the record-e2e-latency-* metrics, which have a recording level of trace, and num-open-iterators, which has a recording level of info. Note that the store-scope value is specified in StoreSupplier#metricsScope() for user-customized state stores (see the sketch after the list below); for built-in state stores, currently we have:

  • in-memory-state
  • in-memory-lru-state
  • in-memory-window-state
  • in-memory-suppression (for suppression buffers)
  • rocksdb-state (for RocksDB backed key-value store)
  • rocksdb-window-state (for RocksDB backed window store)
  • rocksdb-session-state (for RocksDB backed session store)
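As an illustration of where the store-scope comes from for customized stores, the following sketch wraps the built-in persistent key-value store supplier and overrides metricsScope(); the class name and the my-custom-scope value are hypothetical, and the resulting MBeans would carry a my-custom-scope-id tag.

import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

// Hypothetical supplier: delegates storage to the built-in RocksDB key-value store
// but reports its own metrics scope.
public class MyCustomStoreSupplier implements KeyValueBytesStoreSupplier {
    private final KeyValueBytesStoreSupplier delegate;

    public MyCustomStoreSupplier(final String name) {
        this.delegate = Stores.persistentKeyValueStore(name);
    }

    @Override
    public String name() {
        return delegate.name();
    }

    @Override
    public KeyValueStore<Bytes, byte[]> get() {
        return delegate.get();
    }

    @Override
    public String metricsScope() {
        // State store metrics for this supplier appear under [store-scope] = my-custom-scope
        return "my-custom-scope";
    }
}

Such a supplier could then be handed to Stores.keyValueStoreBuilder() and attached to the topology like any built-in store.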
Metrics suppression-buffer-size-avg, suppression-buffer-size-max, suppression-buffer-count-avg, and suppression-buffer-count-max are only available for suppression buffers. All other metrics are not available for suppression buffers. Metric/Attribute nameDescriptionMbean name
put-latency-avgThe average put execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-latency-maxThe maximum put execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-if-absent-latency-avgThe average put-if-absent execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-if-absent-latency-maxThe maximum put-if-absent execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
get-latency-avgThe average get execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
get-latency-maxThe maximum get execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
delete-latency-avgThe average delete execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
delete-latency-maxThe maximum delete execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-all-latency-avgThe average put-all execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-all-latency-maxThe maximum put-all execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
all-latency-avgThe average all operation execution time in ns, from iterator create to close time.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
all-latency-maxThe maximum all operation execution time in ns, from iterator create to close time.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
range-latency-avgThe average range execution time in ns, from iterator create to close time.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
range-latency-maxThe maximum range execution time in ns, from iterator create to close time.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
prefix-scan-latency-avgThe average prefix-scan execution time in ns, from iterator create to close time.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
prefix-scan-latency-maxThe maximum prefix-scan execution time in ns, from iterator create to close time.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
flush-latency-avgThe average flush execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
flush-latency-maxThe maximum flush execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
restore-latency-avgThe average restore execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
restore-latency-maxThe maximum restore execution time in ns.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-rateThe average put rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-if-absent-rateThe average put-if-absent rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
get-rateThe average get rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
delete-rateThe average delete rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
put-all-rateThe average put-all rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
all-rateThe average all operation rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
range-rateThe average range rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
prefix-scan-rateThe average prefix-scan rate per sec for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
flush-rateThe average flush rate for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
restore-rateThe average restore rate for this store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
suppression-buffer-size-avgThe average total size in bytes of the buffered data over the sampling window.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
suppression-buffer-size-maxThe maximum total size, in bytes, of the buffered data over the sampling window.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
suppression-buffer-count-avgThe average number of records buffered over the sampling window.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
suppression-buffer-count-maxThe maximum number of records buffered over the sampling window.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),in-memory-suppression-id=([-.\w]+)
record-e2e-latency-avgThe average end-to-end latency in ms of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
record-e2e-latency-maxThe maximum end-to-end latency in ms of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
record-e2e-latency-minThe minimum end-to-end latency in ms of a record, measured by comparing the record timestamp with the system time when it has been fully processed by the node.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-open-iteratorsThe current number of iterators on the store that have been created, but not yet closed.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
iterator-duration-avgThe average time in ns spent between creating an iterator and closing it.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
iterator-duration-maxThe maximum time in ns spent between creating an iterator and closing it.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
oldest-iterator-open-since-msThe system timestamp in ms the oldest still open iterator was created.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)

RocksDB Metrics

RocksDB metrics are grouped into statistics-based metrics and properties-based metrics. The former are recorded from statistics that a RocksDB state store collects, whereas the latter are recorded from properties that RocksDB exposes. Statistics collected by RocksDB provide cumulative measurements over time, e.g., bytes written to the state store. Properties exposed by RocksDB provide current measurements, e.g., the amount of memory currently used. Note that the store-scope for built-in RocksDB state stores is currently one of the following:

  • rocksdb-state (for RocksDB backed key-value store)
  • rocksdb-window-state (for RocksDB backed window store)
  • rocksdb-session-state (for RocksDB backed session store)
RocksDB Statistics-based Metrics: All the following statistics-based metrics have a recording level of debug because collecting statistics in RocksDB may have an impact on performance. Statistics-based metrics are collected every minute from the RocksDB state stores. If a state store consists of multiple RocksDB instances, as is the case for WindowStores and SessionStores, each metric reports an aggregation over the RocksDB instances of the state store. Metric/Attribute nameDescriptionMbean name
bytes-written-rateThe average number of bytes written per sec to the RocksDB state store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
bytes-written-totalThe total number of bytes written to the RocksDB state store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
bytes-read-rateThe average number of bytes read per second from the RocksDB state store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
bytes-read-totalThe total number of bytes read from the RocksDB state store.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
memtable-bytes-flushed-rateThe average number of bytes flushed per sec from the memtable to disk.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
memtable-bytes-flushed-totalThe total number of bytes flushed from the memtable to disk.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
memtable-hit-ratioThe ratio of memtable hits relative to all lookups to the memtable.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
memtable-flush-time-avgThe average duration in ms of memtable flushes to disc.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
memtable-flush-time-minThe minimum duration of memtable flushes to disc in ms.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
memtable-flush-time-maxThe maximum duration in ms of memtable flushes to disc.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
block-cache-data-hit-ratioThe ratio of block cache hits for data blocks relative to all lookups for data blocks to the block cache.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
block-cache-index-hit-ratioThe ratio of block cache hits for index blocks relative to all lookups for index blocks to the block cache.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
block-cache-filter-hit-ratioThe ratio of block cache hits for filter blocks relative to all lookups for filter blocks to the block cache.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
write-stall-duration-avgThe average duration in ms of write stalls.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
write-stall-duration-totalThe total duration in ms of write stalls.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
bytes-read-compaction-rateThe average number of bytes read per sec during compaction.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
bytes-written-compaction-rateThe average number of bytes written per sec during compaction.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
compaction-time-avgThe average duration in ms of disc compactions.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
compaction-time-minThe minimum duration of disc compactions in ms.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
compaction-time-maxThe maximum duration in ms of disc compactions.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
number-open-filesThis metric always returns the constant -1 because RocksDB's NO_FILE_CLOSES counter was removed in RocksDB 9.7.3.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
number-file-errors-totalThe total number of file errors that occurred.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
RocksDB Properties-based Metrics: All the following properties-based metrics have a recording level of info and are recorded when the metrics are accessed. If a state store consists of multiple RocksDB instances, as is the case for WindowStores and SessionStores, each metric reports the sum over all the RocksDB instances of the state store, except for the block cache metrics block-cache-*. The block cache metrics report the sum over all RocksDB instances if each instance uses its own block cache, and they report the recorded value from only one instance if a single block cache is shared among all instances. Metric/Attribute nameDescriptionMbean name
num-immutable-mem-tableThe number of immutable memtables that have not yet been flushed.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
cur-size-active-mem-tableThe approximate size in bytes of the active memtable.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
cur-size-all-mem-tablesThe approximate size in bytes of active and unflushed immutable memtables.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
size-all-mem-tablesThe approximate size in bytes of active, unflushed immutable, and pinned immutable memtables.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-entries-active-mem-tableThe number of entries in the active memtable.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-entries-imm-mem-tablesThe number of entries in the unflushed immutable memtables.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-deletes-active-mem-tableThe number of delete entries in the active memtable.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-deletes-imm-mem-tablesThe number of delete entries in the unflushed immutable memtables.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
mem-table-flush-pendingThis metric reports 1 if a memtable flush is pending, otherwise it reports 0.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-running-flushesThe number of currently running flushes.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
compaction-pendingThis metric reports 1 if at least one compaction is pending, otherwise it reports 0.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-running-compactionsThe number of currently running compactions.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
estimate-pending-compaction-bytesThe estimated total number of bytes a compaction needs to rewrite on disk to get all levels down to under target size (only valid for level compaction).kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
total-sst-files-sizeThe total size in bytes of all SST files.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
live-sst-files-sizeThe total size in bytes of all SST files that belong to the latest LSM tree.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
num-live-versionsNumber of live versions of the LSM tree.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
block-cache-capacityThe capacity in bytes of the block cache.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
block-cache-usageThe memory size in bytes of the entries residing in block cache.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
block-cache-pinned-usageThe memory size in bytes for the entries being pinned in the block cache.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
estimate-num-keysThe estimated number of keys in the active and unflushed immutable memtables and storage.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
estimate-table-readers-memThe estimated memory in bytes used for reading SST tables, excluding memory used in block cache.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)
background-errorsThe total number of background errors.kafka.streams:type=stream-state-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),[store-scope]-id=([-.\w]+)

Record Cache Metrics

All the following metrics have a recording level of debug: Metric/Attribute nameDescriptionMbean name
hit-ratio-avgThe average cache hit ratio defined as the ratio of cache read hits over the total cache read requests.kafka.streams:type=stream-record-cache-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),record-cache-id=([-.\w]+)
hit-ratio-minThe minimum cache hit ratio.kafka.streams:type=stream-record-cache-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),record-cache-id=([-.\w]+)
hit-ratio-maxThe maximum cache hit ratio.kafka.streams:type=stream-record-cache-metrics,thread-id=([-.\w]+),task-id=([-.\w]+),record-cache-id=([-.\w]+)

Others

We recommend monitoring GC time and other JVM statistics, as well as server statistics such as CPU utilization and I/O service time. On the client side, we recommend monitoring the message/byte rate (global and per topic) and the request rate/size/time. On the consumer side, monitor the max lag in messages among all partitions and the min fetch request rate: for a consumer to keep up, the max lag needs to stay below a threshold and the min fetch rate needs to be larger than 0.
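For example, the consumer-side lag check above can be automated by reading records-lag-max from the consumer-fetch-manager-metrics MBean listed earlier. The sketch below assumes the consumer JVM was started with remote JMX enabled on localhost:9999 and uses the client id my-consumer; both values are placeholders.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX endpoint of the consumer process
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Placeholder client id; must match the consumer's client.id
            ObjectName fetchManager = new ObjectName(
                "kafka.consumer:type=consumer-fetch-manager-metrics,client-id=my-consumer");
            Object lagMax = connection.getAttribute(fetchManager, "records-lag-max");
            System.out.println("records-lag-max = " + lagMax);
        }
    }
}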

6.8 - KRaft

KRaft

Configuration

Process Roles

In KRaft mode each Kafka server can be configured as a controller, a broker, or both using the process.roles property. This property can have the following values:

  • If process.roles is set to broker, the server acts as a broker.
  • If process.roles is set to controller, the server acts as a controller.
  • If process.roles is set to broker,controller, the server acts as both a broker and a controller.

Kafka servers that act as both brokers and controllers are referred to as “combined” servers. Combined servers are simpler to operate for small use cases like a development environment. The key disadvantage is that the controller will be less isolated from the rest of the system. For example, it is not possible to roll or scale the controllers separately from the brokers in combined mode. Combined mode is not recommended in critical deployment environments.

Controllers

In KRaft mode, specific Kafka servers are selected to be controllers. The servers selected to be controllers will participate in the metadata quorum. Each controller is either the active controller or a hot standby for the current active controller.

A Kafka admin will typically select 3 or 5 servers for this role, depending on factors like cost and the number of concurrent failures your system should withstand without availability impact. A majority of the controllers must be alive in order to maintain availability. With 3 controllers, the cluster can tolerate 1 controller failure; with 5 controllers, the cluster can tolerate 2 controller failures.

All of the servers in a Kafka cluster discover the active controller using the controller.quorum.bootstrap.servers property. All the controllers should be enumerated in this property. Each controller is identified by its host and port information. For example:

controller.quorum.bootstrap.servers=host1:port1,host2:port2,host3:port3

If a Kafka cluster has 3 controllers named controller1, controller2 and controller3, then controller1 may have the following configuration:

process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
controller.quorum.bootstrap.servers=controller1.example.com:9093,controller2.example.com:9093,controller3.example.com:9093
controller.listener.names=CONTROLLER

Every broker and controller must set the controller.quorum.bootstrap.servers property.
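For illustration, a broker-only server joining the same hypothetical cluster might use a configuration along the following lines; the node id, host names, and listener layout are placeholders and should be adapted to your environment.

process.roles=broker
node.id=4
listeners=PLAINTEXT://broker1.example.com:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092
controller.quorum.bootstrap.servers=controller1.example.com:9093,controller2.example.com:9093,controller3.example.com:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT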

Provisioning Nodes

The bin/kafka-storage.sh random-uuid command can be used to generate a cluster ID for your new cluster. This cluster ID must be used when formatting each server in the cluster with the bin/kafka-storage.sh format command.

This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically and also generate a new cluster ID automatically. One reason for the change is that auto-formatting can sometimes obscure an error condition. This is particularly important for the metadata log maintained by the controller and broker servers. If a majority of the controllers were able to start with an empty log directory, a leader might be elected that is missing committed data.

Bootstrap a Standalone Controller

The recommended method for creating a new KRaft controller cluster is to bootstrap it with one voter and dynamically add the rest of the controllers. Bootstrapping the first controller can be done with the following CLI command:

$ bin/kafka-storage.sh format --cluster-id <CLUSTER_ID> --standalone --config config/controller.properties

This command will: 1) create a meta.properties file in metadata.log.dir with a randomly generated directory.id, and 2) create a snapshot at 00000000000000000000-0000000000.checkpoint with the necessary control records (KRaftVersionRecord and VotersRecord) to make this Kafka node the only voter for the quorum.

Bootstrap with Multiple Controllers

The KRaft cluster metadata partition can also be bootstrapped with more than one voter. This can be done by using the --initial-controllers flag:

CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
CONTROLLER_0_UUID="$(bin/kafka-storage.sh random-uuid)"
CONTROLLER_1_UUID="$(bin/kafka-storage.sh random-uuid)"
CONTROLLER_2_UUID="$(bin/kafka-storage.sh random-uuid)"

# In each controller execute
bin/kafka-storage.sh format --cluster-id ${CLUSTER_ID} \
                     --initial-controllers "0@controller-0:1234:${CONTROLLER_0_UUID},1@controller-1:1234:${CONTROLLER_1_UUID},2@controller-2:1234:${CONTROLLER_2_UUID}" \
                     --config config/controller.properties

This command is similar to the standalone version but the snapshot at 00000000000000000000-0000000000.checkpoint will instead contain a VotersRecord that includes information for all of the controllers specified in --initial-controllers. It is important that the value of this flag is the same in all of the controllers with the same cluster id. In the replica description 0@controller-0:1234:3Db5QLSqSZieL3rJBUUegA, 0 is the replica id, 3Db5QLSqSZieL3rJBUUegA is the replica directory id, controller-0 is the replica's host and 1234 is the replica's port.

Formatting Brokers and New Controllers

When provisioning new broker and controller nodes that you want to add to an existing Kafka cluster, use the kafka-storage.sh format command with the --no-initial-controllers flag.

$ bin/kafka-storage.sh format --cluster-id <CLUSTER_ID> --config config/server.properties --no-initial-controllers

Controller membership changes

Static versus Dynamic KRaft Quorums

There are two ways to run KRaft: the old way using static controller quorums, and the new way using KIP-853 dynamic controller quorums.

When using a static quorum, the configuration file for each broker and controller must specify the IDs, hostnames, and ports of all controllers in controller.quorum.voters.

In contrast, when using a dynamic quorum, you should set controller.quorum.bootstrap.servers instead. This configuration key need not contain all the controllers, but it should contain as many as possible so that all the servers can locate the quorum. In other words, its function is much like the bootstrap.servers configuration used by Kafka clients.
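To make the difference concrete, here is a sketch of the relevant property for each mode; the ids and host names are placeholders.

# Static quorum: every broker and controller lists all voters explicitly
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093

# Dynamic quorum (KIP-853): only bootstrap endpoints are listed
controller.quorum.bootstrap.servers=controller1.example.com:9093,controller2.example.com:9093,controller3.example.com:9093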

If you are not sure whether you are using static or dynamic quorums, you can determine this by running something like the following:

  $ bin/kafka-features.sh --bootstrap-controller localhost:9093 describe

If the kraft.version field is level 0 or absent, you are using a static quorum. If it is 1 or above, you are using a dynamic quorum. Here is an example of a static quorum:

Feature: kraft.version  SupportedMinVersion: 0  SupportedMaxVersion: 1  FinalizedVersionLevel: 0 Epoch: 5
Feature: metadata.version       SupportedMinVersion: 3.3-IV3    SupportedMaxVersion: 3.9-IV0 FinalizedVersionLevel: 3.9-IV0  Epoch: 5

Here is another example of a static quorum:

Feature: metadata.version       SupportedMinVersion: 3.3-IV3    SupportedMaxVersion: 3.8-IV0 FinalizedVersionLevel: 3.8-IV0  Epoch: 5

Here is an example of a dynamic quorum:

Feature: kraft.version  SupportedMinVersion: 0  SupportedMaxVersion: 1  FinalizedVersionLevel: 1 Epoch: 5
Feature: metadata.version       SupportedMinVersion: 3.3-IV3    SupportedMaxVersion: 3.9-IV0 FinalizedVersionLevel: 3.9-IV0  Epoch: 5

The static versus dynamic nature of the quorum is determined at the time of formatting. Specifically, the quorum will be formatted as dynamic if controller.quorum.voters is not present, and if the software version is Apache Kafka 3.9 or newer. If you have followed the instructions earlier in this document, you will get a dynamic quorum.

If you would like the formatting process to fail if a dynamic quorum cannot be achieved, format your controllers with the --feature kraft.version=1 flag. (Note that you should supply this flag only when formatting controllers, not when formatting brokers.)

  $ bin/kafka-storage.sh format -t KAFKA_CLUSTER_ID --feature kraft.version=1 -c controller_static.properties
  Cannot set kraft.version to 1 unless KIP-853 configuration is present. Try removing the --feature flag for kraft.version.

Note: Currently it is not possible to convert clusters using a static controller quorum to use a dynamic controller quorum. This functionality will be supported in a future release.

Add New Controller

If a dynamic controller cluster already exists, it can be expanded by first provisioning a new controller using the kafka-storage.sh tool and starting it. After starting the controller, the replication to the new controller can be monitored using the bin/kafka-metadata-quorum.sh describe --replication command. Once the new controller has caught up to the active controller, it can be added to the cluster using the bin/kafka-metadata-quorum.sh add-controller command. When using broker endpoints, use the --bootstrap-server flag:

$ bin/kafka-metadata-quorum.sh --command-config config/controller.properties --bootstrap-server localhost:9092 add-controller

When using controller endpoints, use the --bootstrap-controller flag:

$ bin/kafka-metadata-quorum.sh --command-config config/controller.properties --bootstrap-controller localhost:9093 add-controller

Remove Controller

If the dynamic controller cluster already exists, it can be shrunk using the bin/kafka-metadata-quorum.sh remove-controller command. Until KIP-996: Pre-vote has been implemented and released, it is recommended to shut down the controller that will be removed before running the remove-controller command. When using broker endpoints, use the --bootstrap-server flag:

$ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 remove-controller --controller-id <id> --controller-directory-id <directory-id>

When using controller endpoints, use the --bootstrap-controller flag:

$ bin/kafka-metadata-quorum.sh --bootstrap-controller localhost:9093 remove-controller --controller-id <id> --controller-directory-id <directory-id>

Debugging

Metadata Quorum Tool

The kafka-metadata-quorum.sh tool can be used to describe the runtime state of the cluster metadata partition. For example, the following command displays a summary of the metadata quorum:

$ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
ClusterId:              fMCL8kv1SWm87L_Md-I2hg
LeaderId:               3002
LeaderEpoch:            2
HighWatermark:          10
MaxFollowerLag:         0
MaxFollowerLagTimeMs:   -1
CurrentVoters:          [{"id": 3000, "directoryId": "ILZ5MPTeRWakmJu99uBJCA", "endpoints": ["CONTROLLER://localhost:9093"]},
                         {"id": 3001, "directoryId": "b-DwmhtOheTqZzPoh52kfA", "endpoints": ["CONTROLLER://localhost:9094"]},
                         {"id": 3002, "directoryId": "g42deArWBTRM5A1yuVpMCg", "endpoints": ["CONTROLLER://localhost:9095"]}]
CurrentObservers:       [{"id": 0, "directoryId": "3Db5QLSqSZieL3rJBUUegA"},
                         {"id": 1, "directoryId": "UegA3Db5QLSqSZieL3rJBU"},
                         {"id": 2, "directoryId": "L3rJBUUegA3Db5QLSqSZie"}]

Dump Log Tool

The kafka-dump-log.sh tool can be used to debug the log segments and snapshots for the cluster metadata directory. The tool will scan the provided files and decode the metadata records. For example, this command decodes and prints the records in the first log segment:

$ bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000000.log

This command decodes and prints the records in a cluster metadata snapshot:

$ bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000100-0000000001.checkpoint

Metadata Shell

The kafka-metadata-shell.sh tool can be used to interactively inspect the state of the cluster metadata partition:

$ bin/kafka-metadata-shell.sh --snapshot metadata_log_dir/__cluster_metadata-0/00000000000000000000.checkpoint
>> ls /
brokers  local  metadataQuorum  topicIds  topics
>> ls /topics
foo
>> cat /topics/foo/0/data
{
  "partitionId" : 0,
  "topicId" : "5zoAlv-xEh9xRANKXt1Lbg",
  "replicas" : [ 1 ],
  "isr" : [ 1 ],
  "removingReplicas" : null,
  "addingReplicas" : null,
  "leader" : 1,
  "leaderEpoch" : 0,
  "partitionEpoch" : 0
}
>> exit

Deploying Considerations

  • Kafka server’s process.roles should be set to either broker or controller but not both. Combined mode can be used in development environments, but it should be avoided in critical deployment environments.
  • For redundancy, a Kafka cluster should use 3 or more controllers, depending on factors like cost and the number of concurrent failures your system should withstand without availability impact. For the KRaft controller cluster to withstand N concurrent failures, the controller cluster must include 2N + 1 controllers.
  • The Kafka controllers store all the metadata for the cluster in memory and on disk. We believe that for a typical Kafka cluster 5GB of main memory and 5GB of disk space on the metadata log directory is sufficient.

ZooKeeper to KRaft Migration

In order to migrate from ZooKeeper to KRaft you need to use a bridge release. The last bridge release is Kafka 3.9. See the ZooKeeper to KRaft Migration steps in the 3.9 documentation.

6.9 - Tiered Storage

Tiered Storage

Tiered Storage Overview

Kafka data is mostly consumed in a streaming fashion using tail reads. Tail reads leverage the OS's page cache to serve the data instead of disk reads. Older data is typically read from disk for backfill or failure recovery purposes, and such reads are infrequent.

In the tiered storage approach, the Kafka cluster is configured with two tiers of storage: local and remote. The local tier is the same as before and uses the local disks on the Kafka brokers to store the log segments. The new remote tier uses external storage systems, such as HDFS or S3, to store the completed log segments. Please check KIP-405 for more information.

Configuration

Broker Configurations

By default, the Kafka server does not enable the tiered storage feature. remote.log.storage.system.enable is the property that controls whether tiered storage functionality is enabled in a broker. Setting it to “true” enables this feature.

RemoteStorageManager is an interface to provide the lifecycle of remote log segments and indexes. The Kafka server doesn’t provide an out-of-the-box implementation of RemoteStorageManager. Configure remote.log.storage.manager.class.name and remote.log.storage.manager.class.path to specify the implementation of RemoteStorageManager.

RemoteLogMetadataManager is an interface to provide the lifecycle of metadata about remote log segments with strongly consistent semantics. By default, Kafka provides an implementation that stores this metadata in an internal topic. This implementation can be changed by configuring remote.log.metadata.manager.class.name and remote.log.metadata.manager.class.path. When adopting the default Kafka internal topic based implementation, remote.log.metadata.manager.listener.name is a mandatory property that specifies which listener the clients created by the default RemoteLogMetadataManager implementation use to connect to the brokers.

Topic Configurations

After correctly configuring the broker side configurations for the tiered storage feature, there are still topic-level configurations that need to be set. remote.storage.enable is the switch that determines whether a topic uses tiered storage. By default it is set to false. After enabling the remote.storage.enable property, the next thing to consider is the log retention. When tiered storage is enabled for a topic, two additional log retention configurations, local.retention.ms and local.retention.bytes, need to be set alongside the existing retention.ms and retention.bytes:

  • local.retention.ms
  • retention.ms
  • local.retention.bytes
  • retention.bytes

The configurations prefixed with local specify the time/size the “local” log can retain before its segments are moved to remote storage and then deleted locally. If unset, the values of retention.ms and retention.bytes will be used.

Quick Start Example

Apache Kafka doesn’t provide an out-of-the-box RemoteStorageManager implementation. To preview the tiered storage feature, the LocalTieredStorage implementation written for integration tests can be used; it creates a temporary directory in local storage to simulate the remote storage.

To adopt the LocalTieredStorage, the test library needs to be built locally

# please checkout to the specific version tag you're using before building it
# ex: `git checkout 4.0.0`
$ ./gradlew clean :storage:testJar

After the build completes successfully, there should be a kafka-storage-x.x.x-test.jar file under storage/build/libs. Next, set the following configurations on the broker side to enable the tiered storage feature.

# Sample KRaft broker server.properties listening on PLAINTEXT://:9092
remote.log.storage.system.enable=true

# Setting the listener for the clients in RemoteLogMetadataManager to talk to the brokers.
remote.log.metadata.manager.listener.name=PLAINTEXT

# Please provide the implementation info for remoteStorageManager.
# This is the mandatory configuration for tiered storage.
# Here, we use the `LocalTieredStorage` built above.
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-4.0.0-test.jar

# These 2 prefixes are the default values, but they are customizable
remote.log.storage.manager.impl.prefix=rsm.config.
remote.log.metadata.manager.impl.prefix=rlmm.config.

# Configure the directory used for `LocalTieredStorage`
# Note: please make sure the brokers have access to this directory
rsm.config.dir=/tmp/kafka-remote-storage

# This needs to be changed if the number of brokers in the cluster is more than 1
rlmm.config.remote.log.metadata.topic.replication.factor=1

# Try to speed up the log retention check interval for testing
log.retention.check.interval.ms=1000

Follow the quick start guide to start up the Kafka environment. Then, create a topic with tiered storage enabled with these configs:

# remote.storage.enable=true -> enables tiered storage on the topic
# local.retention.ms=1000 -> The number of milliseconds to keep the local log segment before it gets deleted.
# Note that a local log segment is eligible for deletion only after it gets uploaded to remote.
# retention.ms=3600000 -> when segments exceed this time, the segments in remote storage will be deleted
# segment.bytes=1048576 -> for test only, to speed up the log segment rolling interval
# file.delete.delay.ms=1000 -> for test only, to speed up the local-log segment file delete delay

$ bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server localhost:9092 \
--config remote.storage.enable=true --config local.retention.ms=1000 --config retention.ms=3600000 \
--config segment.bytes=1048576 --config file.delete.delay.ms=1000

Try to send messages to the tieredTopic topic to roll the log segment:

$ bin/kafka-producer-perf-test.sh --topic tieredTopic --num-records 1200 --record-size 1024 --throughput -1 --producer-props bootstrap.servers=localhost:9092

Then, after the active segment is rolled, the old segment should be moved to the remote storage and get deleted. This can be verified by checking the remote log directory configured above. For example:

$ ls /tmp/kafka-remote-storage/kafka-tiered-storage/tieredTopic-0-jF8s79t9SrG_PNqlwv7bAA
00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.index
00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.snapshot
00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.leader_epoch_checkpoint
00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.timeindex
00000000000000000000-knnxbs3FSRyKdPcSAOQC-w.log

Lastly, we can try to consume some data from the beginning and print the offset number, to make sure it successfully fetches offset 0 from the remote storage.

$ bin/kafka-console-consumer.sh --topic tieredTopic --from-beginning --max-messages 1 --bootstrap-server localhost:9092 --property print.offset=true

In KRaft mode, you can disable tiered storage at the topic level, either to make the remote logs read-only or to completely delete all remote logs.

If you want the remote logs to become read-only, with no more local logs copied to the remote storage, you can set remote.storage.enable=true,remote.log.copy.disable=true on the topic.

Note: You also need to set local.retention.ms and local.retention.bytes to the same values as retention.ms and retention.bytes, or set them to “-2”. This is because after disabling remote log copy, the local retention policies will no longer be applied, which might confuse users and cause the disk to fill up unexpectedly.

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 \
   --alter --entity-type topics --entity-name tieredTopic \
   --add-config 'remote.storage.enable=true,remote.log.copy.disable=true,local.retention.ms=-2,local.retention.bytes=-2'

If you want to completely disable tiered storage at the topic level, with all remote logs deleted, you can set remote.storage.enable=false,remote.log.delete.on.disable=true on the topic.

$ bin/kafka-configs.sh --bootstrap-server localhost:9092 \
   --alter --entity-type topics --entity-name tieredTopic \
   --add-config 'remote.storage.enable=false,remote.log.delete.on.disable=true'

You can also re-enable the tiered storage feature at the topic level. Please note, if you want to disable tiered storage at the cluster level, you should delete the tiered storage enabled topics explicitly. Attempting to disable tiered storage at the cluster level without deleting the topics using tiered storage will result in an exception during startup.

$ bin/kafka-topics.sh --delete --topic tieredTopic --bootstrap-server localhost:9092

After topics are deleted, you’re safe to set remote.log.storage.system.enable=false in the broker configuration.

Limitations

While Tiered Storage works for most use cases, it is still important to be aware of the following limitations:

  • No support for compacted topics
  • Deleting tiered storage enabled topics is required before disabling tiered storage at the broker level
  • Admin actions related to tiered storage feature are only supported on clients from version 3.0 onwards
  • No support for log segments missing the producer snapshot file. This can happen when the topic was created on a version earlier than 2.8.0.

For more information, please check Kafka Tiered Storage GA Release Notes.

6.10 - Consumer Rebalance Protocol

Consumer Rebalance Protocol

Consumer Rebalance Protocol

Overview

Starting from Apache Kafka 4.0, the Next Generation of the Consumer Rebalance Protocol (KIP-848) is Generally Available (GA). It improves the scalability of consumer groups while simplifying consumers. It also decreases rebalance times, thanks to its fully incremental design, which no longer relies on a global synchronization barrier.

Consumer Groups using the new protocol are now referred to as Consumer groups, while groups using the old protocol are referred to as Classic groups. Note that Classic groups can still be used to form consumer groups using the old protocol.

Server

The new consumer protocol is automatically enabled on the server since Apache Kafka 4.0. Enabling and disabling the protocol is controlled by the group.version feature flag.

The consumer heartbeat interval and the session timeout are controlled by the server now with the following configs:

  • group.consumer.heartbeat.interval.ms
  • group.consumer.session.timeout.ms

The assignment strategy is also controlled by the server. The group.consumer.assignors configuration can be used to specify the list of available assignors for Consumer groups. By default, the uniform assignor and the range assignor are configured. The first assignor in the list is used by default unless the Consumer selects a different one. It is also possible to implement custom assignment strategies on the server side by implementing the ConsumerGroupPartitionAssignor interface and specifying the full class name in the configuration.
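
As an illustration, the following broker-side sketch overrides the heartbeat and session timeout and registers a server-side assignor. The timeout values are arbitrary examples, and com.example.assignor.MyServerSideAssignor is a hypothetical class implementing the ConsumerGroupPartitionAssignor interface:

# Illustrative values only; tune for your environment
group.consumer.heartbeat.interval.ms=5000
group.consumer.session.timeout.ms=45000
# Hypothetical custom assignor (the first assignor in the list is the default for Consumer groups)
group.consumer.assignors=com.example.assignor.MyServerSideAssignor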

Consumer

Since Apache Kafka 4.0, the Consumer supports the new consumer rebalance protocol. However, the protocol is not enabled by default. The group.protocol configuration must be set to consumer to enable it. When enabled, the new consumer protocol is used alongside an improved threading model.

The group.remote.assignor configuration is introduced as an optional configuration to overwrite the default assignment strategy configured on the server side.

The subscribe(SubscriptionPattern) and subscribe(SubscriptionPattern, ConsumerRebalanceListener) methods have been added to subscribe to a regular expression with the new consumer rebalance protocol. With these methods, the regular expression uses the RE2J format and is now evaluated on the server side.
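
As a minimal sketch, the following consumer opts in to the new protocol and subscribes with a server-side evaluated pattern. It assumes the SubscriptionPattern class in org.apache.kafka.clients.consumer is constructed from the RE2J pattern string described above; the bootstrap address, group id, and topic pattern are illustrative:

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.SubscriptionPattern;
import org.apache.kafka.common.serialization.StringDeserializer;

public class Re2jSubscribeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // illustrative address
        props.put("group.id", "orders-consumers");          // illustrative group id
        props.put("group.protocol", "consumer");            // opt in to the new rebalance protocol
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The regular expression is evaluated on the server side.
            consumer.subscribe(new SubscriptionPattern("orders-.*"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}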

New metrics have been added to the Consumer when using the new rebalance protocol, mainly providing visibility over the improved threading model. See New Consumer Metrics.

When the new rebalance protocol is enabled, the following configurations and APIs are no longer usable:

  • heartbeat.interval.ms
  • session.timeout.ms
  • partition.assignment.strategy
  • enforceRebalance(String) and enforceRebalance()

Upgrade & Downgrade

Offline

Consumer groups are automatically converted from Classic to Consumer and vice versa when they are empty. Hence, it is possible to change the protocol used by the group by shutting down all the consumers and bringing them back up with the group.protocol=consumer configuration. The downside is that it requires taking the consumer group down.

Online

Consumer groups can be upgraded without downtime by rolling out the consumers with the group.protocol=consumer configuration. When the first consumer using the new consumer rebalance protocol joins the group, the group is converted from Classic to Consumer, and the classic rebalance protocol interoperates with the new consumer rebalance protocol. This is only possible when the classic group uses an assignor that does not embed custom metadata.

Consumer groups can be downgraded using the opposite process. In this case, the group is converted from Consumer to Classic when the last consumer using the new consumer rebalance protocol leaves the group.

Limitations

While the new consumer rebalance protocol works for most use cases, it is still important to be aware of the following limitations:

  • Client-side assignors are not supported. (see KAFKA-18327)
  • Rack-aware assignment strategies are not fully supported. (see KAFKA-17747)

6.11 - Transaction Protocol

Transaction Protocol

Transaction Protocol

Overview

Starting from Apache Kafka 4.0, Transactions Server Side Defense (KIP-890) brings a strengthened transactional protocol. When enabled and using 4.0 producer clients, the producer epoch is bumped on every transaction to ensure every transaction includes the intended messages and duplicates are not written as part of the next transaction.

The protocol is automatically enabled on the server since Apache Kafka 4.0. Enabling and disabling the protocol is controlled by the transaction.version feature flag. This flag can be set using the storage tool on new cluster creation, or dynamically to an existing cluster via the features tool. Producer clients starting 4.0 and above will use the new transactional protocol as long as it is enabled on the server.
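
For illustration, the feature flag can be inspected and raised with the features tool. The command-line options below are assumptions based on the tool’s usage and should be verified with bin/kafka-features.sh --help:

# Describe the currently finalized feature levels, including transaction.version
$ bin/kafka-features.sh --bootstrap-server localhost:9092 describe

# Raise transaction.version to 2 on a running cluster
$ bin/kafka-features.sh --bootstrap-server localhost:9092 upgrade --feature transaction.version=2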

Upgrade & Downgrade

To enable the new protocol on the server, set transaction.version=2. The producer clients do not need to be restarted, and will dynamically upgrade the next time they connect or re-connect to a broker. (Alternatively, the client can be restarted to force this connection). A producer will not upgrade mid-transaction, but on the start of the next transaction after it becomes aware of the server-side upgrade.

Downgrades are safe to perform and work similarly. The older protocol will be used by the clients on the first transaction after the producer becomes aware of the downgraded protocol.

Performance

The new transactional protocol improves performance over verification by only sending a single call to add partitions on the server side, rather than one from the client to add and one from the server to verify.

One consequence of this change is that we can no longer use the hardcoded retry backoff introduced by KAFKA-5477. Due to the asynchronous nature of the endTransaction API, the client can start adding partitions to the next transaction before the markers are written. When this happens, the server will return CONCURRENT_TRANSACTIONS until the previous transaction completes. Rather than the default client backoff, a shorter retry backoff of 20ms was used for these retries.

Now with the server-side request, the server will attempt to retry adding the partition a few times when it sees the CONCURRENT_TRANSACTIONS error before it returns the error to the client. This can result in higher produce latencies reported on these requests. The transaction end to end latency (measured from the time the client begins the transaction to the time to commit) does not increase overall with this change. The time just shifts from client-side backoff to being calculated as part of the produce latency.

The server-side backoff and total retry time can be configured with the following new configs:

  • add.partitions.to.txn.retry.backoff.ms
  • add.partitions.to.txn.retry.backoff.max.ms

6.12 - Eligible Leader Replicas

Eligible Leader Replicas

Eligible Leader Replicas

Overview

Starting from Apache Kafka 4.0, Eligible Leader Replicas (KIP-966 Part 1) is available to users as an improvement to Kafka replication. Because the “strict min ISR” rule is now generally applied, meaning the high watermark for a data partition can’t advance if the size of the ISR is smaller than the min ISR (min.insync.replicas), some replicas that are not in the ISR are safe to become the leader. The KRaft controller stores such replicas in the PartitionRecord field called Eligible Leader Replicas. During leader election, the controller selects the leader in the following order:

  • If ISR is not empty, select one of them.
  • If ELR is not empty, select one that is not fenced.
  • Select the last known leader if it is unfenced. This is similar to the behavior prior to 4.0 when all the replicas are offline.

Upgrade & Downgrade

ELR is not enabled by default in 4.0. To enable the new protocol on the server, set eligible.leader.replicas.version=1. After the upgrade, the KRaft controller will start tracking the ELR.

Downgrades are safe to perform by setting eligible.leader.replicas.version=0.

Tool

The ELR fields can be checked through the DescribeTopicPartitions API. The admin client can fetch the ELR info by describing the topics. Also note that if min.insync.replicas is updated for a topic, the ELR field will be cleared. If the cluster default min ISR is updated, all the ELR fields will be cleared.
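
For example, once the feature is enabled, describing a topic is expected to surface the ELR-related fields alongside the ISR (the topic name is illustrative, and the exact field names in the output may vary by version):

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic my-topic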

7 - Security

7.1 - Security Overview

Security Overview

Security Overview

The following security measures are currently supported:

  1. Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL. Kafka supports the following SASL mechanisms:
    • SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0
    • SASL/PLAIN - starting at version 0.10.0.0
    • SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0
    • SASL/OAUTHBEARER - starting at version 2.0
  2. Encryption of data transferred between brokers and clients, between brokers, or between brokers and tools using SSL (Note that there is a performance degradation when SSL is enabled, the magnitude of which depends on the CPU type and the JVM implementation.)
  3. Authorization of read / write operations by clients
  4. Authorization is pluggable and integration with external authorization services is supported

It’s worth noting that security is optional - non-secured clusters are supported, as well as a mix of authenticated, unauthenticated, encrypted and non-encrypted clients. The guides below explain how to configure and use the security features in both clients and brokers.

7.2 - Listener Configuration

Listener Configuration

Listener Configuration

In order to secure a Kafka cluster, it is necessary to secure the channels that are used to communicate with the servers. Each server must define the set of listeners that are used to receive requests from clients as well as other servers. Each listener may be configured to authenticate clients using various mechanisms and to ensure traffic between the server and the client is encrypted. This section provides a primer for the configuration of listeners.

Kafka servers support listening for connections on multiple ports. This is configured through the listeners property in the server configuration, which accepts a comma-separated list of the listeners to enable. At least one listener must be defined on each server. The format of each listener defined in listeners is given below:

{LISTENER_NAME}://{hostname}:{port}

The LISTENER_NAME is usually a descriptive name which defines the purpose of the listener. For example, many configurations use a separate listener for client traffic, so they might refer to the corresponding listener as CLIENT in the configuration:

listeners=CLIENT://localhost:9092

The security protocol of each listener is defined in a separate configuration: listener.security.protocol.map. The value is a comma-separated list of each listener mapped to its security protocol. For example, the following configuration specifies that the CLIENT listener will use SSL while the BROKER listener will use plaintext.

listener.security.protocol.map=CLIENT:SSL,BROKER:PLAINTEXT

Possible options (case-insensitive) for the security protocol are given below:

  1. PLAINTEXT
  2. SSL
  3. SASL_PLAINTEXT
  4. SASL_SSL

The plaintext protocol provides no security and does not require any additional configuration. In the following sections, this document covers how to configure the remaining protocols.

If each required listener uses a separate security protocol, it is also possible to use the security protocol name as the listener name in listeners. Using the example above, we could skip the definition of the CLIENT and BROKER listeners using the following definition:

listeners=SSL://localhost:9092,PLAINTEXT://localhost:9093

However, we recommend that users provide explicit names for the listeners, since this makes the intended usage of each listener clearer.

Among the listeners in this list, it is possible to declare the listener to be used for inter-broker communication by setting the inter.broker.listener.name configuration to the name of the listener. The primary purpose of the inter-broker listener is partition replication. If not defined, then the inter-broker listener is determined by the security protocol defined by security.inter.broker.protocol, which defaults to PLAINTEXT.

In a KRaft cluster, a broker is any server which has the broker role enabled in process.roles and a controller is any server which has the controller role enabled. Listener configuration depends on the role. The listener defined by inter.broker.listener.name is used exclusively for requests between brokers. Controllers, on the other hand, must use a separate listener, which is defined by the controller.listener.names configuration. This cannot be set to the same value as the inter-broker listener.

Controllers receive requests both from other controllers and from brokers. For this reason, even if a server does not have the controller role enabled (i.e. it is just a broker), it must still define the controller listener along with any security properties that are needed to configure it. For example, we might use the following configuration on a standalone broker:

process.roles=broker
listeners=BROKER://localhost:9092
inter.broker.listener.name=BROKER
controller.quorum.bootstrap.servers=localhost:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_SSL

The controller listener is still configured in this example to use the SASL_SSL security protocol, but it is not included in listeners since the broker does not expose the controller listener itself. The port that will be used in this case comes from the controller.quorum.bootstrap.servers configuration, which lists the endpoints of the controller quorum.

For KRaft servers which have both the broker and controller role enabled, the configuration is similar. The only difference is that the controller listener must be included in listeners:

process.roles=broker,controller
listeners=BROKER://localhost:9092,CONTROLLER://localhost:9093
inter.broker.listener.name=BROKER
controller.quorum.bootstrap.servers=localhost:9093
controller.listener.names=CONTROLLER
listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_SSL

It is a requirement that the host and port defined in controller.quorum.bootstrap.servers is routed to the exposed controller listeners. For example, here the CONTROLLER listener is bound to localhost:9093. The connection string defined by controller.quorum.bootstrap.servers must then also use localhost:9093, as it does here.

The controller will accept requests on all listeners defined by controller.listener.names. Typically there would be just one controller listener, but it is possible to have more. For example, this provides a way to change the active listener from one port or security protocol to another through a roll of the cluster (one roll to expose the new listener, and one roll to remove the old listener). When multiple controller listeners are defined, the first one in the list will be used for outbound requests.

It is conventional in Kafka to use a separate listener for clients. This allows the inter-cluster listeners to be isolated at the network level. In the case of the controller listener in KRaft, the listener should be isolated since clients do not work with it anyway. Clients are expected to connect to any other listener configured on a broker. Any requests that are bound for the controller will be forwarded as described below.

In the following section, this document covers how to enable SSL on a listener for encryption as well as authentication. The subsequent section will then cover additional authentication mechanisms using SASL.

7.3 - Encryption and Authentication using SSL

Encryption and Authentication using SSL

Encryption and Authentication using SSL

Apache Kafka allows clients to use SSL for encryption of traffic as well as authentication. By default, SSL is disabled but can be turned on if needed. The following paragraphs explain in detail how to set up your own PKI infrastructure, use it to create certificates and configure Kafka to use these.

  1. Generate SSL key and certificate for each Kafka broker

The first step of deploying one or more brokers with SSL support is to generate a public/private keypair for every server. Since Kafka expects all keys and certificates to be stored in keystores, we will use Java’s keytool command for this task. The tool supports two different keystore formats: the Java-specific jks format, which has been deprecated by now, and PKCS12. PKCS12 is the default format as of Java version 9; to ensure this format is used regardless of the Java version in use, all following commands explicitly specify the PKCS12 format.

    $ keytool -keystore {keystorefile} -alias localhost -validity {validity} -genkey -keyalg RSA -storetype pkcs12

You need to specify two parameters in the above command:

  1. keystorefile: the keystore file that stores the keys (and later the certificate) for this broker. The keystore file contains the private and public keys of this broker, therefore it needs to be kept safe. Ideally this step is run on the Kafka broker that the key will be used on, as this key should never be transmitted/leave the server that it is intended for.
  2. validity: the valid time of the key in days. Please note that this differs from the validity period for the certificate, which will be determined in Signing the certificate. You can use the same key to request multiple certificates: if your key has a validity of 10 years, but your CA will only sign certificates that are valid for one year, you can use the same key with 10 certificates over time.
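
For example, a concrete invocation creating a keystore named server.keystore.jks with a key valid for one year might look like this (values are illustrative):

    $ keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA -storetype pkcs12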

To obtain a certificate that can be used with the private key that was just created, a certificate signing request needs to be created. This signing request, when signed by a trusted CA, results in the actual certificate which can then be installed in the keystore and used for authentication purposes.
To generate certificate signing requests run the following command for all server keystores created so far.

    $ keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}

This command assumes that you want to add hostname information to the certificate; if this is not the case, you can omit the extension parameter -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}. Please see below for more information on this.

Host Name Verification

Host name verification, when enabled, is the process of checking attributes from the certificate that is presented by the server you are connecting to against the actual hostname or ip address of that server to ensure that you are indeed connecting to the correct server.
The main reason for this check is to prevent man-in-the-middle attacks. For Kafka, this check has been disabled by default for a long time, but as of Kafka 2.0.0 host name verification of servers is enabled by default for client connections as well as inter-broker connections.
Server host name verification may be disabled by setting ssl.endpoint.identification.algorithm to an empty string.
For dynamically configured broker listeners, hostname verification may be disabled using kafka-configs.sh:

    $ bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="
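
For client configurations, the same property can be left empty in the client properties file to disable server hostname verification, for example:

    ssl.endpoint.identification.algorithm=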

Note:

Normally there is no good reason to disable hostname verification apart from being the quickest way to “just get it to work” followed by the promise to “fix it later when there is more time”!
Getting hostname verification right is not that hard when done at the right time, but gets much harder once the cluster is up and running - do yourself a favor and do it now!

If host name verification is enabled, clients will verify the server’s fully qualified domain name (FQDN) or IP address against one of the following two fields:

  1. Common Name (CN)
  2. Subject Alternative Name (SAN)

While Kafka checks both fields, usage of the common name field for hostname verification has been deprecated since 2000 and should be avoided if possible. In addition the SAN field is much more flexible, allowing for multiple DNS and IP entries to be declared in a certificate.
Another advantage is that if the SAN field is used for hostname verification the common name can be set to a more meaningful value for authorization purposes. Since we need the SAN field to be contained in the signed certificate, it will be specified when generating the signing request. It can also be specified when generating the keypair, but this will not automatically be copied into the signing request.
To add a SAN field append the following argument -ext SAN=DNS:{FQDN},IP:{IPADDRESS} to the keytool command:

    $ keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
  2. Creating your own CA

After this step each machine in the cluster has a public/private key pair which can already be used to encrypt traffic and a certificate signing request, which is the basis for creating a certificate. To add authentication capabilities this signing request needs to be signed by a trusted authority, which will be created in this step.

A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports - the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have a strong assurance that they are connecting to the authentic machines.

For this guide we will act as our own Certificate Authority. When setting up a production cluster in a corporate environment, these certificates would usually be signed by a corporate CA that is trusted throughout the company. Please see Common Pitfalls in Production for some things to consider for this case.

Due to a bug in OpenSSL, the x509 module will not copy requested extension fields from CSRs into the final certificate. Since we want the SAN extension to be present in our certificate to enable hostname verification, we’ll use the ca module instead. This requires some additional configuration to be in place before we generate our CA keypair.
Save the following listing into a file called openssl-ca.cnf and adjust the values for validity and common attributes as necessary.

    HOME            = .
RANDFILE        = $ENV::HOME/.rnd

####################################################################
[ ca ]
default_ca    = CA_default      # The default ca section

[ CA_default ]

base_dir      = .
certificate   = $base_dir/cacert.pem   # The CA certificate
private_key   = $base_dir/cakey.pem    # The CA private key
new_certs_dir = $base_dir              # Location for new certs after signing
database      = $base_dir/index.txt    # Database index file
serial        = $base_dir/serial.txt   # The current serial number

default_days     = 1000         # How long to certify for
default_crl_days = 30           # How long before next CRL
default_md       = sha256       # Use public key default MD
preserve         = no           # Keep passed DN ordering

x509_extensions = ca_extensions # The extensions to add to the cert

email_in_dn     = no            # Don't concat the email in the DN
copy_extensions = copy          # Required to copy SANs from CSR to cert

####################################################################
[ req ]
default_bits       = 4096
default_keyfile    = cakey.pem
distinguished_name = ca_distinguished_name
x509_extensions    = ca_extensions
string_mask        = utf8only

####################################################################
[ ca_distinguished_name ]
countryName         = Country Name (2 letter code)
countryName_default = DE

stateOrProvinceName         = State or Province Name (full name)
stateOrProvinceName_default = Test Province

localityName                = Locality Name (eg, city)
localityName_default        = Test Town

organizationName            = Organization Name (eg, company)
organizationName_default    = Test Company

organizationalUnitName         = Organizational Unit (eg, division)
organizationalUnitName_default = Test Unit

commonName         = Common Name (e.g. server FQDN or YOUR name)
commonName_default = Test Name

emailAddress         = Email Address
emailAddress_default = test@test.com

####################################################################
[ ca_extensions ]

subjectKeyIdentifier   = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints       = critical, CA:true
keyUsage               = keyCertSign, cRLSign

####################################################################
[ signing_policy ]
countryName            = optional
stateOrProvinceName    = optional
localityName           = optional
organizationName       = optional
organizationalUnitName = optional
commonName             = supplied
emailAddress           = optional

####################################################################
[ signing_req ]
subjectKeyIdentifier   = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints       = CA:FALSE
keyUsage               = digitalSignature, keyEncipherment

Then create a database and serial number file; these will be used to keep track of which certificates were signed with this CA. Both of these are simply text files that reside in the same directory as your CA keys.

    $ echo 01 > serial.txt
$ touch index.txt

With these steps done you are now ready to generate your CA that will be used to sign certificates later.

    $ openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM

The CA is simply a public/private key pair and certificate that is signed by itself, and is only intended to sign other certificates.
This keypair should be kept very safe; if someone gains access to it, they can create and sign certificates that will be trusted by your infrastructure, which means they will be able to impersonate anybody when connecting to any service that trusts this CA.
The next step is to add the generated CA to the clients’ truststore so that the clients can trust this CA:

    $ keytool -keystore client.truststore.jks -alias CARoot -import -file cacert.pem

Note: If you configure the Kafka brokers to require client authentication by setting ssl.client.auth to be “requested” or “required” in the Kafka brokers config then you must provide a truststore for the Kafka brokers as well and it should have all the CA certificates that clients’ keys were signed by.

    $ keytool -keystore server.truststore.jks -alias CARoot -import -file cacert.pem

In contrast to the keystore in step 1 that stores each machine’s own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.

  3. Signing the certificate

Then sign the certificate signing request with the CA:

    $ openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out {server certificate} -infiles {certificate signing request}

Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:

    $ keytool -keystore {keystore} -alias CARoot -import -file {CA certificate}
$ keytool -keystore {keystore} -alias localhost -import -file {server certificate}

The definitions of the parameters are the following:

  1. keystore: the location of the keystore
  2. CA certificate: the certificate of the CA
  3. certificate signing request: the CSR created with the server key
  4. server certificate: the file to write the signed certificate of the server to

This will leave you with one truststore called truststore.jks - this can be the same for all clients and brokers and does not contain any sensitive information, so there is no need to secure this.
Additionally you will have one server.keystore.jks file per node which contains that node’s keys, certificate and your CA’s certificate; please refer to Configuring Kafka Brokers and Configuring Kafka Clients for information on how to use these files.
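
Putting these steps together with illustrative file names (server.csr for the signing request and server-cert.pem for the signed certificate), the flow for one broker might look like this:

    $ keytool -keystore server.keystore.jks -alias localhost -certreq -file server.csr -ext SAN=DNS:kafka1.example.com
    $ openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out server-cert.pem -infiles server.csr
    $ keytool -keystore server.keystore.jks -alias CARoot -import -file cacert.pem
    $ keytool -keystore server.keystore.jks -alias localhost -import -file server-cert.pem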

For some tooling assistance on this topic, please check out the easyRSA project which has extensive scripting in place to help with these steps.

SSL key and certificates in PEM format

From 2.7.0 onwards, SSL key and trust stores can be configured for Kafka brokers and clients directly in the configuration in PEM format. This avoids the need to store separate files on the file system and benefits from password protection features of Kafka configuration. PEM may also be used as the store type for file-based key and trust stores in addition to JKS and PKCS12. To configure PEM key store directly in the broker or client configuration, private key in PEM format should be provided in ssl.keystore.key and the certificate chain in PEM format should be provided in ssl.keystore.certificate.chain. To configure trust store, trust certificates, e.g. public certificate of CA, should be provided in ssl.truststore.certificates. Since PEM is typically stored as multi-line base-64 strings, the configuration value can be included in Kafka configuration as multi-line strings with lines terminating in backslash ('\') for line continuation.

Store password configs ssl.keystore.password and ssl.truststore.password are not used for PEM. If the private key is encrypted using a password, the key password must be provided in ssl.key.password. Private keys may be provided in unencrypted form without a password; in this case, in production deployments, configs should be encrypted or externalized using the password protection feature in Kafka. Note that the default SSL engine factory has limited capabilities for decryption of encrypted private keys when external tools like OpenSSL are used for encryption. Third party libraries like BouncyCastle may be integrated with a custom SslEngineFactory to support a wider range of encrypted private keys.
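
As a minimal sketch of a PEM-based broker or client configuration (the key and certificate bodies are elided and must be replaced with your actual PEM content):

ssl.keystore.type=PEM
ssl.keystore.key=-----BEGIN PRIVATE KEY----- \
... base-64 key material ... \
-----END PRIVATE KEY-----
ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- \
... base-64 certificate material ... \
-----END CERTIFICATE-----
ssl.truststore.type=PEM
ssl.truststore.certificates=-----BEGIN CERTIFICATE----- \
... base-64 CA certificate material ... \
-----END CERTIFICATE-----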

  4. Common Pitfalls in Production

The above paragraphs show the process to create your own CA and use it to sign certificates for your cluster. While very useful for sandbox, dev, test, and similar systems, this is usually not the correct process to create certificates for a production cluster in a corporate environment. Enterprises will normally operate their own CA and users can send in CSRs to be signed with this CA, which has the benefit of users not being responsible to keep the CA secure as well as a central authority that everybody can trust. However it also takes away a lot of control over the process of signing certificates from the user. Quite often the persons operating corporate CAs will apply tight restrictions on certificates that can cause issues when trying to use these certificates with Kafka.

  1. Extended Key Usage
Certificates may contain an extension field that controls the purpose for which the certificate can be used. If this field is empty, there are no restrictions on the usage, but if any usage is specified in here, valid SSL implementations have to enforce these usages.
Relevant usages for Kafka are:
    • Client authentication
    • Server authentication
Kafka brokers need both these usages to be allowed, as for intra-cluster communication every broker will behave as both the client and the server towards other brokers. It is not uncommon for corporate CAs to have a signing profile for webservers and use this for Kafka as well, which will only contain the serverAuth usage value and cause the SSL handshake to fail.
  2. Intermediate Certificates
Corporate Root CAs are often kept offline for security reasons. To enable day-to-day usage, so called intermediate CAs are created, which are then used to sign the final certificates. When importing a certificate into the keystore that was signed by an intermediate CA it is necessary to provide the entire chain of trust up to the root CA. This can be done by simply cat-ing the certificate files into one combined certificate file and then importing this with keytool.
  3. Failure to copy extension fields
CA operators are often hesitant to copy any requested extension fields from CSRs and prefer to specify these themselves, as this makes it harder for a malicious party to obtain certificates with potentially misleading or fraudulent values. It is advisable to double check signed certificates, whether these contain all requested SAN fields to enable proper hostname verification. The following command can be used to print certificate details to the console, which should be compared with what was originally requested:

            $ openssl x509 -in certificate.crt -text -noout
  5. Configuring Kafka Brokers

If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports will be necessary.

    listeners=PLAINTEXT://host.name:port,SSL://host.name:port

The following SSL configs are needed on the broker side:

    ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Optional settings that are worth considering:

  1. ssl.client.auth=none (“required” => client authentication is required, “requested” => client authentication is requested and a client without certs can still connect. The usage of “requested” is discouraged as it provides a false sense of security and misconfigured clients will still connect successfully.)
  2. ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. (Default is an empty list)
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list out the SSL protocols that you are going to accept from clients. Do note that SSL is deprecated in favor of TLS and using SSL in production is not recommended)
  4. ssl.keystore.type=JKS
  5. ssl.truststore.type=JKS
  6. ssl.secure.random.implementation=SHA1PRNG

If you want to enable SSL for inter-broker communication, add the following to the server.properties file (it defaults to PLAINTEXT):

    security.inter.broker.protocol=SSL

Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the JCE Unlimited Strength Jurisdiction Policy Files must be obtained and installed in the JDK/JRE. See the JCA Providers Documentation for more information.

The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the implementation used with the ssl.secure.random.implementation. However, there are performance issues with some implementations (notably, the default chosen on Linux systems, NativePRNG, utilizes a global lock). In cases where performance of SSL connections becomes an issue, consider explicitly setting the implementation to be used. The SHA1PRNG implementation is non-blocking, and has shown very good performance characteristics under heavy load (50 MB/sec of produced messages, plus replication traffic, per-broker).

Once you start the broker you should be able to see the following in server.log:

    with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)

To check quickly if the server keystore and truststore are set up properly you can run the following command:

    $ openssl s_client -debug -connect localhost:9093 -tls1

(Note: TLSv1 should be listed under ssl.enabled.protocols)
In the output of this command you should see server’s certificate:

    -----BEGIN CERTIFICATE-----
{variable sized random bytes}
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Sriharsha Chintalapani
issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com

If the certificate does not show up or if there are any other error messages then your keystore is not set up properly.

  6. Configuring Kafka Clients

SSL is supported only for the new Kafka Producer and Consumer; the older API is not supported. The configs for SSL will be the same for both producer and consumer.
If client authentication is not required in the broker, then the following is a minimal configuration example:

    security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234

Note: ssl.truststore.password is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
If client authentication is required, then a keystore must be created like in step 1 and the following must also be configured:

    ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234

Other configuration settings that may also be needed depending on your requirements and the broker configuration:

  1. ssl.provider (Optional). The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.
  2. ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.
  3. ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side
  4. ssl.truststore.type=JKS
  5. ssl.keystore.type=JKS
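
For reference, the client-ssl.properties file used in the console examples below might combine the settings above as follows (paths and passwords are the illustrative values from this section):

security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234
# Only needed if the broker requires client authentication
ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234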

Examples using console-producer and console-consumer:

    $ bin/kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-ssl.properties
$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties

7.4 - Authentication using SASL

Authentication using SASL

Authentication using SASL

  1. JAAS configuration

Kafka uses the Java Authentication and Authorization Service (JAAS) for SASL configuration.

1. ##### JAAS configuration for Kafka brokers

KafkaServer is the section name in the JAAS file used by each KafkaServer/Broker. This section provides SASL configuration options for the broker including any SASL client connections made by the broker for inter-broker communication. If multiple listeners are configured to use SASL, the section name may be prefixed with the listener name in lower-case followed by a period, e.g. sasl_ssl.KafkaServer.

Brokers may also configure JAAS using the broker configuration property sasl.jaas.config. The property name must be prefixed with the listener prefix including the SASL mechanism, i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one login module may be specified in the config value. If multiple mechanisms are configured on a listener, configs must be provided for each mechanism using the listener and mechanism prefix. For example,

            listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
        username="admin" \
        password="admin-secret";
    listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="admin" \
        password="admin-secret" \
        user_admin="admin-secret" \
        user_alice="alice-secret";

If JAAS configuration is defined at different levels, the order of precedence used is:

  • Broker configuration property listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config
  • {listenerName}.KafkaServer section of static JAAS configuration
  • KafkaServer section of static JAAS configuration

See GSSAPI (Kerberos), PLAIN, SCRAM or OAUTHBEARER for example broker configurations.

2. ##### JAAS configuration for Kafka clients

Clients may configure JAAS using the client configuration property sasl.jaas.config or using the static JAAS config file similar to brokers.

  1. ###### JAAS configuration using client configuration property

Clients may specify JAAS configuration as a producer or consumer property without creating a physical configuration file. This mode also enables different producers and consumers within the same JVM to use different credentials by specifying different properties for each client. If both static JAAS configuration system property java.security.auth.login.config and client property sasl.jaas.config are specified, the client property will be used.

See GSSAPI (Kerberos), PLAIN, SCRAM or OAUTHBEARER for example configurations.

  2. ###### JAAS configuration using static config file

To configure SASL authentication on the clients using static JAAS config file: 1. Add a JAAS config file with a client login section named KafkaClient. Configure a login module in KafkaClient for the selected mechanism as described in the examples for setting up GSSAPI (Kerberos), PLAIN, SCRAM or OAUTHBEARER. For example, GSSAPI credentials may be configured as:

                            KafkaClient {
                com.sun.security.auth.module.Krb5LoginModule required
                useKeyTab=true
                storeKey=true
                keyTab="/etc/security/keytabs/kafka_client.keytab"
                principal="kafka-client-1@EXAMPLE.COM";
            };

    2. Pass the JAAS config file location as JVM parameter to each client JVM. For example: 
            
                            -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf
  2. SASL configuration

SASL may be used with PLAINTEXT or SSL as the transport layer using the security protocol SASL_PLAINTEXT or SASL_SSL respectively. If SASL_SSL is used, then SSL must also be configured.

1. ##### SASL mechanisms

Kafka supports the following SASL mechanisms:

  • GSSAPI (Kerberos)
  • PLAIN
  • SCRAM-SHA-256
  • SCRAM-SHA-512
  • OAUTHBEARER

2. ##### SASL configuration for Kafka brokers

  1. Configure a SASL port in server.properties, by adding at least one of SASL_PLAINTEXT or SASL_SSL to the _listeners_ parameter, which contains one or more comma-separated values: 
        
                    listeners=SASL_PLAINTEXT://host.name:port

If you are only configuring a SASL port (or if you want the Kafka brokers to authenticate each other using SASL) then make sure you set the same SASL protocol for inter-broker communication:

                    security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)

  2. Select one or more supported mechanisms to enable in the broker and follow the steps to configure SASL for the mechanism. To enable multiple mechanisms in the broker, follow the steps here.
3. ##### SASL configuration for Kafka clients

SASL authentication is only supported for the new Java Kafka producer and consumer; the older API is not supported.

To configure SASL authentication on the clients, select a SASL mechanism that is enabled in the broker for client authentication and follow the steps to configure SASL for the selected mechanism.

Note: When establishing connections to brokers via SASL, clients may perform a reverse DNS lookup of the broker address. Due to how the JRE implements reverse DNS lookups, clients may observe slow SASL handshakes if fully qualified domain names are not used, for both the client’s bootstrap.servers and a broker’s advertised.listeners.

  3. Authentication using SASL/Kerberos

1. ##### Prerequisites

  1. **Kerberos**  

If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one; your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (Ubuntu, Redhat). Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.
  2. **Create Kerberos Principals**
If you are using the organization’s Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:

                    $ sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
        $ sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"

  3. **Make sure all hosts are reachable using hostnames** - it is a Kerberos requirement that all your hosts can be resolved with their FQDNs.
2. ##### Configuring Kafka Brokers

  1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab): 
        
                    KafkaServer {
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            keyTab="/etc/security/keytabs/kafka_server.keytab"
            principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
        };

The KafkaServer section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It allows the broker to log in using the keytab specified in this section.
  2. Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see here for more details):

                    -Djava.security.krb5.conf=/etc/kafka/krb5.conf
        -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf

  3. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting kafka broker.
  4. Configure SASL port and SASL mechanisms in server.properties as described here. For example: 
        
                    listeners=SASL_PLAINTEXT://host.name:port
        security.inter.broker.protocol=SASL_PLAINTEXT
        sasl.mechanism.inter.broker.protocol=GSSAPI
        sasl.enabled.mechanisms=GSSAPI

We must also configure the service name in server.properties, which should match the principal name of the kafka brokers. In the above example, the principal is “kafka/kafka1.hostname.com@EXAMPLE.COM”, so:

                    sasl.kerberos.service.name=kafka

3. ##### Configuring Kafka Clients

To configure SASL authentication on the clients: 1. Clients (producers, consumers, connect workers, etc) will authenticate to the cluster with their own principal (usually with the same name as the user running the client), so obtain or create these principals as needed. Then configure the JAAS configuration property for each client. Different clients within a JVM may run as different users by specifying different principals. The property sasl.jaas.config in producer.properties or consumer.properties describes how clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client using a keytab (recommended for long-running processes):

                    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
            useKeyTab=true \
            storeKey=true  \
            keyTab="/etc/security/keytabs/kafka_client.keytab" \
            principal="kafka-client-1@EXAMPLE.COM";

For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used along with “useTicketCache=true” as in:

                    sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
            useTicketCache=true;

JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.
  2. Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting the kafka client.
  3. Optionally pass the krb5 file locations as JVM parameters to each client JVM (see here for more details):

                    -Djava.security.krb5.conf=/etc/kafka/krb5.conf

  4. Configure the following properties in producer.properties or consumer.properties: 
        
                    security.protocol=SASL_PLAINTEXT (or SASL_SSL)
        sasl.mechanism=GSSAPI
        sasl.kerberos.service.name=kafka
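As a minimal sketch of the JVM-parameter alternative mentioned in step 1 (the file name and paths are only examples), a static JAAS file with a KafkaClient section could look like the following and be passed with -Djava.security.auth.login.config:

            KafkaClient {
                com.sun.security.auth.module.Krb5LoginModule required
                useKeyTab=true
                storeKey=true
                keyTab="/etc/security/keytabs/kafka_client.keytab"
                principal="kafka-client-1@EXAMPLE.COM";
            };

            -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf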
  1. Authentication using SASL/PLAIN

SASL/PLAIN is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication. Kafka supports a default implementation for SASL/PLAIN which can be extended for production use as described here.

Under the default implementation of principal.builder.class, the username is used as the authenticated Principal for configuration of ACLs etc.

1. ##### Configuring Kafka Brokers

  1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example: 
        
                    KafkaServer {
            org.apache.kafka.common.security.plain.PlainLoginModule required
            username="admin"
            password="admin-secret"
            user_admin="admin-secret"
            user_alice="alice-secret";
        };

This configuration defines two users (admin and alice). The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication. The set of properties user_<userName> defines the passwords for all users that connect to the broker, and the broker validates all client connections, including those from other brokers, using these properties.

  2. Pass the JAAS config file location as JVM parameter to each Kafka broker:

                    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf

  3. Configure SASL port and SASL mechanisms in server.properties as described here. For example: 
        
                    listeners=SASL_SSL://host.name:port
        security.inter.broker.protocol=SASL_SSL
        sasl.mechanism.inter.broker.protocol=PLAIN
        sasl.enabled.mechanisms=PLAIN

2. ##### Configuring Kafka Clients

To configure SASL authentication on the clients:

  1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the PLAIN mechanism:

                    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
            username="alice" \
            password="alice-secret";

The options username and password are used by clients to configure the user for client connections. In this example, clients connect to the broker as user alice. Different clients within a JVM may connect as different users by specifying different user names and passwords in sasl.jaas.config.

JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.

  2. Configure the following properties in producer.properties or consumer.properties: 
        
                    security.protocol=SASL_SSL
        sasl.mechanism=PLAIN

3. ##### Use of SASL/PLAIN in production

   * SASL/PLAIN should be used only with SSL as transport layer to ensure that clear passwords are not transmitted on the wire without encryption.
   * The default implementation of SASL/PLAIN in Kafka specifies usernames and passwords in the JAAS configuration file as shown here. From Kafka version 2.0 onwards, you can avoid storing clear passwords on disk by configuring your own callback handlers that obtain username and password from an external source using the configuration options `sasl.server.callback.handler.class` and `sasl.client.callback.handler.class`.
   * In production systems, external authentication servers may implement password authentication. From Kafka version 2.0 onwards, you can plug in your own callback handlers that use external authentication servers for password verification by configuring `sasl.server.callback.handler.class` (a configuration sketch follows this list).
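As an illustration only, a hedged sketch of how such callback handlers might be wired up; the class names are hypothetical, and only the configuration option names come from the text above:

            # broker side (server.properties): verify PLAIN passwords against an external source
            sasl.server.callback.handler.class=com.example.ExternalPlainServerCallbackHandler

            # client side (producer.properties/consumer.properties): obtain the username and
            # password from an external source instead of storing them in the JAAS config
            sasl.client.callback.handler.class=com.example.ExternalPlainClientCallbackHandler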
  1. Authentication using SASL/SCRAM

Salted Challenge Response Authentication Mechanism (SCRAM) is a family of SASL mechanisms that addresses the security concerns with traditional mechanisms that perform username/password authentication like PLAIN and DIGEST-MD5. The mechanism is defined in RFC 5802. Kafka supports SCRAM-SHA-256 and SCRAM-SHA-512 which can be used with TLS to perform secure authentication. Under the default implementation of principal.builder.class, the username is used as the authenticated Principal for configuration of ACLs etc. The default SCRAM implementation in Kafka stores SCRAM credentials in the metadata log. Refer to Security Considerations for more details.

1. ##### Creating SCRAM Credentials

The SCRAM implementation in Kafka uses the metadata log as credential store. Credentials can be created in the metadata log using kafka-storage.sh or kafka-configs.sh. For each SCRAM mechanism enabled, credentials must be created by adding a config with the mechanism name. Credentials for inter-broker communication must be created before Kafka brokers are started. kafka-storage.sh can format storage with initial credentials. Client credentials may be created and updated dynamically and updated credentials will be used to authenticate new connections. kafka-configs.sh can be used to create and update credentials after Kafka brokers are started.

Create initial SCRAM credentials for user admin with password admin-secret:

            $ bin/kafka-storage.sh format -t $(bin/kafka-storage.sh random-uuid) -c config/server.properties --add-scram 'SCRAM-SHA-256=[name="admin",password="admin-secret"]'

Create SCRAM credentials for user alice with password alice-secret (refer to Configuring Kafka Clients for client configuration):

            $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' --entity-type users --entity-name alice --command-config client.properties

The default iteration count of 4096 is used if iterations are not specified. A random salt is created if it’s not specified. The SCRAM identity, consisting of salt, iterations, StoredKey and ServerKey, is stored in the metadata log. See RFC 5802 for details on SCRAM identity and the individual fields.

Existing credentials may be listed using the --describe option:

            $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type users --entity-name alice --command-config client.properties

Credentials may be deleted for one or more SCRAM mechanisms using the --alter --delete-config option:

            $ bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --delete-config 'SCRAM-SHA-256' --entity-type users --entity-name alice --command-config client.properties
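The client.properties file passed via --command-config above is an ordinary Admin client configuration file. A minimal sketch (the security protocol and credentials are assumptions for illustration) might look like:

            security.protocol=SASL_SSL
            sasl.mechanism=SCRAM-SHA-256
            sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
                username="admin" \
                password="admin-secret";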

2. ##### Configuring Kafka Brokers

  1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example: 
        
                    KafkaServer {
            org.apache.kafka.common.security.scram.ScramLoginModule required
            username="admin"
            password="admin-secret";
        };

The properties username and password in the KafkaServer section are used by the broker to initiate connections to other brokers. In this example, admin is the user for inter-broker communication.

  2. Pass the JAAS config file location as JVM parameter to each Kafka broker:

                    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf

  3. Configure SASL port and SASL mechanisms in server.properties as described here. For example: 
        
                    listeners=SASL_SSL://host.name:port
        security.inter.broker.protocol=SASL_SSL
        sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256 (or SCRAM-SHA-512)
        sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)

3. ##### Configuring Kafka Clients

To configure SASL authentication on the clients:

  1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the SCRAM mechanisms:

                    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
            username="alice" \
            password="alice-secret";

The options username and password are used by clients to configure the user for client connections. In this example, clients connect to the broker as user alice. Different clients within a JVM may connect as different users by specifying different user names and passwords in sasl.jaas.config.

JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.

  2. Configure the following properties in producer.properties or consumer.properties: 
        
                    security.protocol=SASL_SSL
        sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)

4. ##### Security Considerations for SASL/SCRAM

   * The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in the metadata log. This is suitable for production use in installations where KRaft controllers are secure and on a private network.
   * Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count of 4096. Strong hash functions combined with strong passwords and high iteration counts protect against brute force attacks if the security of the KRaft controllers is compromised.
   * SCRAM should be used only with TLS-encryption to prevent interception of SCRAM exchanges. This protects against dictionary or brute force attacks and against impersonation if the security of the KRaft controllers is compromised.
   * From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers by configuring `sasl.server.callback.handler.class` in installations where KRaft controllers are not secure.
   * For more details on security considerations, refer to [RFC 5802](https://tools.ietf.org/html/rfc5802#section-9).
  1. Authentication using SASL/OAUTHBEARER

The OAuth 2 Authorization Framework “enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf.” The SASL OAUTHBEARER mechanism enables the use of the framework in a SASL (i.e. a non-HTTP) context; it is defined in RFC 7628. The default OAUTHBEARER implementation in Kafka creates and validates Unsecured JSON Web Tokens and is only suitable for use in non-production Kafka installations. Refer to Security Considerations for more details.

Under the default implementation of principal.builder.class, the principalName of OAuthBearerToken is used as the authenticated Principal for configuration of ACLs etc.

1. ##### Configuring Kafka Brokers

  1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example: 
        
                    KafkaServer {
            org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
            unsecuredLoginStringClaim_sub="admin";
        };

The property unsecuredLoginStringClaim_sub in the KafkaServer section is used by the broker when it initiates connections to other brokers. In this example, admin will appear in the subject (sub) claim and will be the user for inter-broker communication.

  2. Pass the JAAS config file location as JVM parameter to each Kafka broker:

                    -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf

  3. Configure SASL port and SASL mechanisms in server.properties as described here. For example: 
        
                    listeners=SASL_SSL://host.name:port (or SASL_PLAINTEXT if non-production)
        security.inter.broker.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
        sasl.mechanism.inter.broker.protocol=OAUTHBEARER
        sasl.enabled.mechanisms=OAUTHBEARER

2. ##### Configuring Kafka Clients

To configure SASL authentication on the clients:

  1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the OAUTHBEARER mechanism:

                    sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
            unsecuredLoginStringClaim_sub="alice";

The option unsecuredLoginStringClaim_sub is used by clients to configure the subject (sub) claim, which determines the user for client connections. In this example, clients connect to the broker as user alice. Different clients within a JVM may connect as different users by specifying different subject (sub) claims in sasl.jaas.config.

JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.

  2. Configure the following properties in producer.properties or consumer.properties: 
        
                    security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
        sasl.mechanism=OAUTHBEARER

  3. The default implementation of SASL/OAUTHBEARER depends on the jackson-databind library. Since it's an optional dependency, users have to configure it as a dependency via their build tool.
3. ##### Unsecured Token Creation Options for SASL/OAUTHBEARER

   * The default implementation of SASL/OAUTHBEARER in Kafka creates and validates [Unsecured JSON Web Tokens](https://tools.ietf.org/html/rfc7515#appendix-A.5). While suitable only for non-production use, it does provide the flexibility to create arbitrary tokens in a DEV or TEST environment.
   * Here are the various supported JAAS module options on the client side (and on the broker side if OAUTHBEARER is the inter-broker protocol):

| JAAS Module Option for Unsecured Token Creation | Documentation |
| --- | --- |
| unsecuredLoginStringClaim_<claimname>="value" | Creates a String claim with the given name and value. Any valid claim name can be specified except ‘iat’ and ‘exp’ (these are automatically generated). |
| unsecuredLoginNumberClaim_<claimname>="value" | Creates a Number claim with the given name and value. Any valid claim name can be specified except ‘iat’ and ‘exp’ (these are automatically generated). |
| unsecuredLoginListClaim_<claimname>="value" | Creates a String List claim with the given name and values parsed from the given value where the first character is taken as the delimiter. For example: unsecuredLoginListClaim_fubar="\|value1\|value2". Any valid claim name can be specified except ‘iat’ and ‘exp’ (these are automatically generated). |
| unsecuredLoginExtension_<extensionname>="value" | Creates a String extension with the given name and value. For example: unsecuredLoginExtension_traceId="123". A valid extension name is any sequence of lowercase or uppercase alphabet characters. In addition, the “auth” extension name is reserved. A valid extension value is any combination of characters with ASCII codes 1-127. |
| unsecuredLoginPrincipalClaimName | Set to a custom claim name if you wish the name of the String claim holding the principal name to be something other than ‘sub’. |
| unsecuredLoginLifetimeSeconds | Set to an integer value if the token expiration is to be set to something other than the default value of 3600 seconds (which is 1 hour). The ‘exp’ claim will be set to reflect the expiration time. |
| unsecuredLoginScopeClaimName | Set to a custom claim name if you wish the name of the String or String List claim holding any token scope to be something other than ‘scope’. |
4. ##### Unsecured Token Validation Options for SASL/OAUTHBEARER

   * Here are the various supported JAAS module options on the broker side for [Unsecured JSON Web Token](https://tools.ietf.org/html/rfc7515#appendix-A.5) validation:

| JAAS Module Option for Unsecured Token Validation | Documentation |
| --- | --- |
| unsecuredValidatorPrincipalClaimName="value" | Set to a non-empty value if you wish a particular String claim holding a principal name to be checked for existence; the default is to check for the existence of the ‘sub’ claim. |
| unsecuredValidatorScopeClaimName="value" | Set to a custom claim name if you wish the name of the String or String List claim holding any token scope to be something other than ‘scope’. |
| unsecuredValidatorRequiredScope="value" | Set to a space-delimited list of scope values if you wish the String/String List claim holding the token scope to be checked to make sure it contains certain values. |
| unsecuredValidatorAllowableClockSkewMs="value" | Set to a positive integer value if you wish to allow up to some number of positive milliseconds of clock skew (the default is 0). |
   * The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments) using custom login and SASL Server callback handlers.
   * For more details on security considerations, refer to RFC 6749, Section 10.

5. ##### Token Refresh for SASL/OAUTHBEARER

Kafka periodically refreshes any token before it expires so that the client can continue to make connections to brokers. The parameters that impact how the refresh algorithm operates are specified as part of the producer/consumer/broker configuration and are listed below (an illustrative sketch follows the list). See the documentation for these properties elsewhere for details. The default values are usually reasonable, in which case these configuration parameters would not need to be explicitly set.

  • sasl.login.refresh.window.factor
  • sasl.login.refresh.window.jitter
  • sasl.login.refresh.min.period.seconds
  • sasl.login.refresh.min.buffer.seconds
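As an illustrative sketch only (the values shown are assumptions mirroring what are believed to be the defaults, not recommendations), these properties could be set explicitly like this:

            sasl.login.refresh.window.factor=0.8
            sasl.login.refresh.window.jitter=0.05
            sasl.login.refresh.min.period.seconds=60
            sasl.login.refresh.min.buffer.seconds=300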
6. ##### Secure/Production Use of SASL/OAUTHBEARER

Production use cases will require writing an implementation of org.apache.kafka.common.security.auth.AuthenticateCallbackHandler that can handle an instance of org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback and declaring it via either the sasl.login.callback.handler.class configuration option for a non-broker client or via the listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class configuration option for brokers (when SASL/OAUTHBEARER is the inter-broker protocol).
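For example, a hedged sketch of how such a login callback handler might be declared; the class name is hypothetical, and only the configuration option names come from the surrounding text:

            # non-broker client
            sasl.login.callback.handler.class=com.example.MyOAuthBearerLoginCallbackHandler

            # broker (when SASL/OAUTHBEARER is the inter-broker protocol)
            listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class=com.example.MyOAuthBearerLoginCallbackHandler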

Production use cases will also require writing an implementation of org.apache.kafka.common.security.auth.AuthenticateCallbackHandler that can handle an instance of org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback and declaring it via the listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class broker configuration option.

7. ##### Security Considerations for SASL/OAUTHBEARER

   * The default implementation of SASL/OAUTHBEARER in Kafka creates and validates [Unsecured JSON Web Tokens](https://tools.ietf.org/html/rfc7515#appendix-A.5). This is suitable only for non-production use.
   * OAUTHBEARER should be used in production environments only with TLS-encryption to prevent interception of tokens.
   * The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments) using custom login and SASL Server callback handlers as described above.
   * For more details on OAuth 2 security considerations in general, refer to [RFC 6749, Section 10](https://tools.ietf.org/html/rfc6749#section-10).
  1. Enabling multiple SASL mechanisms in a broker

1. Specify configuration for the login modules of all enabled mechanisms in the `KafkaServer` section of the JAAS config file. For example: 
    
            KafkaServer {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="/etc/security/keytabs/kafka_server.keytab"
        principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
    
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="admin"
        password="admin-secret"
        user_admin="admin-secret"
        user_alice="alice-secret";
    };

2. Enable the SASL mechanisms in server.properties: 
    
            sasl.enabled.mechanisms=GSSAPI,PLAIN,SCRAM-SHA-256,SCRAM-SHA-512,OAUTHBEARER

3. Specify the SASL security protocol and mechanism for inter-broker communication in server.properties if required: 
    
            security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
    sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechanisms)

4. Follow the mechanism-specific steps in GSSAPI (Kerberos), PLAIN, SCRAM and OAUTHBEARER to configure SASL for the enabled mechanisms.
  1. Modifying SASL mechanism in a Running Cluster

SASL mechanism can be modified in a running cluster using the following sequence:

1. Enable the new SASL mechanism by adding the mechanism to `sasl.enabled.mechanisms` in server.properties for each broker. Update the JAAS config file to include both mechanisms as described here. Incrementally bounce the cluster nodes (see the sketch after this list).
2. Restart clients using the new mechanism.
3. To change the mechanism of inter-broker communication (if this is required), set `sasl.mechanism.inter.broker.protocol` in server.properties to the new mechanism and incrementally bounce the cluster again.
4. To remove old mechanism (if this is required), remove the old mechanism from `sasl.enabled.mechanisms` in server.properties and remove the entries for the old mechanism from JAAS config file. Incrementally bounce the cluster again.
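As a sketch of what this might look like when migrating, for example, from GSSAPI to SCRAM-SHA-256 (mechanisms chosen purely for illustration), the relevant server.properties entries would evolve as follows:

            # step 1: both mechanisms enabled while the cluster is incrementally bounced
            sasl.enabled.mechanisms=GSSAPI,SCRAM-SHA-256
            sasl.mechanism.inter.broker.protocol=GSSAPI

            # step 3: switch inter-broker communication to the new mechanism, then bounce again
            sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256

            # step 4: remove the old mechanism, then bounce again
            sasl.enabled.mechanisms=SCRAM-SHA-256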
  1. Authentication using Delegation Tokens

Delegation token based authentication is a lightweight authentication mechanism to complement existing SASL/SSL methods. Delegation tokens are shared secrets between kafka brokers and clients. Delegation tokens will help processing frameworks to distribute the workload to available workers in a secure environment without the added cost of distributing Kerberos TGT/keytabs or keystores when 2-way SSL is used. See KIP-48 for more details.

Under the default implementation of principal.builder.class, the owner of delegation token is used as the authenticated Principal for configuration of ACLs etc.

Typical steps for delegation token usage are:

1. User authenticates with the Kafka cluster via SASL or SSL, and obtains a delegation token. This can be done using Admin APIs or using `kafka-delegation-tokens.sh` script.
2. User securely passes the delegation token to Kafka clients for authenticating with the Kafka cluster.
3. Token owner/renewer can renew/expire the delegation tokens.
1. ##### Token Management

A secret is used to generate and verify delegation tokens. This is supplied using config option delegation.token.secret.key. The same secret key must be configured across all the brokers. The controllers must also be configured with the secret using the same config option. If the secret is not set or set to empty string, delegation token authentication and API operations will fail.

The token details are stored with the other metadata on the controller nodes and delegation tokens are suitable for use when the controllers are on a private network or when all communications between brokers and controllers is encrypted. Currently, this secret is stored as plain text in the server.properties config file. We intend to make these configurable in a future Kafka release.

A token has a current life, and a maximum renewable life. By default, tokens must be renewed once every 24 hours for up to 7 days. These can be configured using delegation.token.expiry.time.ms and delegation.token.max.lifetime.ms config options.

Tokens can also be cancelled explicitly. If a token is not renewed by the token’s expiration time or if token is beyond the max life time, it will be deleted from all broker caches.
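A minimal sketch of the corresponding broker/controller settings (the secret value is a placeholder; the expiry and maximum lifetime shown simply spell out the defaults mentioned above):

            delegation.token.secret.key=some-long-random-secret
            # tokens must be renewed at least once every 24 hours...
            delegation.token.expiry.time.ms=86400000
            # ...for up to a maximum lifetime of 7 days
            delegation.token.max.lifetime.ms=604800000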

2. ##### Creating Delegation Tokens

Tokens can be created by using Admin APIs or the kafka-delegation-tokens.sh script. Delegation token requests (create/renew/expire/describe) should be issued only on SASL or SSL authenticated channels. Tokens cannot be requested if the initial authentication is done through a delegation token. A user can create tokens for themselves or for a different owner by specifying the --owner-principal parameter. Owners/renewers can renew or expire tokens. Owners/renewers can always describe their own tokens. To describe other tokens, a DESCRIBE_TOKEN permission needs to be added on the User resource representing the owner of the token. kafka-delegation-tokens.sh script examples are given below.

Create a delegation token:

            $ bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create   --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1

Create a delegation token for a different owner:

            $ bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create   --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1 --owner-principal User:owner1

Renew a delegation token:

            $ bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --renew    --renew-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK

Expire a delegation token:

            $ bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --expire   --expiry-time-period -1   --command-config client.properties  --hmac ABCDEFGHIJK

Existing tokens can be described using the --describe option:

            $ bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --describe --command-config client.properties  --owner-principal User:user1

3. ##### Token Authentication

Delegation token authentication piggybacks on the current SASL/SCRAM authentication mechanism. We must enable the SASL/SCRAM mechanism on the Kafka cluster as described here.

Configuring Kafka Clients:

  1. Configure the JAAS configuration property for each client in producer.properties or consumer.properties. The login module describes how the clients like producer and consumer can connect to the Kafka Broker. The following is an example configuration for a client for the token authentication: 
        
                    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
            username="tokenID123" \
            password="lAYYSFmLs4bTjf+lTZ1LCHR/ZZFNA==" \
            tokenauth="true";

The options username and password are used by clients to configure the token id and token HMAC. The option tokenauth is used to indicate token authentication to the server. In this example, clients connect to the broker using the token id tokenID123. Different clients within a JVM may connect using different tokens by specifying different token details in sasl.jaas.config.

JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.
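Putting these pieces together, a complete producer.properties or consumer.properties for token authentication might look like the following sketch (the SCRAM mechanism and the token values are illustrative):

            security.protocol=SASL_SSL
            sasl.mechanism=SCRAM-SHA-256
            sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
                username="tokenID123" \
                password="lAYYSFmLs4bTjf+lTZ1LCHR/ZZFNA==" \
                tokenauth="true";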

4. ##### Procedure to manually rotate the secret:

We require a re-deployment when the secret needs to be rotated. During this process, already-connected clients will continue to work, but any new connection requests and renew/expire requests with old tokens can fail. The steps are given below.

  1. Expire all existing tokens.
  2. Rotate the secret by performing a rolling upgrade.
  3. Generate new tokens.

We intend to automate this in a future Kafka release.

7.5 - Authorization and ACLs

Kafka ships with a pluggable authorization framework, which is configured with the authorizer.class.name property in the server configuration. Configured implementations must extend org.apache.kafka.server.authorizer.Authorizer. Kafka provides a default implementation which stores ACLs in the cluster metadata (KRaft metadata log). For KRaft clusters, use the following configuration on all nodes (brokers, controllers, or combined broker/controller nodes):

authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

Kafka ACLs are defined in the general format of “Principal {P} is [Allowed|Denied] Operation {O} From Host {H} on any Resource {R} matching ResourcePattern {RP}”. You can read more about the ACL structure in KIP-11 and resource patterns in KIP-290. In order to add, remove, or list ACLs, you can use the Kafka ACL CLI kafka-acls.sh. By default, if no ResourcePatterns match a specific Resource R, then R has no associated ACLs, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties.

allow.everyone.if.no.acl.found=true

One can also add super users in server.properties like the following (note that the delimiter is a semicolon since SSL user names may contain commas). Default PrincipalType string “User” is case sensitive.

super.users=User:Bob;User:Alice

KRaft Principal Forwarding

In KRaft clusters, admin requests such as CreateTopics and DeleteTopics are sent to the broker listeners by the client. The broker then forwards the request to the active controller through the first listener configured in controller.listener.names. Authorization of these requests is done on the controller node. This is achieved by way of an Envelope request which packages both the underlying request from the client as well as the client principal. When the controller receives the forwarded Envelope request from the broker, it first authorizes the Envelope request using the authenticated broker principal. Then it authorizes the underlying request using the forwarded principal.
All of this implies that Kafka must understand how to serialize and deserialize the client principal. The authentication framework allows for customized principals by overriding the principal.builder.class configuration. In order for customized principals to work with KRaft, the configured class must implement org.apache.kafka.common.security.auth.KafkaPrincipalSerde so that Kafka knows how to serialize and deserialize the principals. The default implementation org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder uses the Kafka RPC format defined in the source code: clients/src/main/resources/common/message/DefaultPrincipalData.json.

Customizing SSL User Name

By default, the SSL user name will be of the form “CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown”. One can change that by setting ssl.principal.mapping.rules to a customized rule in server.properties. This config allows a list of rules for mapping X.500 distinguished names to short names. The rules are evaluated in order and the first rule that matches a distinguished name is used to map it to a short name. Any later rules in the list are ignored.
The format of ssl.principal.mapping.rules is a list where each rule starts with “RULE:” and contains an expression in one of the following formats. The default rule returns the string representation of the X.500 certificate distinguished name. If the distinguished name matches the pattern, then the replacement command will be run over the name. This also supports lowercase/uppercase options, to force the translated result to be all lowercase or all uppercase. This is done by adding a “/L” or “/U” to the end of the rule.

RULE:pattern/replacement/
RULE:pattern/replacement/[LU]

Example ssl.principal.mapping.rules values are:

RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,
RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L,
RULE:^.*[Cc][Nn]=([a-zA-Z0-9.]*).*$/$1/L,
DEFAULT

Above rules translate distinguished name “CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown” to “serviceuser” and “CN=adminUser,OU=Admin,O=Unknown,L=Unknown,ST=Unknown,C=Unknown” to “adminuser@admin”.
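As a sketch, the example rules above could be supplied in server.properties as a single, line-wrapped property value (the wrapping with backslashes is just one way to keep the file readable):

            ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,\
                RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L,\
                RULE:^.*[Cc][Nn]=([a-zA-Z0-9.]*).*$/$1/L,\
                DEFAULT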
For advanced use cases, one can customize the name by setting a customized PrincipalBuilder in server.properties like the following.

principal.builder.class=CustomizedPrincipalBuilderClass

Customizing SASL User Name

By default, the SASL user name will be the primary part of the Kerberos principal. One can change that by setting sasl.kerberos.principal.to.local.rules to a customized rule in server.properties. The format of sasl.kerberos.principal.to.local.rules is a list where each rule works in the same way as the auth_to_local in the Kerberos configuration file (krb5.conf). This also supports an additional lowercase/uppercase rule, to force the translated result to be all lowercase or all uppercase. This is done by adding a “/L” or “/U” to the end of the rule. Each rule starts with RULE: and contains an expression in one of the following formats. See the Kerberos documentation for more details.

RULE:[n:string](regexp)s/pattern/replacement/
RULE:[n:string](regexp)s/pattern/replacement/g
RULE:[n:string](regexp)s/pattern/replacement//L
RULE:[n:string](regexp)s/pattern/replacement/g/L
RULE:[n:string](regexp)s/pattern/replacement//U
RULE:[n:string](regexp)s/pattern/replacement/g/U

An example of adding a rule to properly translate user@MYDOMAIN.COM to user while also keeping the default rule in place is:

sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT

Command Line Interface

The Kafka Authorization management CLI can be found under the bin directory with all the other CLIs. The CLI script is called kafka-acls.sh. The following table lists all the options that the script supports:

| Option | Description | Default | Option type |
| --- | --- | --- | --- |
| --add | Indicates to the script that user is trying to add an acl. | | Action |
| --remove | Indicates to the script that user is trying to remove an acl. | | Action |
| --list | Indicates to the script that user is trying to list acls. | | Action |
| --bootstrap-server | A list of host/port pairs to use for establishing the connection to the Kafka cluster broker. Only one of --bootstrap-server or --bootstrap-controller option must be specified. | | Configuration |
| --bootstrap-controller | A list of host/port pairs to use for establishing the connection to the Kafka cluster controller. Only one of --bootstrap-server or --bootstrap-controller option must be specified. | | Configuration |
| --command-config | A property file containing configs to be passed to Admin Client. This option can only be used with --bootstrap-server option. | | Configuration |
| --cluster | Indicates to the script that the user is trying to interact with acls on the singular cluster resource. | | ResourcePattern |
| --topic [topic-name] | Indicates to the script that the user is trying to interact with acls on topic resource pattern(s). | | ResourcePattern |
| --group [group-name] | Indicates to the script that the user is trying to interact with acls on consumer-group resource pattern(s). | | ResourcePattern |
| --transactional-id [transactional-id] | The transactionalId to which ACLs should be added or removed. A value of * indicates the ACLs should apply to all transactionalIds. | | ResourcePattern |
| --delegation-token [delegation-token] | Delegation token to which ACLs should be added or removed. A value of * indicates ACL should apply to all tokens. | | ResourcePattern |
| --user-principal [user-principal] | A user resource to which ACLs should be added or removed. This is currently supported in relation with delegation tokens. A value of * indicates ACL should apply to all users. | | ResourcePattern |
| --resource-pattern-type [pattern-type] | Indicates to the script the type of resource pattern, (for --add), or resource pattern filter, (for --list and --remove), the user wishes to use. When adding acls, this should be a specific pattern type, e.g. ’literal’ or ‘prefixed’. When listing or removing acls, a specific pattern type filter can be used to list or remove acls from a specific type of resource pattern, or the filter values of ‘any’ or ‘match’ can be used, where ‘any’ will match any pattern type, but will match the resource name exactly, and ‘match’ will perform pattern matching to list or remove all acls that affect the supplied resource(s). WARNING: ‘match’, when used in combination with the ‘--remove’ switch, should be used with care. | literal | Configuration |
| --allow-principal | Principal is in PrincipalType:name format that will be added to ACL with Allow permission. Default PrincipalType string “User” is case sensitive. You can specify multiple --allow-principal in a single command. | | Principal |
| --deny-principal | Principal is in PrincipalType:name format that will be added to ACL with Deny permission. Default PrincipalType string “User” is case sensitive. You can specify multiple --deny-principal in a single command. | | Principal |
| --principal | Principal is in PrincipalType:name format that will be used along with --list option. Default PrincipalType string “User” is case sensitive. This will list the ACLs for the specified principal. You can specify multiple --principal in a single command. | | Principal |
| --allow-host | IP address from which principals listed in --allow-principal will have access. | if --allow-principal is specified defaults to * which translates to “all hosts” | Host |
| --deny-host | IP address from which principals listed in --deny-principal will be denied access. | if --deny-principal is specified defaults to * which translates to “all hosts” | Host |
| --operation | Operation that will be allowed or denied. Valid values are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, DescribeConfigs, AlterConfigs, IdempotentWrite, CreateTokens, DescribeTokens, All | All | Operation |
| --producer | Convenience option to add/remove acls for producer role. This will generate acls that allows WRITE, DESCRIBE and CREATE on topic. | | Convenience |
| --consumer | Convenience option to add/remove acls for consumer role. This will generate acls that allows READ, DESCRIBE on topic and READ on consumer-group. | | Convenience |
| --idempotent | Enable idempotence for the producer. This should be used in combination with the --producer option. Note that idempotence is enabled automatically if the producer is authorized to a particular transactional-id. | | Convenience |
| --force | Convenience option to assume yes to all queries and do not prompt. | | Convenience |

Examples

  • Adding Acls
    Suppose you want to add an acl “Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-Topic from IP 198.51.100.0 and IP 198.51.100.1”. You can do that by executing the CLI with following options:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
    

By default, all principals that don’t have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal we will have to use the --deny-principal and --deny-host options. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from IP 198.51.100.3 we can do so using the following commands:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:'*' --allow-host '*' --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic

Note that --allow-host and --deny-host only support IP addresses (hostnames are not supported). The above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly one can add acls to a cluster by specifying --cluster and to a consumer group by specifying --group [group-name]. You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl “Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.1”. You can do that by using the wildcard resource ‘*’, e.g. by executing the CLI with following options:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Peter --allow-host 198.51.200.1 --producer --topic '*'

You can add acls on prefixed resource patterns, e.g. suppose you want to add an acl “Principal User:Jane is allowed to produce to any Topic whose name starts with ‘Test-’ from any host”. You can do that by executing the CLI with following options:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed

Note, --resource-pattern-type defaults to ’literal’, which only affects resources with the exact same name or, in the case of the wildcard resource name ‘*’, a resource with any name.

  • Removing Acls
    Removing acls is pretty much the same. The only difference is instead of --add option users will have to specify --remove option. To remove the acls added by the first example above we can execute the CLI with following options:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic 
    

To remove the acl added to the prefixed resource pattern above, we can execute the CLI with following options:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type Prefixed
  • List Acls
    We can list acls for any resource by specifying the --list option with the resource. To list all acls on the literal resource pattern Test-topic, we can execute the CLI with following options:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic Test-topic
    

However, this will only return the acls that have been added to this exact resource pattern. Other acls can exist that affect access to the topic, e.g. any acls on the topic wildcard ‘*’, or any acls on prefixed resource patterns. Acls on the wildcard resource pattern can be queried explicitly:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic '*'

However, it is not necessarily possible to explicitly query for acls on prefixed resource patterns that match Test-topic as the name of such patterns may not be known. We can list all acls affecting Test-topic by using ‘--resource-pattern-type match’, e.g.

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --list --topic Test-topic --resource-pattern-type match

This will list acls on all matching literal, wildcard and prefixed resource patterns.

  • Adding or removing a principal as producer or consumer
    The most common use case for acl management is adding/removing a principal as producer or consumer, so we added convenience options to handle these cases. In order to add User:Bob as a producer of Test-topic we can execute the following command:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Bob --producer --topic Test-topic
    

    Similarly, to add Alice as a consumer of Test-topic with consumer group Group-1 we just have to pass the --consumer option:

    $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --add --allow-principal User:Alice --consumer --topic Test-topic --group Group-1 

Note that for the consumer option we must also specify the consumer group. In order to remove a principal from a producer or consumer role we just need to pass the --remove option.

Authorization Primitives

Protocol calls usually perform some operations on certain resources in Kafka. It is required to know the operations and resources to set up effective protection. In this section we’ll list these operations and resources, then list the combination of these with the protocols to see the valid scenarios.

Operations in Kafka

There are a few operation primitives that can be used to build up privileges. These can be matched up with certain resources to allow specific protocol calls for a given user. These are:

  • Read
  • Write
  • Create
  • Delete
  • Alter
  • Describe
  • ClusterAction
  • DescribeConfigs
  • AlterConfigs
  • IdempotentWrite
  • CreateTokens
  • DescribeTokens
  • All

Resources in Kafka

The operations above can be applied on certain resources which are described below.

  • Topic: this simply represents a Topic. All protocol calls that are acting on topics (such as reading, writing them) require the corresponding privilege to be added. If there is an authorization error with a topic resource, then a TOPIC_AUTHORIZATION_FAILED (error code: 29) will be returned.
  • Group: this represents the consumer groups in the brokers. All protocol calls that are working with consumer groups, like joining a group must have privileges with the group in subject. If the privilege is not given then a GROUP_AUTHORIZATION_FAILED (error code: 30) will be returned in the protocol response.
  • Cluster: this resource represents the cluster. Operations that are affecting the whole cluster, like controlled shutdown are protected by privileges on the Cluster resource. If there is an authorization problem on a cluster resource, then a CLUSTER_AUTHORIZATION_FAILED (error code: 31) will be returned.
  • TransactionalId: this resource represents actions related to transactions, such as committing. If any error occurs, then a TRANSACTIONAL_ID_AUTHORIZATION_FAILED (error code: 53) will be returned by brokers.
  • DelegationToken: this represents the delegation tokens in the cluster. Actions, such as describing delegation tokens could be protected by a privilege on the DelegationToken resource. Since these objects have a little special behavior in Kafka it is recommended to read KIP-48 and the related upstream documentation at Authentication using Delegation Tokens.
  • User: CreateToken and DescribeToken operations can be granted to User resources to allow creating and describing tokens for other users. More info can be found in KIP-373.

Operations and Resources on Protocols

In the below table we’ll list the valid operations on resources that are executed by the Kafka API protocols.

| Protocol (API key) | Operation | Resource | Note |
| --- | --- | --- | --- |
| PRODUCE (0) | Write | TransactionalId | A transactional producer which has its transactional.id set requires this privilege. |
| PRODUCE (0) | IdempotentWrite | Cluster | An idempotent produce action requires this privilege. |
| PRODUCE (0) | Write | Topic | This applies to a normal produce action. |
| FETCH (1) | ClusterAction | Cluster | A follower must have ClusterAction on the Cluster resource in order to fetch partition data. |
| FETCH (1) | Read | Topic | Regular Kafka consumers need READ permission on each partition they are fetching. |
| LIST_OFFSETS (2) | Describe | Topic | |
| METADATA (3) | Describe | Topic | |
| METADATA (3) | Create | Cluster | If topic auto-creation is enabled, then the broker-side API will check for the existence of a Cluster level privilege. If it’s found then it’ll allow creating the topic, otherwise it’ll iterate through the Topic level privileges (see the next one). |
| METADATA (3) | Create | Topic | This authorizes auto topic creation if enabled but the given user doesn’t have a cluster level permission (above). |
| LEADER_AND_ISR (4) | ClusterAction | Cluster | |
| STOP_REPLICA (5) | ClusterAction | Cluster | |
| UPDATE_METADATA (6) | ClusterAction | Cluster | |
| CONTROLLED_SHUTDOWN (7) | ClusterAction | Cluster | |
| OFFSET_COMMIT (8) | Read | Group | An offset can only be committed if it’s authorized to the given group and the topic too (see below). Group access is checked first, then Topic access. |
| OFFSET_COMMIT (8) | Read | Topic | Since offset commit is part of the consuming process, it needs privileges for the read action. |
| OFFSET_FETCH (9) | Describe | Group | Similarly to OFFSET_COMMIT, the application must have privileges on group and topic level too to be able to fetch. However in this case it requires describe access instead of read. Group access is checked first, then Topic access. |
| OFFSET_FETCH (9) | Describe | Topic | |
| FIND_COORDINATOR (10) | Describe | Group | The FIND_COORDINATOR request can be of “Group” type in which case it is looking for consumer group coordinators. This privilege would represent the Group mode. |
| FIND_COORDINATOR (10) | Describe | TransactionalId | This applies only on transactional producers and checked when a producer tries to find the transaction coordinator. |
| JOIN_GROUP (11) | Read | Group | |
| HEARTBEAT (12) | Read | Group | |
| LEAVE_GROUP (13) | Read | Group | |
| SYNC_GROUP (14) | Read | Group | |
| DESCRIBE_GROUPS (15) | Describe | Group | |
| LIST_GROUPS (16) | Describe | Cluster | When the broker checks to authorize a list_groups request it first checks for this cluster level authorization. If none found then it proceeds to check the groups individually. This operation doesn’t return CLUSTER_AUTHORIZATION_FAILED. |
| LIST_GROUPS (16) | Describe | Group | If none of the groups are authorized, then just an empty response will be sent back instead of an error. This operation doesn’t return CLUSTER_AUTHORIZATION_FAILED. This is applicable from the 2.1 release. |
| SASL_HANDSHAKE (17) | | | The SASL handshake is part of the authentication process and therefore it’s not possible to apply any kind of authorization here. |
| API_VERSIONS (18) | | | The API_VERSIONS request is part of the Kafka protocol handshake and happens on connection and before any authentication. Therefore it’s not possible to control this with authorization. |
| CREATE_TOPICS (19) | Create | Cluster | If there is no cluster level authorization then it won’t return CLUSTER_AUTHORIZATION_FAILED but fall back to use topic level, which is just below. That’ll throw error if there is a problem. |
| CREATE_TOPICS (19) | Create | Topic | This is applicable from the 2.0 release. |
| DELETE_TOPICS (20) | Delete | Topic | |
| DELETE_RECORDS (21) | Delete | Topic | |
| INIT_PRODUCER_ID (22) | Write | TransactionalId | |
| INIT_PRODUCER_ID (22) | IdempotentWrite | Cluster | |
| OFFSET_FOR_LEADER_EPOCH (23) | ClusterAction | Cluster | If there is no cluster level privilege for this operation, then it’ll check for topic level one. |
| OFFSET_FOR_LEADER_EPOCH (23) | Describe | Topic | This is applicable from the 2.1 release. |
| ADD_PARTITIONS_TO_TXN (24) | Write | TransactionalId | This API is only applicable to transactional requests. It first checks for the Write action on the TransactionalId resource, then it checks the Topic in subject (below). |
| ADD_PARTITIONS_TO_TXN (24) | Write | Topic | |
| ADD_OFFSETS_TO_TXN (25) | Write | TransactionalId | Similarly to ADD_PARTITIONS_TO_TXN this is only applicable to transactional request. It first checks for Write action on the TransactionalId resource, then it checks whether it can Read on the given group (below). |
| ADD_OFFSETS_TO_TXN (25) | Read | Group | |
| END_TXN (26) | Write | TransactionalId | |
| WRITE_TXN_MARKERS (27) | Alter | Cluster | |
| WRITE_TXN_MARKERS (27) | ClusterAction | Cluster | |
| TXN_OFFSET_COMMIT (28) | Write | TransactionalId | |
| TXN_OFFSET_COMMIT (28) | Read | Group | |
| TXN_OFFSET_COMMIT (28) | Read | Topic | |
| DESCRIBE_ACLS (29) | Describe | Cluster | |
| CREATE_ACLS (30) | Alter | Cluster | |
| DELETE_ACLS (31) | Alter | Cluster | |
| DESCRIBE_CONFIGS (32) | DescribeConfigs | Cluster | If broker configs are requested, then the broker will check cluster level privileges. |
| DESCRIBE_CONFIGS (32) | DescribeConfigs | Topic | If topic configs are requested, then the broker will check topic level privileges. |
| ALTER_CONFIGS (33) | AlterConfigs | Cluster | If broker configs are altered, then the broker will check cluster level privileges. |
| ALTER_CONFIGS (33) | AlterConfigs | Topic | If topic configs are altered, then the broker will check topic level privileges. |
| ALTER_REPLICA_LOG_DIRS (34) | Alter | Cluster | |
| DESCRIBE_LOG_DIRS (35) | Describe | Cluster | An empty response will be returned on authorization failure. |
| SASL_AUTHENTICATE (36) | | | SASL_AUTHENTICATE is part of the authentication process and therefore it’s not possible to apply any kind of authorization here. |
| CREATE_PARTITIONS (37) | Alter | Topic | |
| CREATE_DELEGATION_TOKEN (38) | | | Creating delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section. |
| CREATE_DELEGATION_TOKEN (38) | CreateTokens | User | Allows creating delegation tokens for the User resource. |
| RENEW_DELEGATION_TOKEN (39) | | | Renewing delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section. |
| EXPIRE_DELEGATION_TOKEN (40) | | | Expiring delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section. |
| DESCRIBE_DELEGATION_TOKEN (41) | Describe | DelegationToken | Describing delegation tokens has special rules, for this please see the Authentication using Delegation Tokens section. |
| DESCRIBE_DELEGATION_TOKEN (41) | DescribeTokens | User | Allows describing delegation tokens of the User resource. |
| DELETE_GROUPS (42) | Delete | Group | |
| ELECT_PREFERRED_LEADERS (43) | ClusterAction | Cluster | |
| INCREMENTAL_ALTER_CONFIGS (44) | AlterConfigs | Cluster | If broker configs are altered, then the broker will check cluster level privileges. |
| INCREMENTAL_ALTER_CONFIGS (44) | AlterConfigs | Topic | If topic configs are altered, then the broker will check topic level privileges. |
| ALTER_PARTITION_REASSIGNMENTS (45) | Alter | Cluster | |
| LIST_PARTITION_REASSIGNMENTS (46) | Describe | Cluster | |
| OFFSET_DELETE (47) | Delete | Group | |
| OFFSET_DELETE (47) | Read | Topic | |
| DESCRIBE_CLIENT_QUOTAS (48) | DescribeConfigs | Cluster | |
| ALTER_CLIENT_QUOTAS (49) | AlterConfigs | Cluster | |
| DESCRIBE_USER_SCRAM_CREDENTIALS (50) | Describe | Cluster | |
| ALTER_USER_SCRAM_CREDENTIALS (51) | Alter | Cluster | |
| VOTE (52) | ClusterAction | Cluster | |
| BEGIN_QUORUM_EPOCH (53) | ClusterAction | Cluster | |
| END_QUORUM_EPOCH (54) | ClusterAction | Cluster | |
| DESCRIBE_QUORUM (55) | Describe | Cluster | |
| ALTER_PARTITION (56) | ClusterAction | Cluster | |
| UPDATE_FEATURES (57) | Alter | Cluster | |
| ENVELOPE (58) | ClusterAction | Cluster | |
| FETCH_SNAPSHOT (59) | ClusterAction | Cluster | |
| DESCRIBE_CLUSTER (60) | Describe | Cluster | |
| DESCRIBE_PRODUCERS (61) | Read | Topic | |
| BROKER_REGISTRATION (62) | ClusterAction | Cluster | |
| BROKER_HEARTBEAT (63) | ClusterAction | Cluster | |
| UNREGISTER_BROKER (64) | Alter | Cluster | |
| DESCRIBE_TRANSACTIONS (65) | Describe | TransactionalId | |
| LIST_TRANSACTIONS (66) | Describe | TransactionalId | |
| ALLOCATE_PRODUCER_IDS (67) | ClusterAction | Cluster | |
| CONSUMER_GROUP_HEARTBEAT (68) | Read | Group | |
| CONSUMER_GROUP_DESCRIBE (69) | Read | Group | |
| CONTROLLER_REGISTRATION (70) | ClusterAction | Cluster | |
| GET_TELEMETRY_SUBSCRIPTIONS (71) | | | No authorization check is performed for this request. |
| PUSH_TELEMETRY (72) | | | No authorization check is performed for this request. |
| ASSIGN_REPLICAS_TO_DIRS (73) | ClusterAction | Cluster | |
| LIST_CLIENT_METRICS_RESOURCES (74) | DescribeConfigs | Cluster | |
| DESCRIBE_TOPIC_PARTITIONS (75) | Describe | Topic | |
| SHARE_GROUP_HEARTBEAT (76) | Read | Group | |
| SHARE_GROUP_DESCRIBE (77) | Describe | Group | |
| SHARE_FETCH (78) | Read | Group | |
| SHARE_FETCH (78) | Read | Topic | |
| SHARE_ACKNOWLEDGE (79) | Read | Group | |
| SHARE_ACKNOWLEDGE (79) | Read | Topic | |
| INITIALIZE_SHARE_GROUP_STATE (83) | ClusterAction | Cluster | |
| READ_SHARE_GROUP_STATE (84) | ClusterAction | Cluster | |
| WRITE_SHARE_GROUP_STATE (85) | ClusterAction | Cluster | |
| DELETE_SHARE_GROUP_STATE (86) | ClusterAction | Cluster | |
| READ_SHARE_GROUP_STATE_SUMMARY (87) | ClusterAction | Cluster | |

7.6 - Incorporating Security Features in a Running Cluster

You can secure a running cluster via one or more of the supported protocols discussed previously. This is done in phases:

  • Incrementally bounce the cluster nodes to open additional secured port(s).
  • Restart clients using the secured rather than PLAINTEXT port (assuming you are securing the client-broker connection).
  • Incrementally bounce the cluster again to enable broker-to-broker security (if this is required)
  • A final incremental bounce to close the PLAINTEXT port.

The specific steps for configuring SSL and SASL are described in sections 7.3 and 7.4. Follow these steps to enable security for your desired protocol(s).

The security implementation lets you configure different protocols for both broker-client and broker-broker communication. These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout so brokers and/or clients can continue to communicate.

When performing an incremental bounce, stop the brokers cleanly via a SIGTERM. It’s also good practice to wait for restarted replicas to return to the ISR list before moving on to the next node.

As an example, say we wish to encrypt both broker-client and broker-broker communication with SSL. In the first incremental bounce, an SSL port is opened on each node:

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092

We then restart the clients, changing their config to point at the newly opened, secured port:

bootstrap.servers = [broker1:9092,...]
security.protocol = SSL
...etc

In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker protocol (which will use the same SSL port):

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
security.inter.broker.protocol=SSL

In the final bounce we secure the cluster by closing the PLAINTEXT port:

listeners=SSL://broker1:9092
security.inter.broker.protocol=SSL

Alternatively, we might choose to open multiple ports so that different protocols can be used for broker-broker and broker-client communication. Say we wished to use SSL encryption throughout (i.e. for broker-broker and broker-client communication), but we’d also like to add SASL authentication to the broker-client connection. We would achieve this by opening two additional ports during the first bounce:

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093

We would then restart the clients, changing their config to point at the newly opened, SASL & SSL secured port:

bootstrap.servers = [broker1:9093,...]
security.protocol = SASL_SSL
...etc

The second server bounce would switch the cluster to use encrypted broker-broker communication via the SSL port we previously opened (port 9092):

listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL

The final bounce secures the cluster by closing the PLAINTEXT port.

listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL

8 - Kafka Connect

8.1 - Overview

Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems. It makes it simple to quickly define connectors that move large collections of data into and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency. An export job can deliver data from Kafka topics into secondary storage and query systems or into batch systems for offline analysis.

Kafka Connect features include:

  • A common framework for Kafka connectors - Kafka Connect standardizes integration of other data systems with Kafka, simplifying connector development, deployment, and management
  • Distributed and standalone modes - scale up to a large, centrally managed service supporting an entire organization or scale down to development, testing, and small production deployments
  • REST interface - submit connectors to and manage them on your Kafka Connect cluster via an easy-to-use REST API
  • Automatic offset management - with just a little information from connectors, Kafka Connect can manage the offset commit process automatically so connector developers do not need to worry about this error-prone part of connector development
  • Distributed and scalable by default - Kafka Connect builds on the existing group management protocol. More workers can be added to scale up a Kafka Connect cluster.
  • Streaming/batch integration - leveraging Kafka’s existing capabilities, Kafka Connect is an ideal solution for bridging streaming and batch data systems

8.2 - User Guide

The quickstart provides a brief example of how to run a standalone version of Kafka Connect. This section describes how to configure, run, and manage Kafka Connect in more detail.

Running Kafka Connect

Kafka Connect currently supports two modes of execution: standalone (single process) and distributed.

In standalone mode all work is performed in a single process. This configuration is simpler to set up and get started with and may be useful in situations where only one worker makes sense (e.g. collecting log files), but it does not benefit from some of the features of Kafka Connect such as fault tolerance. You can start a standalone process with the following command:

$ bin/connect-standalone.sh config/connect-standalone.properties [connector1.properties connector2.json …]

The first parameter is the configuration for the worker. This includes settings such as the Kafka connection parameters, serialization format, and how frequently to commit offsets. The provided example should work well with a local cluster running with the default configuration provided by config/server.properties. It will require tweaking to use with a different configuration or production deployment. All workers (both standalone and distributed) require a few configs:

  • bootstrap.servers - List of Kafka servers used to bootstrap connections to Kafka
  • key.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
  • value.converter - Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
  • plugin.path (default empty) - a list of paths that contain Connect plugins (connectors, converters, transformations). Before running quick starts, users must add the absolute path that contains the example FileStreamSourceConnector and FileStreamSinkConnector packaged in connect-file-"version".jar, because these connectors are not included by default in the CLASSPATH or the plugin.path of the Connect worker (see plugin.path property for examples).

The important configuration options specific to standalone mode are:

  • offset.storage.file.filename - File to store source connector offsets
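
Putting the common worker settings and the standalone-specific setting together, a minimal connect-standalone.properties might look like the following sketch; the converter classes, file locations, and plugin path are illustrative and should be adapted to your environment:

# Kafka cluster used by the worker and its connectors
bootstrap.servers=localhost:9092

# Converters controlling the serialized format of keys and values
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter

# Standalone mode only: where source connector offsets are stored
offset.storage.file.filename=/tmp/connect.offsets

# Location of connector, converter, and transformation plugins
plugin.path=/usr/local/share/kafka/plugins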

The parameters that are configured here are intended for producers and consumers used by Kafka Connect to access the configuration, offset and status topics. For configuration of the producers used by Kafka source tasks and the consumers used by Kafka sink tasks, the same parameters can be used but need to be prefixed with producer. and consumer. respectively. The only Kafka client parameter that is inherited without a prefix from the worker configuration is bootstrap.servers, which in most cases will be sufficient, since the same cluster is often used for all purposes. A notable exception is a secured cluster, which requires extra parameters to allow connections. These parameters will need to be set up to three times in the worker configuration, once for management access, once for Kafka sources and once for Kafka sinks.

Starting with 2.3.0, client configuration overrides can be configured individually per connector by using the prefixes producer.override. and consumer.override. for Kafka sources or Kafka sinks respectively. These overrides are included with the rest of the connector’s configuration properties.
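
For example, a worker might configure the clients used by all of its source and sink tasks for a SASL_SSL-secured cluster, while a single connector raises one producer setting for itself (assuming the worker's connector client config override policy permits it); the property values below are illustrative:

# In the worker configuration: applied to clients used by source and sink tasks
producer.security.protocol=SASL_SSL
consumer.security.protocol=SASL_SSL

# In an individual connector configuration: per-connector client override
producer.override.linger.ms=100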

The remaining parameters are connector configuration files. Each file may either be a Java Properties file or a JSON file containing an object with the same structure as the request body of either the POST /connectors endpoint or the PUT /connectors/{name}/config endpoint (see the OpenAPI documentation). You may include as many as you want, but all will execute within the same process (on different threads). You can also choose not to specify any connector configuration files on the command line, and instead use the REST API to create connectors at runtime after your standalone worker starts.
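
For instance, the JSON form of a connector configuration file mirrors the POST /connectors request body; the example below uses the bundled FileStreamSource connector with the same illustrative values used later in this guide:

{
  "name": "local-file-source",
  "config": {
    "connector.class": "FileStreamSource",
    "tasks.max": "1",
    "file": "test.txt",
    "topic": "connect-test"
  }
}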

Distributed mode handles automatic balancing of work, allows you to scale up (or down) dynamically, and offers fault tolerance both in the active tasks and for configuration and offset commit data. Execution is very similar to standalone mode:

$ bin/connect-distributed.sh config/connect-distributed.properties

The difference is in the class which is started and the configuration parameters, which change how the Kafka Connect process decides where to store configurations, how to assign work, and where to store offsets and task statuses. In distributed mode, Kafka Connect stores the offsets, configs and task statuses in Kafka topics. It is recommended to manually create the topics for offsets, configs and statuses in order to achieve the desired number of partitions and replication factor (see the example below). If the topics are not yet created when starting Kafka Connect, they will be auto-created with the default number of partitions and replication factor, which may not be best suited for their usage.

In particular, the following configuration parameters, in addition to the common settings mentioned above, are critical to set before starting your cluster:

  • group.id (default connect-cluster) - unique name for the cluster, used in forming the Connect cluster group; note that this must not conflict with consumer group IDs
  • config.storage.topic (default connect-configs) - topic to use for storing connector and task configurations; note that this should be a single-partition, highly replicated, compacted topic. You may need to manually create the topic to ensure the correct configuration, as auto-created topics may have multiple partitions or be automatically configured for deletion rather than compaction
  • offset.storage.topic (default connect-offsets) - topic to use for storing offsets; this topic should have many partitions, be replicated, and be configured for compaction
  • status.storage.topic (default connect-status) - topic to use for storing statuses; this topic can have multiple partitions, and should be replicated and configured for compaction
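
If you create these topics manually, as recommended above, the standard kafka-topics.sh tool can be used; the partition counts and replication factor below are only an illustration and should be sized for your environment:

$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic connect-configs --partitions 1 --replication-factor 3 --config cleanup.policy=compact
$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic connect-offsets --partitions 25 --replication-factor 3 --config cleanup.policy=compact
$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic connect-status --partitions 5 --replication-factor 3 --config cleanup.policy=compact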

Note that in distributed mode the connector configurations are not passed on the command line. Instead, use the REST API described below to create, modify, and destroy connectors.

Configuring Connectors

Connector configurations are simple key-value mappings. In both standalone and distributed mode, they are included in the JSON payload for the REST request that creates (or modifies) the connector. In standalone mode these can also be defined in a properties file and passed to the Connect process on the command line.

Most configurations are connector dependent, so they can’t be outlined here. However, there are a few common options:

  • name - Unique name for the connector. Attempting to register again with the same name will fail.
  • connector.class - The Java class for the connector
  • tasks.max - The maximum number of tasks that should be created for this connector. The connector may create fewer tasks if it cannot achieve this level of parallelism.
  • key.converter - (optional) Override the default key converter set by the worker.
  • value.converter - (optional) Override the default value converter set by the worker.

The connector.class config supports several formats: the full name or alias of the class for this connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name or use FileStreamSink or FileStreamSinkConnector to make the configuration a bit shorter.

Sink connectors also have a few additional options to control their input. Each sink connector must set one of the following:

  • topics - A comma-separated list of topics to use as input for this connector
  • topics.regex - A Java regular expression of topics to use as input for this connector

For any other options, you should consult the documentation for the connector.
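
As an illustration, a minimal sink connector configuration using the bundled FileStreamSink connector might look like the following; the connector name, file name, and topic are examples:

name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test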

Transformations

Connectors can be configured with transformations to make lightweight message-at-a-time modifications. They can be convenient for data massaging and event routing.

A transformation chain can be specified in the connector configuration.

  • transforms - List of aliases for the transformation, specifying the order in which the transformations will be applied.
  • transforms.$alias.type - Fully qualified class name for the transformation.
  • transforms.$alias.$transformationSpecificConfig - Configuration properties for the transformation

For example, let’s take the built-in file source connector and use a transformation to add a static field.

Throughout the example we’ll use the schemaless JSON data format. To use the schemaless format, we change the following two lines in connect-standalone.properties from true to false:

key.converter.schemas.enable=false
value.converter.schemas.enable=false

The file source connector reads each line as a String. We will wrap each line in a Map and then add a second field to identify the origin of the event. To do this, we use two transformations:

  • HoistField to place the input line inside a Map
  • InsertField to add the static field. In this example we’ll indicate that the record came from a file connector

After adding the transformations, the connect-file-source.properties file looks as follows:

name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test
transforms=MakeMap, InsertSource
transforms.MakeMap.type=org.apache.kafka.connect.transforms.HoistField$Value
transforms.MakeMap.field=line
transforms.InsertSource.type=org.apache.kafka.connect.transforms.InsertField$Value
transforms.InsertSource.static.field=data_source
transforms.InsertSource.static.value=test-file-source

All the lines starting with transforms were added for the transformations. You can see the two transformations we created: “MakeMap” and “InsertSource” are aliases that we chose to give the transformations. The transformation types are based on the list of built-in transformations you can see below. Each transformation type has additional configuration: HoistField requires a configuration called “field”, which is the name of the field in the map that will contain the original String from the file. The InsertField transformation lets us specify the field name and the value that we are adding.

When we ran the file source connector on a sample file without the transformations and then read the records using kafka-console-consumer.sh, the results were:

"foo"
"bar"
"hello world"

We then created a new file connector, this time adding the transformations to the configuration file. This time, the results were:

{"line":"foo","data_source":"test-file-source"}
{"line":"bar","data_source":"test-file-source"}
{"line":"hello world","data_source":"test-file-source"}

You can see that the lines we’ve read are now part of a JSON map, and there is an extra field with the static value we specified. This is just one example of what you can do with transformations.

Included transformations

Several widely-applicable data and routing transformations are included with Kafka Connect:

  • Cast - Cast fields or the entire key or value to a specific type
  • DropHeaders - Remove headers by name
  • ExtractField - Extract a specific field from Struct and Map and include only this field in results
  • Filter - Removes messages from all further processing. This is used with a predicate to selectively filter certain messages
  • Flatten - Flatten a nested data structure
  • HeaderFrom - Copy or move fields in the key or value to the record headers
  • HoistField - Wrap the entire event as a single field inside a Struct or a Map
  • InsertField - Add a field using either static data or record metadata
  • InsertHeader - Add a header using static data
  • MaskField - Replace field with valid null value for the type (0, empty string, etc) or custom replacement (non-empty string or numeric value only)
  • RegexRouter - Modify the topic of a record based on original topic, replacement string and a regular expression
  • ReplaceField - Filter or rename fields
  • SetSchemaMetadata - Modify the schema name or version
  • TimestampConverter - Convert timestamps between different formats
  • TimestampRouter - Modify the topic of a record based on original topic and timestamp. Useful when using a sink that needs to write to different tables or indexes based on timestamps
  • ValueToKey - Replace the record key with a new key formed from a subset of fields in the record value

Details on how to configure each transformation are listed below:

org.apache.kafka.connect.transforms.Cast
Cast fields or the entire key or value to a specific type, e.g. to force an integer field to a smaller width. Cast from integers, floats, boolean and string to any other type, and cast binary to string (base64 encoded).

Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.Cast$Key) or value (org.apache.kafka.connect.transforms.Cast$Value).

  • spec

    List of fields and the type to cast them to of the form field1:type,field2:type to cast fields of Maps or Structs. A single type to cast the entire value. Valid types are int8, int16, int32, int64, float32, float64, boolean, and string. Note that binary fields can only be cast to string.

    Type:list
    Default:
    Valid Values:list of colon-delimited pairs, e.g. foo:bar,abc:xyz
    Importance:high
  • replace.null.with.default

    Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
org.apache.kafka.connect.transforms.DropHeaders
Removes one or more headers from each record.

  • headers

    The name of the headers to be removed.

    Type:list
    Default:
    Valid Values:non-empty list
    Importance:high
org.apache.kafka.connect.transforms.ExtractField
Extract the specified field from a Struct when schema present, or a Map in the case of schemaless data. Any null values are passed through unmodified.

Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.ExtractField$Key) or value (org.apache.kafka.connect.transforms.ExtractField$Value).

  • field

    Field name to extract.

    Type:string
    Default:
    Valid Values:
    Importance:medium
  • field.syntax.version

    Defines the version of the syntax to access fields. If set to `V1`, then the field paths are limited to access the elements at the root level of the struct or map. If set to `V2`, the syntax will support accessing nested elements. To access nested elements, dotted notation is used. If dots are already included in the field name, then backtick pairs can be used to wrap field names containing dots. E.g. to access the subfield `baz` from a field named "foo.bar" in a struct/map the following format can be used to access its elements: "`foo.bar`.baz".

    Type:string
    Default:V1
    Valid Values:(case insensitive) [V1, V2]
    Importance:high
  • replace.null.with.default

    Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

    Type:boolean
    Default:true
    Valid Values:
    Importance:medium
org.apache.kafka.connect.transforms.Filter
Drops all records, filtering them from subsequent transformations in the chain. This is intended to be used conditionally to filter out records matching (or not matching) a particular Predicate.

    org.apache.kafka.connect.transforms.Flatten
    Flatten a nested data structure, generating names for each field by concatenating the field names at each level with a configurable delimiter character. Applies to Struct when schema present, or a Map in the case of schemaless data. Array fields and their contents are not modified. The default delimiter is '.'.

    Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.Flatten$Key) or value (org.apache.kafka.connect.transforms.Flatten$Value).

    • delimiter

      Delimiter to insert between field names from the input record when generating field names for the output record

      Type:string
      Default:.
      Valid Values:
      Importance:medium
    org.apache.kafka.connect.transforms.HeaderFrom
    Moves or copies fields in the key/value of a record into that record's headers. Corresponding elements of fields and headers together identify a field and the header it should be moved or copied to. Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.HeaderFrom$Key) or value (org.apache.kafka.connect.transforms.HeaderFrom$Value).

    • fields

      Field names in the record whose values are to be copied or moved to headers.

      Type:list
      Default:
      Valid Values:non-empty list
      Importance:high
    • headers

      Header names, in the same order as the field names listed in the fields configuration property.

      Type:list
      Default:
      Valid Values:non-empty list
      Importance:high
    • operation

      Either move if the fields are to be moved to the headers (removed from the key/value), or copy if the fields are to be copied to the headers (retained in the key/value).

      Type:string
      Default:
      Valid Values:[move, copy]
      Importance:high
    • replace.null.with.default

      Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
    org.apache.kafka.connect.transforms.HoistField
    Wrap data using the specified field name in a Struct when schema present, or a Map in the case of schemaless data.

    Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.HoistField$Key) or value (org.apache.kafka.connect.transforms.HoistField$Value).

    • field

      Field name for the single field that will be created in the resulting Struct or Map.

      Type:string
      Default:
      Valid Values:
      Importance:medium
    org.apache.kafka.connect.transforms.InsertField
    Insert field(s) using attributes from the record metadata or a configured static value.

    Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.InsertField$Key) or value (org.apache.kafka.connect.transforms.InsertField$Value).

    • offset.field

      Field name for Kafka offset - only applicable to sink connectors.
      Suffix with ! to make this a required field, or ? to keep it optional (the default).

      Type:string
      Default:null
      Valid Values:
      Importance:medium
    • partition.field

      Field name for Kafka partition. Suffix with ! to make this a required field, or ? to keep it optional (the default).

      Type:string
      Default:null
      Valid Values:
      Importance:medium
    • replace.null.with.default

      Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
    • static.field

      Field name for static data field. Suffix with ! to make this a required field, or ? to keep it optional (the default).

      Type:string
      Default:null
      Valid Values:
      Importance:medium
    • static.value

      Static field value, if field name configured.

      Type:string
      Default:null
      Valid Values:
      Importance:medium
    • timestamp.field

      Field name for record timestamp. Suffix with ! to make this a required field, or ? to keep it optional (the default).

      Type:string
      Default:null
      Valid Values:
      Importance:medium
    • topic.field

      Field name for Kafka topic. Suffix with ! to make this a required field, or ? to keep it optional (the default).

      Type:string
      Default:null
      Valid Values:
      Importance:medium
    org.apache.kafka.connect.transforms.InsertHeader
    Add a header to each record.

    • header

      The name of the header.

      Type:string
      Default:
      Valid Values:non-null string
      Importance:high
    • value.literal

      The literal value that is to be set as the header value on all records.

      Type:string
      Default:
      Valid Values:non-null string
      Importance:high
    org.apache.kafka.connect.transforms.MaskField
    Mask specified fields with a valid null value for the field type (i.e. 0, false, empty string, and so on).

    For numeric and string fields, an optional replacement value can be specified that is converted to the correct type.

    Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.MaskField$Key) or value (org.apache.kafka.connect.transforms.MaskField$Value).

    • fields

      Names of fields to mask.

      Type:list
      Default:
      Valid Values:non-empty list
      Importance:high
    • replace.null.with.default

      Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
    • replacement

      Custom value replacement, that will be applied to all 'fields' values (numeric or non-empty string values only).

      Type:string
      Default:null
      Valid Values:non-empty string
      Importance:low
    org.apache.kafka.connect.transforms.RegexRouter
    Update the record topic using the configured regular expression and replacement string.

    Under the hood, the regex is compiled to a java.util.regex.Pattern. If the pattern matches the input topic, java.util.regex.Matcher#replaceFirst() is used with the replacement string to obtain the new topic.

    • regex

      Regular expression to use for matching.

      Type:string
      Default:
      Valid Values:valid regex
      Importance:high
    • replacement

      Replacement string.

      Type:string
      Default:
      Valid Values:
      Importance:high
    org.apache.kafka.connect.transforms.ReplaceField
    Filter or rename fields.

    Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.ReplaceField$Key) or value (org.apache.kafka.connect.transforms.ReplaceField$Value).

    • exclude

      Fields to exclude. This takes precedence over the fields to include.

      Type:list
      Default:""
      Valid Values:
      Importance:medium
    • include

      Fields to include. If specified, only these fields will be used.

      Type:list
      Default:""
      Valid Values:
      Importance:medium
    • renames

      Field rename mappings.

      Type:list
      Default:""
      Valid Values:list of colon-delimited pairs, e.g. foo:bar,abc:xyz
      Importance:medium
    • replace.null.with.default

      Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
    org.apache.kafka.connect.transforms.SetSchemaMetadata
    Set the schema name, version or both on the record's key (org.apache.kafka.connect.transforms.SetSchemaMetadata$Key) or value (org.apache.kafka.connect.transforms.SetSchemaMetadata$Value) schema.

    • schema.name

      Schema name to set.

      Type:string
      Default:null
      Valid Values:
      Importance:high
    • schema.version

      Schema version to set.

      Type:int
      Default:null
      Valid Values:
      Importance:high
    • replace.null.with.default

      Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
    org.apache.kafka.connect.transforms.TimestampConverter
    Convert timestamps between different formats such as Unix epoch, strings, and Connect Date/Timestamp types. Applies to individual fields or to the entire value.

    Use the concrete transformation type designed for the record key (org.apache.kafka.connect.transforms.TimestampConverter$Key) or value (org.apache.kafka.connect.transforms.TimestampConverter$Value).

    • target.type

      The desired timestamp representation: string, unix, Date, Time, or Timestamp

      Type:string
      Default:
      Valid Values:[string, unix, Date, Time, Timestamp]
      Importance:high
    • field

      The field containing the timestamp, or empty if the entire value is a timestamp

      Type:string
      Default:""
      Valid Values:
      Importance:high
    • format

      A SimpleDateFormat-compatible format for the timestamp. Used to generate the output when type=string or used to parse the input if the input is a string.

      Type:string
      Default:""
      Valid Values:
      Importance:medium
    • replace.null.with.default

      Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium
    • unix.precision

      The desired Unix precision for the timestamp: seconds, milliseconds, microseconds, or nanoseconds. Used to generate the output when type=unix or used to parse the input if the input is a Long. Note: This SMT will cause precision loss during conversions from, and to, values with sub-millisecond components.

      Type:string
      Default:milliseconds
      Valid Values:[nanoseconds, microseconds, milliseconds, seconds]
      Importance:low
    org.apache.kafka.connect.transforms.TimestampRouter
    Update the record's topic field as a function of the original topic value and the record timestamp.

    This is mainly useful for sink connectors, since the topic field is often used to determine the equivalent entity name in the destination system (e.g. database table or search index name).

    • timestamp.format

      Format string for the timestamp that is compatible with java.text.SimpleDateFormat.

      Type:string
      Default:yyyyMMdd
      Valid Values:
      Importance:high
    • topic.format

      Format string which can contain ${topic} and ${timestamp} as placeholders for the topic and timestamp, respectively.

      Type:string
      Default:${topic}-${timestamp}
      Valid Values:
      Importance:high
    org.apache.kafka.connect.transforms.ValueToKey
    Replace the record key with a new key formed from a subset of fields in the record value.

    • fields

      Field names on the record value to extract as the record key.

      Type:list
      Default:
      Valid Values:non-empty list
      Importance:high
    • replace.null.with.default

      Whether to replace fields that have a default value and are null with that default value. When set to true, the default value is used; otherwise, null is used.

      Type:boolean
      Default:true
      Valid Values:
      Importance:medium

    Predicates

    Transformations can be configured with predicates so that the transformation is applied only to messages which satisfy some condition. In particular, when combined with the Filter transformation predicates can be used to selectively filter out certain messages.

    Predicates are specified in the connector configuration.

    • predicates - Set of aliases for the predicates to be applied to some of the transformations.
    • predicates.$alias.type - Fully qualified class name for the predicate.
    • predicates.$alias.$predicateSpecificConfig - Configuration properties for the predicate.

    All transformations have the implicit config properties predicate and negate. A particular predicate is associated with a transformation by setting the transformation’s predicate config to the predicate’s alias. The predicate’s value can be reversed using the negate configuration property.

    For example, suppose you have a source connector which produces messages to many different topics and you want to:

    • filter out the messages in the ‘foo’ topic entirely
    • apply the ExtractField transformation with the field name ‘other_field’ to records in all topics except the topic ‘bar’

    To do this we need first to filter out the records destined for the topic ‘foo’. The Filter transformation removes records from further processing, and can use the TopicNameMatches predicate to apply the transformation only to records in topics which match a certain regular expression. TopicNameMatches’s only configuration property is pattern which is a Java regular expression for matching against the topic name. The configuration would look like this:

    transforms=Filter
    transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
    transforms.Filter.predicate=IsFoo
    
    predicates=IsFoo
    predicates.IsFoo.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
    predicates.IsFoo.pattern=foo
    

    Next we need to apply ExtractField only when the topic name of the record is not ‘bar’. We can’t just use TopicNameMatches directly, because that would apply the transformation to matching topic names, not topic names which do not match. The transformation’s implicit negate config property allows us to invert the set of records which a predicate matches. Adding the configuration for this to the previous example we arrive at:

    transforms=Filter,Extract
    transforms.Filter.type=org.apache.kafka.connect.transforms.Filter
    transforms.Filter.predicate=IsFoo
    
    transforms.Extract.type=org.apache.kafka.connect.transforms.ExtractField$Key
    transforms.Extract.field=other_field
    transforms.Extract.predicate=IsBar
    transforms.Extract.negate=true
    
    predicates=IsFoo,IsBar
    predicates.IsFoo.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
    predicates.IsFoo.pattern=foo
    
    predicates.IsBar.type=org.apache.kafka.connect.transforms.predicates.TopicNameMatches
    predicates.IsBar.pattern=bar
    

    Kafka Connect includes the following predicates:

    • TopicNameMatches - matches records in a topic with a name matching a particular Java regular expression.
    • HasHeaderKey - matches records which have a header with the given key.
    • RecordIsTombstone - matches tombstone records, that is records with a null value.

    Details on how to configure each predicate are listed below:

    org.apache.kafka.connect.transforms.predicates.HasHeaderKey
    A predicate which is true for records with at least one header with the configured name.

    • name

      The header name.

      Type:string
      Default:
      Valid Values:non-empty string
      Importance:medium
    org.apache.kafka.connect.transforms.predicates.RecordIsTombstone
    A predicate which is true for records which are tombstones (i.e. have null value).

      org.apache.kafka.connect.transforms.predicates.TopicNameMatches
      A predicate which is true for records with a topic name that matches the configured regular expression.

      • pattern

        A Java regular expression for matching against the name of a record's topic.

        Type:string
        Default:
        Valid Values:non-empty string, valid regex
        Importance:medium

      REST API

      Since Kafka Connect is intended to be run as a service, it also provides a REST API for managing connectors. This REST API is available in both standalone and distributed mode. The REST API server can be configured using the listeners configuration option. This field should contain a list of listeners in the following format: protocol://host:port,protocol2://host2:port2. Currently supported protocols are http and https. For example:

      listeners=http://localhost:8080,https://localhost:8443
      

      By default, if no listeners are specified, the REST server runs on port 8083 using the HTTP protocol. When using HTTPS, the configuration has to include the SSL configuration. By default, it will use the ssl.* settings. If the REST API needs a different configuration from the one used for connecting to Kafka brokers, the fields can be prefixed with listeners.https. When using the prefix, only the prefixed options will be used and the ssl.* options without the prefix will be ignored. The following fields can be used to configure HTTPS for the REST API:

      • ssl.keystore.location
      • ssl.keystore.password
      • ssl.keystore.type
      • ssl.key.password
      • ssl.truststore.location
      • ssl.truststore.password
      • ssl.truststore.type
      • ssl.enabled.protocols
      • ssl.provider
      • ssl.protocol
      • ssl.cipher.suites
      • ssl.keymanager.algorithm
      • ssl.secure.random.implementation
      • ssl.trustmanager.algorithm
      • ssl.endpoint.identification.algorithm
      • ssl.client.auth

      The REST API is used not only by users to monitor and manage Kafka Connect. In distributed mode, it is also used for communication between the Kafka Connect workers: some requests received on a follower node’s REST API will be forwarded to the leader node’s REST API. If the URI under which a given host is reachable is different from the URI it listens on, the configuration options rest.advertised.host.name, rest.advertised.port and rest.advertised.listener can be used to change the URI which will be used by the follower nodes to connect with the leader. When using both HTTP and HTTPS listeners, the rest.advertised.listener option can also be used to define which listener will be used for this cross-worker communication. When using HTTPS for communication between nodes, the same ssl.* or listeners.https options will be used to configure the HTTPS client.

      The following are the currently supported REST API endpoints:

      • GET /connectors - return a list of active connectors
      • POST /connectors - create a new connector; the request body should be a JSON object containing a string name field and an object config field with the connector configuration parameters. The JSON object may also optionally contain a string initial_state field which can take the following values - STOPPED, PAUSED or RUNNING (the default value)
      • GET /connectors/{name} - get information about a specific connector
      • GET /connectors/{name}/config - get the configuration parameters for a specific connector
      • PUT /connectors/{name}/config - update the configuration parameters for a specific connector
      • PATCH /connectors/{name}/config - patch the configuration parameters for a specific connector, where null values in the JSON body indicate removal of the key from the final configuration
      • GET /connectors/{name}/status - get current status of the connector, including if it is running, failed, paused, etc., which worker it is assigned to, error information if it has failed, and the state of all its tasks
      • GET /connectors/{name}/tasks - get a list of tasks currently running for a connector along with their configurations
      • GET /connectors/{name}/tasks/{taskid}/status - get current status of the task, including if it is running, failed, paused, etc., which worker it is assigned to, and error information if it has failed
      • PUT /connectors/{name}/pause - pause the connector and its tasks, which stops message processing until the connector is resumed. Any resources claimed by its tasks are left allocated, which allows the connector to begin processing data quickly once it is resumed.
      • PUT /connectors/{name}/stop - stop the connector and shut down its tasks, deallocating any resources claimed by its tasks. This is more efficient from a resource usage standpoint than pausing the connector, but can cause it to take longer to begin processing data once resumed. Note that the offsets for a connector can only be modified via the offsets management endpoints if it is in the stopped state
      • PUT /connectors/{name}/resume - resume a paused or stopped connector (or do nothing if the connector is not paused or stopped)
      • POST /connectors/{name}/restart?includeTasks=<true|false>&onlyFailed=<true|false> - restart a connector and its task instances.
        • the “includeTasks” parameter specifies whether to restart the connector instance and task instances (“includeTasks=true”) or just the connector instance (“includeTasks=false”), with the default (“false”) preserving the same behavior as earlier versions.
        • the “onlyFailed” parameter specifies whether to restart just the instances with a FAILED status (“onlyFailed=true”) or all instances (“onlyFailed=false”), with the default (“false”) preserving the same behavior as earlier versions.
      • POST /connectors/{name}/tasks/{taskId}/restart - restart an individual task (typically because it has failed)
      • DELETE /connectors/{name} - delete a connector, halting all tasks and deleting its configuration
      • GET /connectors/{name}/topics - get the set of topics that a specific connector is using since the connector was created or since a request to reset its set of active topics was issued
      • PUT /connectors/{name}/topics/reset - send a request to empty the set of active topics of a connector
      • Offsets management endpoints (see KIP-875 for more details):
        • GET /connectors/{name}/offsets - get the current offsets for a connector

        • DELETE /connectors/{name}/offsets - reset the offsets for a connector. The connector must exist and must be in the stopped state (see PUT /connectors/{name}/stop)

        • PATCH /connectors/{name}/offsets - alter the offsets for a connector. The connector must exist and must be in the stopped state (see PUT /connectors/{name}/stop). The request body should be a JSON object containing a JSON array offsets field, similar to the response body of the GET /connectors/{name}/offsets endpoint. An example request body for the FileStreamSourceConnector:

          {
            "offsets": [
              {
                "partition": {
                  "filename": "test.txt"
                },
                "offset": {
                  "position": 30
                }
              }
            ]
          }

      An example request body for the FileStreamSinkConnector:

          {
            "offsets": [
              {
                "partition": {
                  "kafka_topic": "test",
                  "kafka_partition": 0
                },
                "offset": {
                  "kafka_offset": 5
                }
              },
              {
                "partition": {
                  "kafka_topic": "test",
                  "kafka_partition": 1
                },
                "offset": null
              }
            ]
          }
      

      The “offset” field may be null to reset the offset for a specific partition (applicable to both source and sink connectors). Note that the request body format depends on the connector implementation in the case of source connectors, whereas there is a common format across all sink connectors.
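
      As a quick example of using the REST API, a new connector can be created by POSTing its configuration to a worker; the connector name, settings, and worker address below are illustrative and reuse the FileStreamSource example from earlier in this guide:

      $ curl -X POST -H "Content-Type: application/json" \
          http://localhost:8083/connectors \
          --data '{"name": "local-file-source", "config": {"connector.class": "FileStreamSource", "tasks.max": "1", "file": "test.txt", "topic": "connect-test"}}'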

      Kafka Connect also provides a REST API for getting information about connector plugins:

      • GET /connector-plugins - return a list of connector plugins installed in the Kafka Connect cluster. Note that the API only checks for connectors on the worker that handles the request, which means you may see inconsistent results, especially during a rolling upgrade if you add new connector jars
      • GET /connector-plugins/{plugin-type}/config - get the configuration definition for the specified plugin.
      • PUT /connector-plugins/{connector-type}/config/validate - validate the provided configuration values against the configuration definition. This API performs per-config validation and returns suggested values and error messages during validation.

      The following is a supported REST request at the top-level (root) endpoint:

      • GET / - return basic information about the Kafka Connect cluster, such as the version of the Connect worker that serves the REST request (including the git commit ID of the source code) and the ID of the Kafka cluster that it is connected to.

      The admin.listeners configuration can be used to configure admin REST APIs on Kafka Connect’s REST API server. Similar to the listeners configuration, this field should contain a list of listeners in the following format: protocol://host:port,protocol2://host2:port2. Currently supported protocols are http and https. For example:

      admin.listeners=http://localhost:8080,https://localhost:8443
      

      By default, if admin.listeners is not configured, the admin REST APIs will be available on the regular listeners.

      The following are the currently supported admin REST API endpoints:

      • GET /admin/loggers - list the current loggers that have their levels explicitly set and their log levels
      • GET /admin/loggers/{name} - get the log level for the specified logger
      • PUT /admin/loggers/{name} - set the log level for the specified logger

      See KIP-495 for more details about the admin logger REST APIs.
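
      For example, the level of a specific logger can be read and changed with requests like the following; the logger name and level are examples, and the request body format follows KIP-495:

      $ curl http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask
      $ curl -X PUT -H "Content-Type: application/json" \
          --data '{"level": "DEBUG"}' \
          http://localhost:8083/admin/loggers/org.apache.kafka.connect.runtime.WorkerSourceTask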

      For the complete specification of the Kafka Connect REST API, see the OpenAPI documentation.

      Error Reporting in Connect

      Kafka Connect provides error reporting to handle errors encountered along various stages of processing. By default, any error encountered during conversion or within transformations will cause the connector to fail. Each connector configuration can also enable tolerating such errors by skipping them, optionally writing each error and the details of the failed operation and problematic record (with various levels of detail) to the Connect application log. These mechanisms also capture errors when a sink connector is processing the messages consumed from its Kafka topics, and all of the errors can be written to a configurable “dead letter queue” (DLQ) Kafka topic.

      To report errors within a connector’s converter, transforms, or within the sink connector itself to the log, set errors.log.enable=true in the connector configuration to log details of each error and problem record’s topic, partition, and offset. For additional debugging purposes, set errors.log.include.messages=true to also log the problem record key, value, and headers to the log (note this may log sensitive information).

      To report errors within a connector’s converter, transforms, or within the sink connector itself to a dead letter queue topic, set errors.deadletterqueue.topic.name, and optionally errors.deadletterqueue.context.headers.enable=true.

      By default connectors exhibit “fail fast” behavior immediately upon an error or exception. This is equivalent to adding the following configuration properties with their defaults to a connector configuration:

      # disable retries on failure
      errors.retry.timeout=0
      
      # do not log the error and their contexts
      errors.log.enable=false
      
      # do not record errors in a dead letter queue topic
      errors.deadletterqueue.topic.name=
      
      # Fail on first error
      errors.tolerance=none
      

      These and other related connector configuration properties can be changed to provide different behavior. For example, the following configuration properties can be added to a connector configuration to set up error handling with multiple retries, logging to the application logs and the my-connector-errors Kafka topic, and tolerating all errors by reporting them rather than failing the connector task:

      # retry for at most 10 minutes, waiting up to 30 seconds between consecutive failures
      errors.retry.timeout=600000
      errors.retry.delay.max.ms=30000
      
      # log error context along with application logs, but do not include configs and messages
      errors.log.enable=true
      errors.log.include.messages=false
      
      # produce error context into the Kafka topic
      errors.deadletterqueue.topic.name=my-connector-errors
      
      # Tolerate all errors.
      errors.tolerance=all
      

      Exactly-once support

      Kafka Connect is capable of providing exactly-once semantics for sink connectors (as of version 0.11.0) and source connectors (as of version 3.3.0). Please note that support for exactly-once semantics is highly dependent on the type of connector you run. Even if you set all the correct worker properties in the configuration for each node in a cluster, if a connector is not designed to, or cannot take advantage of the capabilities of the Kafka Connect framework, exactly-once may not be possible.

      Sink connectors

      If a sink connector supports exactly-once semantics, to enable exactly-once at the Connect worker level, you must ensure its consumer group is configured to ignore records in aborted transactions. You can do this by setting the worker property consumer.isolation.level to read_committed or, if running a version of Kafka Connect that supports it, using a connector client config override policy that allows the consumer.override.isolation.level property to be set to read_committed in individual connector configs. There are no additional ACL requirements.
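
      A sketch of the two places this can be set (worker-wide, or per connector where the worker's override policy allows it); the values are taken directly from the options above:

      # In the worker configuration
      consumer.isolation.level=read_committed

      # Or in an individual connector configuration
      consumer.override.isolation.level=read_committed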

      Source connectors

      If a source connector supports exactly-once semantics, you must configure your Connect cluster to enable framework-level support for exactly-once source connectors. Additional ACLs may be necessary if running against a secured Kafka cluster. Note that exactly-once support for source connectors is currently only available in distributed mode; standalone Connect workers cannot provide exactly-once semantics.

      Worker configuration

      For new Connect clusters, set the exactly.once.source.support property to enabled in the worker config for each node in the cluster. For existing clusters, two rolling upgrades are necessary. During the first upgrade, the exactly.once.source.support property should be set to preparing, and during the second, it should be set to enabled.
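
      For example, a new cluster's worker configuration would simply include the enabled setting, while an existing cluster would roll once with preparing and then once more with enabled:

      # First rolling upgrade of an existing cluster
      exactly.once.source.support=preparing
      # Second rolling upgrade, or new clusters
      exactly.once.source.support=enabled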

      ACL requirements

      With exactly-once source support enabled, or with exactly.once.source.support set to preparing, the principal for each Connect worker will require the following ACLs:

      Operation | Resource Type | Resource Name | Note
      Write | TransactionalId | connect-cluster-${groupId}, where ${groupId} is the group.id of the cluster |
      Describe | TransactionalId | connect-cluster-${groupId}, where ${groupId} is the group.id of the cluster |
      IdempotentWrite | Cluster | ID of the Kafka cluster that hosts the worker’s config topic | The IdempotentWrite ACL has been deprecated as of 2.8 and will only be necessary for Connect clusters running on pre-2.8 Kafka clusters
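
      For instance, assuming the worker runs as the hypothetical principal User:connect-worker and the cluster's group.id is my-connect-cluster, the TransactionalId ACLs above could be granted with the kafka-acls.sh tool:

      $ bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
          --allow-principal User:connect-worker \
          --operation Write --operation Describe \
          --transactional-id connect-cluster-my-connect-cluster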

      And with exactly-once source enabled (but not if exactly.once.source.support is set to preparing), the principal for each individual connector will require the following ACLs:

      Operation | Resource Type | Resource Name | Note
      Write | TransactionalId | ${groupId}-${connector}-${taskId}, for each task that the connector will create, where ${groupId} is the group.id of the Connect cluster, ${connector} is the name of the connector, and ${taskId} is the ID of the task (starting from zero) | A wildcard prefix of ${groupId}-${connector}* can be used for convenience if there is no risk of conflict with other transactional IDs or if conflicts are acceptable to the user.
      Describe | TransactionalId | ${groupId}-${connector}-${taskId}, for each task that the connector will create, where ${groupId} is the group.id of the Connect cluster, ${connector} is the name of the connector, and ${taskId} is the ID of the task (starting from zero) | A wildcard prefix of ${groupId}-${connector}* can be used for convenience if there is no risk of conflict with other transactional IDs or if conflicts are acceptable to the user.
      Write | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector’s configuration if provided, or the value of the offsets.storage.topic property in the worker’s configuration if not. |
      Read | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector’s configuration if provided, or the value of the offsets.storage.topic property in the worker’s configuration if not. |
      Describe | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector’s configuration if provided, or the value of the offsets.storage.topic property in the worker’s configuration if not. |
      Create | Topic | Offsets topic used by the connector, which is either the value of the offsets.storage.topic property in the connector’s configuration if provided, or the value of the offsets.storage.topic property in the worker’s configuration if not. | Only necessary if the offsets topic for the connector does not exist yet
      IdempotentWrite | Cluster | ID of the Kafka cluster that the source connector writes to | The IdempotentWrite ACL has been deprecated as of 2.8 and will only be necessary for Connect clusters running on pre-2.8 Kafka clusters

      Plugin Discovery

      Plugin discovery is the name for the strategy which the Connect worker uses to find plugin classes and make them accessible to configure and run in connectors. This is controlled by the plugin.discovery worker configuration, and has a significant impact on worker startup time. service_load is the fastest strategy, but care should be taken to verify that plugins are compatible before setting this configuration to service_load.

      Prior to version 3.6, this strategy was not configurable, and behaved like the only_scan mode which is compatible with all plugins. For version 3.6 and later, this mode defaults to hybrid_warn which is also compatible with all plugins, but logs a warning for plugins which are incompatible with service_load. The hybrid_fail strategy stops the worker with an error if a plugin incompatible with service_load is detected, asserting that all plugins are compatible. Finally, the service_load strategy disables the slow legacy scanning mechanism used in all other modes, and instead uses the faster ServiceLoader mechanism. Plugins which are incompatible with that mechanism may be unusable.

      Verifying Plugin Compatibility

      To verify if all of your plugins are compatible with service_load, first ensure that you are using version 3.6 or later of Kafka Connect. You can then perform one of the following checks:

      • Start your worker with the default hybrid_warn strategy, and WARN logs enabled for the org.apache.kafka.connect package. At least one WARN log message mentioning the plugin.discovery configuration should be printed. This log message will explicitly say that all plugins are compatible, or list the incompatible plugins.
      • Start your worker in a test environment with hybrid_fail. If all plugins are compatible, startup will succeed. If at least one plugin is not compatible the worker will fail to start up, and all incompatible plugins will be listed in the exception.

      If the verification step succeeds, then your current set of installed plugins is compatible, and it should be safe to change the plugin.discovery configuration to service_load. If the verification fails, you cannot use the service_load strategy and should take note of the list of incompatible plugins. All plugins must be addressed before using the service_load strategy. It is recommended to perform this verification after installing or changing plugin versions, and the verification can be done automatically in a Continuous Integration environment.
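
      Once the verification succeeds, the worker configuration can be updated accordingly:

      plugin.discovery=service_load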

      Operators: Artifact Migration

      As an operator of Connect, if you discover incompatible plugins, there are multiple ways to resolve the incompatibility. They are listed below from most to least preferable.

      1. Check the latest release from your plugin provider, and if it is compatible, upgrade.
      2. Contact your plugin provider and request that they migrate the plugin to be compatible, following the source migration instructions, and then upgrade to the compatible version.
      3. Migrate the plugin artifacts yourself using the included migration script.

      The migration script is located in bin/connect-plugin-path.sh and bin\windows\connect-plugin-path.bat of your Kafka installation. The script can migrate incompatible plugin artifacts already installed on your Connect worker’s plugin.path by adding or modifying JAR or resource files. This is not suitable for environments using code-signing, as this can change artifacts such that they will fail signature verification. View the built-in help with --help.

      To perform a migration, first use the list subcommand to get an overview of the plugins available to the script. You must tell the script where to find plugins, which can be done with the repeatable --worker-config, --plugin-path, and --plugin-location arguments. The script will ignore plugins on the classpath, so any custom plugins on your classpath should be moved to the plugin path in order to be used with this migration script, or migrated manually. Be sure to compare the output of list with the worker startup warning or error message to ensure that all of your affected plugins are found by the script.

      Once you see that all incompatible plugins are included in the listing, you can proceed to dry-run the migration with sync-manifests --dry-run. This will perform all parts of the migration, except for writing the results of the migration to disk. Note that the sync-manifests command requires all specified paths to be writable, and may alter the contents of the directories. Make a backup of your plugins in the specified paths, or copy them to a writable directory.

      Ensure that you have a backup of your plugins and the dry-run succeeds before removing the --dry-run flag and actually running the migration. If the migration fails without the --dry-run flag, then the partially migrated artifacts should be discarded. The migration is idempotent, so running it multiple times and on already-migrated plugins is safe. After the script finishes, you should verify the migration is complete. The migration script is suitable for use in a Continuous Integration environment for automatic migration.

      Developers: Source Migration

      To make plugins compatible with service_load, it is necessary to add ServiceLoader manifests to your source code, which should then be packaged in the release artifact. Manifests are resource files in META-INF/services/ named after their superclass type, and contain a list of fully-qualified subclass names, one on each line.

      In order for a plugin to be compatible, it must appear as a line in a manifest corresponding to the plugin superclass it extends. If a single plugin implements multiple plugin interfaces, then it should appear in a manifest for each interface it implements. If you have no classes for a certain type of plugin, you do not need to include a manifest file for that type. If you have classes which should not be visible as plugins, they should be marked abstract. The following types are expected to have manifests:

      • org.apache.kafka.connect.sink.SinkConnector
      • org.apache.kafka.connect.source.SourceConnector
      • org.apache.kafka.connect.storage.Converter
      • org.apache.kafka.connect.storage.HeaderConverter
      • org.apache.kafka.connect.transforms.Transformation
      • org.apache.kafka.connect.transforms.predicates.Predicate
      • org.apache.kafka.common.config.provider.ConfigProvider
      • org.apache.kafka.connect.rest.ConnectRestExtension
      • org.apache.kafka.connect.connector.policy.ConnectorClientConfigOverridePolicy

      For example, if you only have one connector with the fully-qualified name com.example.MySinkConnector, then only one manifest file must be added to resources in META-INF/services/org.apache.kafka.connect.sink.SinkConnector, and the contents should be similar to the following:

      # license header or comment
      com.example.MySinkConnector
      

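If the same hypothetical package also contained a class com.example.JsonStringConverter implementing both Converter and HeaderConverter, that class would additionally need to appear in two more manifests, one per plugin interface (the class name here is purely illustrative):

META-INF/services/org.apache.kafka.connect.storage.Converter:

# license header or comment
com.example.JsonStringConverter

META-INF/services/org.apache.kafka.connect.storage.HeaderConverter:

# license header or comment
com.example.JsonStringConverter
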
      You should then verify that your manifests are correct by using the verification steps with a pre-release artifact. If the verification succeeds, you can then release the plugin normally, and operators can upgrade to the compatible version.

      8.3 - Connector Development Guide

Connector Development Guide

      This guide describes how developers can write new connectors for Kafka Connect to move data between Kafka and other systems. It briefly reviews a few key concepts and then describes how to create a simple connector.

      Core Concepts and APIs

      Connectors and Tasks

      To copy data between Kafka and another system, users create a Connector for the system they want to pull data from or push data to. Connectors come in two flavors: SourceConnectors import data from another system (e.g. JDBCSourceConnector would import a relational database into Kafka) and SinkConnectors export data (e.g. HDFSSinkConnector would export the contents of a Kafka topic to an HDFS file).

      Connectors do not perform any data copying themselves: their configuration describes the data to be copied, and the Connector is responsible for breaking that job into a set of Tasks that can be distributed to workers. These Tasks also come in two corresponding flavors: SourceTask and SinkTask.

      With an assignment in hand, each Task must copy its subset of the data to or from Kafka. In Kafka Connect, it should always be possible to frame these assignments as a set of input and output streams consisting of records with consistent schemas. Sometimes this mapping is obvious: each file in a set of log files can be considered a stream with each parsed line forming a record using the same schema and offsets stored as byte offsets in the file. In other cases it may require more effort to map to this model: a JDBC connector can map each table to a stream, but the offset is less clear. One possible mapping uses a timestamp column to generate queries incrementally returning new data, and the last queried timestamp can be used as the offset.

      Streams and Records

      Each stream should be a sequence of key-value records. Both the keys and values can have complex structure – many primitive types are provided, but arrays, objects, and nested data structures can be represented as well. The runtime data format does not assume any particular serialization format; this conversion is handled internally by the framework.

      In addition to the key and value, records (both those generated by sources and those delivered to sinks) have associated stream IDs and offsets. These are used by the framework to periodically commit the offsets of data that have been processed so that in the event of failures, processing can resume from the last committed offsets, avoiding unnecessary reprocessing and duplication of events.

      Dynamic Connectors

      Not all jobs are static, so Connector implementations are also responsible for monitoring the external system for any changes that might require reconfiguration. For example, in the JDBCSourceConnector example, the Connector might assign a set of tables to each Task. When a new table is created, it must discover this so it can assign the new table to one of the Tasks by updating its configuration. When it notices a change that requires reconfiguration (or a change in the number of Tasks), it notifies the framework and the framework updates any corresponding Tasks.

      Developing a Simple Connector

Developing a connector only requires implementing two interfaces, the Connector and Task. A simple example is included with the Kafka source code in the org.apache.kafka.connect.file package. This connector is meant for use in standalone mode and has implementations of a SourceConnector/SourceTask that reads each line of a file and emits it as a record, and a SinkConnector/SinkTask that writes each record to a file.

      The rest of this section will walk through some code to demonstrate the key steps in creating a connector, but developers should also refer to the full example source code as many details are omitted for brevity.

      Connector Example

We’ll cover the SourceConnector as a simple example. SinkConnector implementations are very similar. Pick a package and class name; these examples will use FileStreamSourceConnector, but substitute your own class name where appropriate. In order to make the plugin discoverable at runtime, add a ServiceLoader manifest to your resources in META-INF/services/org.apache.kafka.connect.source.SourceConnector with your fully-qualified class name on a single line:

      com.example.FileStreamSourceConnector
      

      Create a class that inherits from SourceConnector and add a field that will store the configuration information to be propagated to the task(s) (the topic to send data to, and optionally - the filename to read from and the maximum batch size):

      package com.example;
      
      public class FileStreamSourceConnector extends SourceConnector {
          private Map<String, String> props;
      

      The easiest method to fill in is taskClass(), which defines the class that should be instantiated in worker processes to actually read the data:

      @Override
      public Class<? extends Task> taskClass() {
          return FileStreamSourceTask.class;
      }
      

      We will define the FileStreamSourceTask class below. Next, we add some standard lifecycle methods, start() and stop():

      @Override
      public void start(Map<String, String> props) {
          // Initialization logic and setting up of resources can take place in this method.
          // This connector doesn't need to do any of that, but we do log a helpful message to the user.
      
          this.props = props;
          AbstractConfig config = new AbstractConfig(CONFIG_DEF, props);
          String filename = config.getString(FILE_CONFIG);
          filename = (filename == null || filename.isEmpty()) ? "standard input" : config.getString(FILE_CONFIG);
          log.info("Starting file source connector reading from {}", filename);
      }
      
      @Override
      public void stop() {
          // Nothing to do since no background monitoring is required.
      }
      

      Finally, the real core of the implementation is in taskConfigs(). In this case we are only handling a single file, so even though we may be permitted to generate more tasks as per the maxTasks argument, we return a list with only one entry:

      @Override
      public List<Map<String, String>> taskConfigs(int maxTasks) {
    // Note that the task configs could contain configs in addition to or different from the connector configs if needed. For instance,
    // if different tasks have different responsibilities, or if different tasks are meant to process different subsets of the source data stream.
          ArrayList<Map<String, String>> configs = new ArrayList<>();
          // Only one input stream makes sense.
          configs.add(props);
          return configs;
      }
      

Even with multiple tasks, this method implementation is usually pretty simple. It just has to determine the number of input streams, which may require contacting the remote service it is pulling data from, and then divvy them up among the tasks. Because some patterns for splitting work among tasks are so common, some utilities are provided in ConnectorUtils to simplify these cases.
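
As a sketch of that pattern, a hypothetical multi-table source connector might use ConnectorUtils.groupPartitions() to divide a discovered list of tables across at most maxTasks task configurations. The table names and the "tables" task config key below are illustrative assumptions, not part of the Connect API:

@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
    // Hypothetical: the connector discovered these tables in start().
    List<String> tables = Arrays.asList("orders", "customers", "shipments");
    int numGroups = Math.min(tables.size(), maxTasks);
    List<Map<String, String>> taskConfigs = new ArrayList<>();
    for (List<String> group : ConnectorUtils.groupPartitions(tables, numGroups)) {
        Map<String, String> taskConfig = new HashMap<>(props);
        // Illustrative task-level key telling each task which tables it owns.
        taskConfig.put("tables", String.join(",", group));
        taskConfigs.add(taskConfig);
    }
    return taskConfigs;
}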

      Note that this simple example does not include dynamic input. See the discussion in the next section for how to trigger updates to task configs.

      Task Example - Source Task

      Next we’ll describe the implementation of the corresponding SourceTask. The implementation is short, but too long to cover completely in this guide. We’ll use pseudo-code to describe most of the implementation, but you can refer to the source code for the full example.

      Just as with the connector, we need to create a class inheriting from the appropriate base Task class. It also has some standard lifecycle methods:

public class FileStreamSourceTask extends SourceTask {
    private String filename;
    private InputStream stream;
    private String topic;
    private int batchSize;

    @Override
    public void start(Map<String, String> props) {
        filename = props.get(FileStreamSourceConnector.FILE_CONFIG);
        stream = openOrThrowError(filename);
        topic = props.get(FileStreamSourceConnector.TOPIC_CONFIG);
        batchSize = Integer.parseInt(props.get(FileStreamSourceConnector.TASK_BATCH_SIZE_CONFIG));
    }

    @Override
    public synchronized void stop() {
        try {
            stream.close();
        } catch (IOException e) {
            // Best-effort close while shutting down; nothing useful can be done with the failure here.
        }
    }
}
      

      These are slightly simplified versions, but show that these methods should be relatively simple and the only work they should perform is allocating or freeing resources. There are two points to note about this implementation. First, the start() method does not yet handle resuming from a previous offset, which will be addressed in a later section. Second, the stop() method is synchronized. This will be necessary because SourceTasks are given a dedicated thread which they can block indefinitely, so they need to be stopped with a call from a different thread in the Worker.

      Next, we implement the main functionality of the task, the poll() method which gets events from the input system and returns a List<SourceRecord>:

      @Override
      public List<SourceRecord> poll() throws InterruptedException {
          try {
              ArrayList<SourceRecord> records = new ArrayList<>();
              while (streamValid(stream) && records.isEmpty()) {
                  LineAndOffset line = readToNextLine(stream);
                  if (line != null) {
                      Map<String, Object> sourcePartition = Collections.singletonMap("filename", filename);
                      Map<String, Object> sourceOffset = Collections.singletonMap("position", streamOffset);
                      records.add(new SourceRecord(sourcePartition, sourceOffset, topic, Schema.STRING_SCHEMA, line));
                      if (records.size() >= batchSize) {
                          return records;
                      }
                  } else {
                      Thread.sleep(1);
                  }
              }
              return records;
          } catch (IOException e) {
        // Underlying stream was killed, probably as a result of calling stop(). Allow poll() to
        // return null; the driving thread will handle any shutdown if necessary.
          }
          return null;
      }
      

      Again, we’ve omitted some details, but we can see the important steps: the poll() method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output SourceRecord with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic name, and output value (the line, and we include a schema indicating this value will always be a string). Other variants of the SourceRecord constructor can also include a specific output partition, a key, and headers.

      Note that this implementation uses the normal Java InputStream interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic poll() interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.

      Although not used in the example, SourceTask also provides two APIs to commit offsets in the source system: commit and commitRecord. The APIs are provided for source systems which have an acknowledgement mechanism for messages. Overriding these methods allows the source connector to acknowledge messages in the source system, either in bulk or individually, once they have been written to Kafka. The commit API stores the offsets in the source system, up to the offsets that have been returned by poll. The implementation of this API should block until the commit is complete. The commitRecord API saves the offset in the source system for each SourceRecord after it is written to Kafka. As Kafka Connect will record offsets automatically, SourceTasks are not required to implement them. In cases where a connector does need to acknowledge messages in the source system, only one of the APIs is typically required.
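
For illustration, here is a sketch of commitRecord() for a hypothetical queue-like source system that requires per-message acknowledgements; the ackClient helper and the "ackId" source offset key are assumptions, not part of the Connect API:

@Override
public void commitRecord(SourceRecord record, RecordMetadata metadata) throws InterruptedException {
    // Called by the framework after this record has been written to Kafka.
    Object ackId = record.sourceOffset().get("ackId");
    if (ackId != null) {
        ackClient.acknowledge(ackId); // hypothetical client for the source system
    }
}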

      Sink Tasks

      The previous section described how to implement a simple SourceTask. Unlike SourceConnector and SinkConnector, SourceTask and SinkTask have very different interfaces because SourceTask uses a pull interface and SinkTask uses a push interface. Both share the common lifecycle methods, but the SinkTask interface is quite different:

      public abstract class SinkTask implements Task {
          public void initialize(SinkTaskContext context) {
              this.context = context;
          }
      
          public abstract void put(Collection<SinkRecord> records);
      
          public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
          }
      }
      

      The SinkTask documentation contains full details, but this interface is nearly as simple as the SourceTask. The put() method should contain most of the implementation, accepting sets of SinkRecords, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The SinkRecords contain essentially the same information as SourceRecords: Kafka topic, partition, offset, the event key and value, and optional headers.

      The flush() method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The offsets parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the flush() operation atomically commits the data and offsets to a final location in HDFS.
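
A minimal sketch of the buffering pattern described above is shown below; writeBufferToDestination() stands in for whatever client the destination system provides and is not part of the Connect API:

private final List<SinkRecord> buffer = new ArrayList<>();

@Override
public void put(Collection<SinkRecord> records) {
    // Accumulate records; the actual write happens in flush() so a whole batch is sent at once.
    buffer.addAll(records);
}

@Override
public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
    // Push all outstanding data and block until the destination has acknowledged it.
    writeBufferToDestination(buffer);
    buffer.clear();
}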

      Errant Record Reporter

      When error reporting is enabled for a connector, the connector can use an ErrantRecordReporter to report problems with individual records sent to a sink connector. The following example shows how a connector’s SinkTask subclass might obtain and use the ErrantRecordReporter, safely handling a null reporter when the DLQ is not enabled or when the connector is installed in an older Connect runtime that doesn’t have this reporter feature:


      private ErrantRecordReporter reporter;
      
      @Override
      public void start(Map<String, String> props) {
          ...
          try {
              reporter = context.errantRecordReporter(); // may be null if DLQ not enabled
    } catch (NoSuchMethodError | NoClassDefFoundError e) {
              // Will occur in Connect runtimes earlier than 2.6
              reporter = null;
          }
      }
      
      @Override
      public void put(Collection<SinkRecord> records) {
          for (SinkRecord record: records) {
              try {
                  // attempt to process and send record to data sink
                  process(record);
              } catch(Exception e) {
                  if (reporter != null) {
                      // Send errant record to error reporter
                      reporter.report(record, e);
                  } else {
                      // There's no error reporter, so fail
                      throw new ConnectException("Failed on record", e);
                  }
              }
          }
      }
      

      Resuming from Previous Offsets

      The SourceTask implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit process is completely automated by the framework, but only the connector knows how to seek back to the right position in the input stream to resume from that location.

To correctly resume upon startup, the task can use the SourceTaskContext passed into its initialize() method to access the offset data. In initialize(), we would add a bit more code to read the offset (if it exists) and seek to that position:

      stream = new FileInputStream(filename);
      Map<String, Object> offset = context.offsetStorageReader().offset(Collections.singletonMap(FILENAME_FIELD, filename));
      if (offset != null) {
          Long lastRecordedOffset = (Long) offset.get("position");
          if (lastRecordedOffset != null)
              seekToOffset(stream, lastRecordedOffset);
      }
      

      Of course, you might need to read many keys for each of the input streams. The OffsetStorageReader interface also allows you to issue bulk reads to efficiently load all offsets, then apply them by seeking each input stream to the appropriate position.
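
For example, here is a sketch of such a bulk read for a hypothetical list of input files, reusing the FILENAME_FIELD and "position" keys from the example above (streamFor() is an illustrative helper):

List<Map<String, String>> partitions = new ArrayList<>();
for (String file : filenames) { // hypothetical list of input files
    partitions.add(Collections.singletonMap(FILENAME_FIELD, file));
}
Map<Map<String, String>, Map<String, Object>> offsets =
        context.offsetStorageReader().offsets(partitions);
for (String file : filenames) {
    Map<String, Object> offset = offsets.get(Collections.singletonMap(FILENAME_FIELD, file));
    if (offset != null && offset.get("position") != null) {
        seekToOffset(streamFor(file), (Long) offset.get("position"));
    }
}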

      Exactly-once source connectors

      Supporting exactly-once

      With the passing of KIP-618, Kafka Connect supports exactly-once source connectors as of version 3.3.0. In order for a source connector to take advantage of this support, it must be able to provide meaningful source offsets for each record that it emits, and resume consumption from the external system at the exact position corresponding to any of those offsets without dropping or duplicating messages.

      Defining transaction boundaries

      By default, the Kafka Connect framework will create and commit a new Kafka transaction for each batch of records that a source task returns from its poll method. However, connectors can also define their own transaction boundaries, which can be enabled by users by setting the transaction.boundary property to connector in the config for the connector.

      If enabled, the connector’s tasks will have access to a TransactionContext from their SourceTaskContext, which they can use to control when transactions are aborted and committed.

      For example, to commit a transaction at least every ten records:

      private int recordsSent;
      
      @Override
      public void start(Map<String, String> props) {
          this.recordsSent = 0;
      }
      
      @Override
      public List<SourceRecord> poll() {
          List<SourceRecord> records = fetchRecords();
          boolean shouldCommit = false;
          for (SourceRecord record : records) {
              if (++this.recordsSent >= 10) {
                  shouldCommit = true;
              }
          }
          if (shouldCommit) {
              this.recordsSent = 0;
              this.context.transactionContext().commitTransaction();
          }
          return records;
      }
      

      Or to commit a transaction for exactly every tenth record:

      private int recordsSent;
      
      @Override
      public void start(Map<String, String> props) {
          this.recordsSent = 0;
      }
      
      @Override
      public List<SourceRecord> poll() {
          List<SourceRecord> records = fetchRecords();
          for (SourceRecord record : records) {
              if (++this.recordsSent % 10 == 0) {
                  this.context.transactionContext().commitTransaction(record);
              }
          }
          return records;
      }
      

Most connectors do not need to define their own transaction boundaries. However, it may be useful if files or objects in the source system are broken up into multiple source records but should be delivered atomically. It may also be useful if it is impossible to give each source record a unique source offset, provided that every record with a given offset is delivered within a single transaction.

      Note that if the user has not enabled connector-defined transaction boundaries in the connector configuration, the TransactionContext returned by context.transactionContext() will be null.
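
Connectors that optionally support this feature can therefore guard their transaction calls with a null check; a minimal sketch:

TransactionContext transactionContext = context.transactionContext();
if (transactionContext != null) {
    // Connector-defined transaction boundaries are enabled for this connector.
    transactionContext.commitTransaction();
}
// Otherwise the framework manages transaction boundaries (or exactly-once support is disabled).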

      Validation APIs

      A few additional preflight validation APIs can be implemented by source connector developers.

      Some users may require exactly-once semantics from a connector. In this case, they may set the exactly.once.support property to required in the configuration for the connector. When this happens, the Kafka Connect framework will ask the connector whether it can provide exactly-once semantics with the specified configuration. This is done by invoking the exactlyOnceSupport method on the connector.

      If a connector doesn’t support exactly-once semantics, it should still implement this method to let users know for certain that it cannot provide exactly-once semantics:

      @Override
      public ExactlyOnceSupport exactlyOnceSupport(Map<String, String> props) {
          // This connector cannot provide exactly-once semantics under any conditions
          return ExactlyOnceSupport.UNSUPPORTED;
      }
      

      Otherwise, a connector should examine the configuration, and return ExactlyOnceSupport.SUPPORTED if it can provide exactly-once semantics:

      @Override
      public ExactlyOnceSupport exactlyOnceSupport(Map<String, String> props) {
          // This connector can always provide exactly-once semantics
          return ExactlyOnceSupport.SUPPORTED;
      }
      

      Additionally, if the user has configured the connector to define its own transaction boundaries, the Kafka Connect framework will ask the connector whether it can define its own transaction boundaries with the specified configuration, using the canDefineTransactionBoundaries method:

      @Override
      public ConnectorTransactionBoundaries canDefineTransactionBoundaries(Map<String, String> props) {
          // This connector can always define its own transaction boundaries
          return ConnectorTransactionBoundaries.SUPPORTED;
      }
      

      This method should only be implemented for connectors that can define their own transaction boundaries in some cases. If a connector is never able to define its own transaction boundaries, it does not need to implement this method.

      Dynamic Input/Output Streams

      Kafka Connect is intended to define bulk data copying jobs, such as copying an entire database rather than creating many jobs to copy each table individually. One consequence of this design is that the set of input or output streams for a connector can vary over time.

      Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the ConnectorContext object that reconfiguration is necessary. For example, in a SourceConnector:

      if (inputsChanged())
          this.context.requestTaskReconfiguration();
      

      The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the SourceConnector this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.
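
A sketch of such a monitoring thread, started from the connector’s start() method, is shown below; listTables() and the one-minute polling interval are illustrative assumptions:

private volatile boolean running = true;
private volatile Set<String> knownTables = Collections.emptySet();

private void startMonitorThread() {
    Thread monitor = new Thread(() -> {
        while (running) {
            Set<String> current = listTables(); // hypothetical: query the external system
            if (!current.equals(knownTables)) {
                knownTables = current;
                // Ask the framework to call taskConfigs() again and update the tasks.
                context.requestTaskReconfiguration();
            }
            try {
                Thread.sleep(60_000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }, "table-monitor");
    monitor.setDaemon(true);
    monitor.start();
}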

      Ideally this code for monitoring changes would be isolated to the Connector and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the Task encounters the issue before the Connector, which will be common if the Connector needs to poll for changes, the Task will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.

      SinkConnectors usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. SinkTasks should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple SinkTasks seeing a new input stream for the first time and simultaneously trying to create the new resource. SinkConnectors, on the other hand, will generally require no special code for handling a dynamic set of streams.
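
One common way to deal with that race is to make resource creation idempotent, as in this sketch; tableFor(), createTableIfMissing(), and writeToTable() are hypothetical helpers for an imaginary destination system:

@Override
public void put(Collection<SinkRecord> records) {
    for (SinkRecord record : records) {
        String table = tableFor(record.topic());
        // Must tolerate another SinkTask creating the same table concurrently,
        // e.g. by ignoring an "already exists" error from the destination system.
        createTableIfMissing(table);
        writeToTable(table, record);
    }
}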

      Configuration Validation

      Kafka Connect allows you to validate connector configurations before submitting a connector to be executed and can provide feedback about errors and recommended values. To take advantage of this, connector developers need to provide an implementation of config() to expose the configuration definition to the framework.

      The following code in FileStreamSourceConnector defines the configuration and exposes it to the framework.

      static final ConfigDef CONFIG_DEF = new ConfigDef()
          .define(FILE_CONFIG, Type.STRING, null, Importance.HIGH, "Source filename. If not specified, the standard input will be used")
          .define(TOPIC_CONFIG, Type.STRING, ConfigDef.NO_DEFAULT_VALUE, new ConfigDef.NonEmptyString(), Importance.HIGH, "The topic to publish data to")
          .define(TASK_BATCH_SIZE_CONFIG, Type.INT, DEFAULT_TASK_BATCH_SIZE, Importance.LOW,
              "The maximum number of records the source task can read from the file each time it is polled");
      
      public ConfigDef config() {
          return CONFIG_DEF;
      }
      

The ConfigDef class is used for specifying the set of expected configurations. For each configuration, you can specify the name, the type, the default value, the documentation, the group information, the order in the group, the width of the configuration value, and the name suitable for display in the UI. In addition, you can provide special validation logic for single-configuration validation by providing an implementation of Validator. Moreover, there may be dependencies between configurations; for example, the valid values and visibility of a configuration may change according to the values of other configurations. To handle this, ConfigDef allows you to specify the dependents of a configuration and to provide an implementation of Recommender to get valid values and set the visibility of a configuration given the current configuration values.
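
As a sketch, a custom Validator for a hypothetical batch.size setting might look like the following; ConfigDef.Validator has a single ensureValid() method, so a lambda can be used:

static final ConfigDef.Validator POSITIVE = (name, value) -> {
    if (value != null && ((Integer) value) <= 0) {
        throw new ConfigException(name, value, "must be a positive integer");
    }
};

static final ConfigDef CONFIG_DEF = new ConfigDef()
    .define("batch.size", Type.INT, 100, POSITIVE, Importance.LOW,
        "Maximum number of records per batch"); // hypothetical configuration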

      Also, the validate() method in Connector provides a default validation implementation which returns a list of allowed configurations together with configuration errors and recommended values for each configuration. However, it does not use the recommended values for configuration validation. You may provide an override of the default implementation for customized configuration validation, which may use the recommended values.
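
For example, here is a sketch of a customized override that layers a cross-property check on top of the default validation; the specific rule relating standard input to tasks.max is illustrative:

@Override
public Config validate(Map<String, String> connectorConfigs) {
    Config config = super.validate(connectorConfigs); // runs the ConfigDef-based checks
    String file = connectorConfigs.get(FILE_CONFIG);
    String maxTasks = connectorConfigs.getOrDefault("tasks.max", "1");
    if ((file == null || file.isEmpty()) && Integer.parseInt(maxTasks) > 1) {
        for (ConfigValue value : config.configValues()) {
            if (FILE_CONFIG.equals(value.name())) {
                value.addErrorMessage("Reading from standard input supports at most one task");
            }
        }
    }
    return config;
}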

      Working with Schemas

      The FileStream connectors are good examples because they are simple, but they also have trivially structured data – each line is just a string. Almost all practical connectors will need schemas with more complex data formats.

      To create more complex data, you’ll need to work with the Kafka Connect data API. Most structured records will need to interact with two classes in addition to primitive types: Schema and Struct.

      The API documentation provides a complete reference, but here is a simple example creating a Schema and Struct:

      Schema schema = SchemaBuilder.struct().name(NAME)
          .field("name", Schema.STRING_SCHEMA)
          .field("age", Schema.INT_SCHEMA)
          .field("admin", SchemaBuilder.bool().defaultValue(false).build())
          .build();
      
      Struct struct = new Struct(schema)
          .put("name", "Barbara Liskov")
          .put("age", 75);
      

If you are implementing a source connector, you’ll need to decide when and how to create schemas. You should avoid recomputing them wherever possible. For example, if your connector is guaranteed to have a fixed schema, create it statically and reuse a single instance.

      However, many connectors will have dynamic schemas. One simple example of this is a database connector. Considering even just a single table, the schema will not be predefined for the entire connector (as it varies from table to table). But it also may not be fixed for a single table over the lifetime of the connector since the user may execute an ALTER TABLE command. The connector must be able to detect these changes and react appropriately.
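
A sketch of one approach for such a connector: cache the Schema per table and rebuild it only when a change is detected. tableChanged() and buildSchemaFor() are hypothetical helpers:

private final Map<String, Schema> schemaCache = new HashMap<>();

private Schema schemaFor(String table) {
    Schema cached = schemaCache.get(table);
    if (cached == null || tableChanged(table)) {
        // Rebuild only when the table is new or its columns changed (e.g. after ALTER TABLE).
        cached = buildSchemaFor(table);
        schemaCache.put(table, cached);
    }
    return cached;
}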

      Sink connectors are usually simpler because they are consuming data and therefore do not need to create schemas. However, they should take just as much care to validate that the schemas they receive have the expected format. When the schema does not match – usually indicating the upstream producer is generating invalid data that cannot be correctly translated to the destination system – sink connectors should throw an exception to indicate this error to the system.
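
For example, a sink task’s put() might check the value schema before translating each record and throw Connect’s DataException when the data cannot be handled; the STRUCT requirement here is an illustrative expectation of an imaginary destination system:

for (SinkRecord record : records) {
    Schema valueSchema = record.valueSchema();
    if (valueSchema == null || valueSchema.type() != Schema.Type.STRUCT) {
        throw new DataException("Expected a STRUCT value for record from topic " + record.topic());
    }
    Struct value = (Struct) record.value();
    // translate the struct fields into the destination system's format ...
}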

      8.4 - Administration

Administration

      Kafka Connect’s REST layer provides a set of APIs to enable administration of the cluster. This includes APIs to view the configuration of connectors and the status of their tasks, as well as to alter their current behavior (e.g. changing configuration and restarting tasks).

      When a connector is first submitted to the cluster, a rebalance is triggered between the Connect workers in order to distribute the load that consists of the tasks of the new connector. This same rebalancing procedure is also used when connectors increase or decrease the number of tasks they require, when a connector’s configuration is changed, or when a worker is added or removed from the group as part of an intentional upgrade of the Connect cluster or due to a failure.

In versions prior to 2.3.0, the Connect workers would rebalance the full set of connectors and their tasks in the cluster as a simple way to make sure that each worker has approximately the same amount of work. This behavior can still be enabled by setting connect.protocol=eager.

Starting with 2.3.0, Kafka Connect uses by default a protocol that performs incremental cooperative rebalancing, which incrementally balances the connectors and tasks across the Connect workers, affecting only tasks that are new, to be removed, or need to move from one worker to another. Other tasks are not stopped and restarted during the rebalance, as they would have been with the old protocol.

      If a Connect worker leaves the group, intentionally or due to a failure, Connect waits for scheduled.rebalance.max.delay.ms before triggering a rebalance. This delay defaults to five minutes (300000ms) to tolerate failures or upgrades of workers without immediately redistributing the load of a departing worker. If this worker returns within the configured delay, it gets its previously assigned tasks in full. However, this means that the tasks will remain unassigned until the time specified by scheduled.rebalance.max.delay.ms elapses. If a worker does not return within that time limit, Connect will reassign those tasks among the remaining workers in the Connect cluster.

      The new Connect protocol is enabled when all the workers that form the Connect cluster are configured with connect.protocol=compatible, which is also the default value when this property is missing. Therefore, upgrading to the new Connect protocol happens automatically when all the workers upgrade to 2.3.0. A rolling upgrade of the Connect cluster will activate incremental cooperative rebalancing when the last worker joins on version 2.3.0.

      You can use the REST API to view the current status of a connector and its tasks, including the ID of the worker to which each was assigned. For example, the GET /connectors/file-source/status request shows the status of a connector named file-source:

      {
          "name": "file-source",
          "connector": {
              "state": "RUNNING",
              "worker_id": "192.168.1.208:8083"
          },
          "tasks": [
              {
                  "id": 0,
                  "state": "RUNNING",
                  "worker_id": "192.168.1.209:8083"
              }
          ]
      }
      

      Connectors and their tasks publish status updates to a shared topic (configured with status.storage.topic) which all workers in the cluster monitor. Because the workers consume this topic asynchronously, there is typically a (short) delay before a state change is visible through the status API. The following states are possible for a connector or one of its tasks:

      • UNASSIGNED: The connector/task has not yet been assigned to a worker.
      • RUNNING: The connector/task is running.
      • PAUSED: The connector/task has been administratively paused.
      • STOPPED: The connector has been stopped. Note that this state is not applicable to tasks because the tasks for a stopped connector are shut down and won’t be visible in the status API.
      • FAILED: The connector/task has failed (usually by raising an exception, which is reported in the status output).
• RESTARTING: The connector/task is either actively restarting or is expected to restart soon.

      In most cases, connector and task states will match, though they may be different for short periods of time when changes are occurring or if tasks have failed. For example, when a connector is first started, there may be a noticeable delay before the connector and its tasks have all transitioned to the RUNNING state. States will also diverge when tasks fail since Connect does not automatically restart failed tasks. To restart a connector/task manually, you can use the restart APIs listed above. Note that if you try to restart a task while a rebalance is taking place, Connect will return a 409 (Conflict) status code. You can retry after the rebalance completes, but it might not be necessary since rebalances effectively restart all the connectors and tasks in the cluster.

      Starting with 2.5.0, Kafka Connect uses the status.storage.topic to also store information related to the topics that each connector is using. Connect Workers use these per-connector topic status updates to respond to requests to the REST endpoint GET /connectors/{name}/topics by returning the set of topic names that a connector is using. A request to the REST endpoint PUT /connectors/{name}/topics/reset resets the set of active topics for a connector and allows a new set to be populated, based on the connector’s latest pattern of topic usage. Upon connector deletion, the set of the connector’s active topics is also deleted. Topic tracking is enabled by default but can be disabled by setting topic.tracking.enable=false. If you want to disallow requests to reset the active topics of connectors during runtime, set the Worker property topic.tracking.allow.reset=false.

      It’s sometimes useful to temporarily stop the message processing of a connector. For example, if the remote system is undergoing maintenance, it would be preferable for source connectors to stop polling it for new data instead of filling logs with exception spam. For this use case, Connect offers a pause/resume API. While a source connector is paused, Connect will stop polling it for additional records. While a sink connector is paused, Connect will stop pushing new messages to it. The pause state is persistent, so even if you restart the cluster, the connector will not begin message processing again until the task has been resumed. Note that there may be a delay before all of a connector’s tasks have transitioned to the PAUSED state since it may take time for them to finish whatever processing they were in the middle of when being paused. Additionally, failed tasks will not transition to the PAUSED state until they have been restarted.

In 3.5.0, Connect introduced a stop API that completely shuts down the tasks for a connector and deallocates any resources claimed by them. This is different from pausing a connector where tasks are left idling and any resources claimed by them are left allocated (which allows the connector to begin processing data quickly once it is resumed). Stopping a connector is more efficient from a resource usage standpoint than pausing it, but can cause it to take longer to begin processing data once resumed. Note that the offsets for a connector can only be modified via the offsets management endpoints if it is in the stopped state.

      9 - Kafka Streams

      9.1 - Introduction

      Kafka Streams


      The easiest way to write mission-critical real-time applications and microservices

      Kafka Streams is a client library for building applications and microservices, where the input and output data are stored in Kafka clusters. It combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka’s server-side cluster technology.

Tour of the Streams API (video series): 1. Intro to Streams, 2. Creating a Streams Application, 3. Transforming Data Pt. 1, 4. Transforming Data Pt. 2


      Why you’ll love using Kafka Streams!

      • Elastic, highly scalable, fault-tolerant
      • Deploy to containers, VMs, bare metal, cloud
      • Equally viable for small, medium, & large use cases
      • Fully integrated with Kafka security
      • Write standard Java and Scala applications
      • Exactly-once processing semantics
      • No separate processing cluster required
      • Develop on Mac, Linux, Windows

      Write your first app


      Kafka Streams use cases

The New York Times uses Apache Kafka and Kafka Streams to store and distribute, in real time, published content to the various applications and systems that make it available to the readers.

As the leading online fashion retailer in Europe, Zalando uses Kafka as an ESB (Enterprise Service Bus), which helps us in transitioning from a monolithic to a microservices architecture. Using Kafka for processing event streams enables our technical team to do near-real-time business intelligence.

LINE uses Apache Kafka as a central datahub for our services to communicate to one another. Hundreds of billions of messages are produced daily and are used to execute various business logic, threat detection, search indexing and data analysis. LINE leverages Kafka Streams to reliably transform and filter topics, enabling sub-topics that consumers can efficiently consume, while retaining easy maintainability thanks to its sophisticated yet minimal code base.

Pinterest uses Apache Kafka and Kafka Streams at large scale to power the real-time, predictive budgeting system of their advertising infrastructure. With Kafka Streams, spend predictions are more accurate than ever.

Rabobank is one of the 3 largest banks in the Netherlands. Its digital nervous system, the Business Event Bus, is powered by Apache Kafka. It is used by an increasing number of financial processes and services, one of which is Rabo Alerts. This service alerts customers in real time upon financial events and is built using Kafka Streams.

      Trivago is a global hotel search platform. We are focused on reshaping the way travelers search for and compare hotels, while enabling hotel advertisers to grow their businesses by providing access to a broad audience of travelers via our websites and apps. As of 2017, we offer access to approximately 1.8 million hotels and other accommodations in over 190 countries. We use Kafka, Kafka Connect, and Kafka Streams to enable our developers to access data freely in the company. Kafka Streams powers parts of our analytics pipeline and delivers endless options to explore and operate on the data sources we have at hand.

      Hello Kafka Streams

The code example below implements a WordCount application that is elastic, highly scalable, fault-tolerant, stateful, and ready to run in production at large scale.

      Java Scala

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.common.utils.Bytes;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.kstream.KStream;
      import org.apache.kafka.streams.kstream.KTable;
      import org.apache.kafka.streams.kstream.Materialized;
      import org.apache.kafka.streams.kstream.Produced;
      import org.apache.kafka.streams.state.KeyValueStore;
      
      import java.util.Arrays;
      import java.util.Properties;
      
      public class WordCountApplication {
      
         public static void main(final String[] args) throws Exception {
             Properties props = new Properties();
             props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application");
             props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092");
             props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
             props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      
             StreamsBuilder builder = new StreamsBuilder();
             KStream<String, String> textLines = builder.stream("TextLinesTopic");
             KTable<String, Long> wordCounts = textLines
        .flatMapValues(textLine -> Arrays.asList(textLine.toLowerCase().split("\\W+")))
                 .groupBy((key, word) -> word)
                 .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));
             wordCounts.toStream().to("WordsWithCountsTopic", Produced.with(Serdes.String(), Serdes.Long()));
      
             KafkaStreams streams = new KafkaStreams(builder.build(), props);
             streams.start();
         }
      
      }
      
      
      import java.util.Properties
import java.time.Duration
      
      import org.apache.kafka.streams.kstream.Materialized
      import org.apache.kafka.streams.scala.ImplicitConversions._
      import org.apache.kafka.streams.scala._
      import org.apache.kafka.streams.scala.kstream._
      import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
      
      object WordCountApplication extends App {
        import Serdes._
      
        val props: Properties = {
          val p = new Properties()
          p.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application")
          p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092")
          p
        }
      
        val builder: StreamsBuilder = new StreamsBuilder
        val textLines: KStream[String, String] = builder.stream[String, String]("TextLinesTopic")
        val wordCounts: KTable[String, Long] = textLines
    .flatMapValues(textLine => textLine.toLowerCase.split("\\W+"))
          .groupBy((_, word) => word)
          .count()(Materialized.as("counts-store"))
        wordCounts.toStream.to("WordsWithCountsTopic")
      
        val streams: KafkaStreams = new KafkaStreams(builder.build(), props)
        streams.start()
      
        sys.ShutdownHookThread {
   streams.close(Duration.ofSeconds(10))
        }
      }
      


      9.2 - Quick Start

      Run Kafka Streams Demo Application


      This tutorial assumes you are starting fresh and have no existing Kafka data. However, if you have already started Kafka, feel free to skip the first two steps.

      Kafka Streams is a client library for building mission-critical real-time applications and microservices, where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka’s server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed, and much more.

This quickstart example will demonstrate how to run a streaming application coded in this library. Here is the gist of the WordCountDemo example code (https://github.com/apache/kafka/blob/4.0/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java).

      // Serializers/deserializers (serde) for String and Long types
      final Serde<String> stringSerde = Serdes.String();
      final Serde<Long> longSerde = Serdes.Long();
      
      // Construct a `KStream` from the input topic "streams-plaintext-input", where message values
      // represent lines of text (for the sake of this example, we ignore whatever may be stored
      // in the message keys).
      KStream<String, String> textLines = builder.stream(
            "streams-plaintext-input",
            Consumed.with(stringSerde, stringSerde)
          );
      
      KTable<String, Long> wordCounts = textLines
          // Split each text line, by whitespace, into words.
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
      
          // Group the text words as message keys
          .groupBy((key, value) -> value)
      
          // Count the occurrences of each word (message key).
          .count();
      
      // Store the running counts as a changelog stream to the output topic.
      wordCounts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
      

      It implements the WordCount algorithm, which computes a word occurrence histogram from the input text. However, unlike other WordCount examples you might have seen before that operate on bounded data, the WordCount demo application behaves slightly differently because it is designed to operate on an infinite, unbounded stream of data. Similar to the bounded variant, it is a stateful algorithm that tracks and updates the counts of words. However, since it must assume potentially unbounded input data, it will periodically output its current state and results while continuing to process more data because it cannot know when it has processed “all” the input data.

      As the first step, we will start Kafka (unless you already have it started) and then we will prepare input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.

      Step 1: Download the code

      Download the 4.0.0 release and un-tar it. Note that there are multiple downloadable Scala versions and we choose to use the recommended version (2.13) here:

      $ tar -xzf kafka_2.13-4.0.0.tgz
      $ cd kafka_2.13-4.0.0
      

      Step 2: Start the Kafka server

      Generate a Cluster UUID

      $ KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
      

      Format Log Directories

      $ bin/kafka-storage.sh format --standalone -t $KAFKA_CLUSTER_ID -c config/server.properties
      

      Start the Kafka Server

      $ bin/kafka-server-start.sh config/server.properties
      

      Step 3: Prepare input topic and start Kafka producer

Next, we create the input topic named streams-plaintext-input and the output topic named streams-wordcount-output:

      $ bin/kafka-topics.sh --create \
          --bootstrap-server localhost:9092 \
          --replication-factor 1 \
          --partitions 1 \
          --topic streams-plaintext-input
      Created topic "streams-plaintext-input".
      

      Note: we create the output topic with compaction enabled because the output stream is a changelog stream (cf. explanation of application output below).

      $ bin/kafka-topics.sh --create \
          --bootstrap-server localhost:9092 \
          --replication-factor 1 \
          --partitions 1 \
          --topic streams-wordcount-output \
          --config cleanup.policy=compact
      Created topic "streams-wordcount-output".
      

      The created topic can be described with the same kafka-topics tool:

      $ bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe
      Topic:streams-wordcount-output	PartitionCount:1	ReplicationFactor:1	Configs:cleanup.policy=compact,segment.bytes=1073741824
      	Topic: streams-wordcount-output	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
      Topic:streams-plaintext-input	PartitionCount:1	ReplicationFactor:1	Configs:segment.bytes=1073741824
      	Topic: streams-plaintext-input	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
      

      Step 4: Start the Wordcount Application

      The following command starts the WordCount demo application:

      $ bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
      

The demo application will read from the input topic streams-plaintext-input, perform the computations of the WordCount algorithm on each of the read messages, and continuously write its current results to the output topic streams-wordcount-output. Hence there won’t be any STDOUT output except log entries as the results are written back into Kafka.

      Now we can start the console producer in a separate terminal to write some input data to this topic:

      $ bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
      

      and inspect the output of the WordCount demo application by reading from its output topic with the console consumer in a separate terminal:

      $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
          --topic streams-wordcount-output \
          --from-beginning \
          --property print.key=true \
          --property print.value=true \
          --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
          --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
      

      Step 5: Process some data

Now let’s write a message with the console producer into the input topic streams-plaintext-input by entering a single line of text and then hitting RETURN. This will send a new message to the input topic, where the message key is null and the message value is the string-encoded text line that you just entered (in practice, input data for applications will typically be streaming continuously into Kafka, rather than being manually entered as we do in this quickstart):

      $ bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
      >all streams lead to kafka
      

      This message will be processed by the Wordcount application and the following output data will be written to the streams-wordcount-output topic and printed by the console consumer:

      $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
          --topic streams-wordcount-output \
          --from-beginning \
          --property print.key=true \
          --property print.value=true \
          --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
          --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
      
      all	    1
      streams	1
      lead	1
      to	    1
      kafka	1
      

Here, the first column is the Kafka message key in java.lang.String format and represents a word that is being counted, and the second column is the message value in java.lang.Long format, representing the word’s latest count.

Now let’s continue writing one more message with the console producer into the input topic streams-plaintext-input. Enter the text line “hello kafka streams” and hit RETURN. Your terminal should look as follows:

      $ bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
      >all streams lead to kafka
      >hello kafka streams
      

      In your other terminal in which the console consumer is running, you will observe that the WordCount application wrote new output data:

      $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
          --topic streams-wordcount-output \
          --from-beginning \
          --property print.key=true \
          --property print.value=true \
          --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
          --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
      
      all	    1
      streams	1
      lead	1
      to	    1
      kafka	1
      hello	1
      kafka	2
      streams	2
      

Here the last printed lines kafka 2 and streams 2 indicate updates to the keys kafka and streams whose counts have been incremented from 1 to 2. Whenever you write further input messages to the input topic, you will observe new messages being added to the streams-wordcount-output topic, representing the most recent word counts as computed by the WordCount application. Let’s enter one final input text line “join kafka summit” and hit RETURN in the console producer on the input topic streams-plaintext-input before we wrap up this quickstart:

      $ bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic streams-plaintext-input
      >all streams lead to kafka
      >hello kafka streams
      >join kafka summit
      

      The streams-wordcount-output topic will subsequently show the corresponding updated word counts (see last three lines):

      $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
          --topic streams-wordcount-output \
          --from-beginning \
          --property print.key=true \
          --property print.value=true \
          --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
          --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
      
      all	    1
      streams	1
      lead	1
      to	    1
      kafka	1
      hello	1
      kafka	2
      streams	2
      join	1
      kafka	3
      summit	1
      

As one can see, the output of the WordCount application is actually a continuous stream of updates, where each output record (i.e. each line in the original output above) is an updated count for a single word, i.e. a record key such as “kafka”. For multiple records with the same key, each later record is an update of the previous one.

The two diagrams below illustrate what is essentially happening behind the scenes. The first column shows the evolution of the current state of the KTable<String, Long> that is counting word occurrences for the count operator. The second column shows the change records that result from state updates to the KTable and that are being sent to the output Kafka topic streams-wordcount-output.

      First the text line “all streams lead to kafka” is being processed. The KTable is being built up as each new word results in a new table entry (highlighted with a green background), and a corresponding change record is sent to the downstream KStream.

      When the second text line “hello kafka streams” is processed, we observe, for the first time, that existing entries in the KTable are being updated (here: for the words “kafka” and for “streams”). And again, change records are being sent to the output topic.

      And so on (we skip the illustration of how the third line is being processed). This explains why the output topic has the contents we showed above, because it contains the full record of changes.

      Looking beyond the scope of this concrete example, what Kafka Streams is doing here is to leverage the duality between a table and a changelog stream (here: table = the KTable, changelog stream = the downstream KStream): you can publish every change of the table to a stream, and if you consume the entire changelog stream from beginning to end, you can reconstruct the contents of the table.
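To make this duality concrete, here is a minimal sketch (not part of the quickstart itself) of how the word-count table could be reconstructed by replaying the changelog topic streams-wordcount-output with a plain Java consumer; the group id and the fixed polling loop are assumptions made for illustration:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class RebuildWordCountTable {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "wordcount-table-rebuilder"); // hypothetical group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");         // replay the changelog from the beginning
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.LongDeserializer");

        Map<String, Long> table = new HashMap<>();
        try (KafkaConsumer<String, Long> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("streams-wordcount-output"));
            // Later records for the same key overwrite earlier ones, so after the replay
            // the map holds the latest count per word, i.e. the reconstructed table.
            for (int i = 0; i < 10; i++) {
                for (ConsumerRecord<String, Long> record : consumer.poll(Duration.ofSeconds(1))) {
                    table.put(record.key(), record.value());
                }
            }
        }
        table.forEach((word, count) -> System.out.println(word + "\t" + count));
    }
}

After the three input lines from this quickstart, the map would for example contain kafka mapped to 3 and streams mapped to 2, matching the latest counts shown above.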

      Step 6: Teardown the application

You can now stop the console consumer, the console producer, the WordCount application, and the Kafka broker, in that order, via Ctrl-C.


      9.3 - Write a streams app

      Tutorial: Write a Kafka Streams Application


In this guide we will set up your own project from scratch to write a stream processing application using Kafka Streams. If you have not done so already, it is highly recommended to read the quickstart first to learn how to run an existing Streams application.

      Setting up a Maven Project

      We are going to use a Kafka Streams Maven Archetype for creating a Streams project structure with the following commands:

      $ mvn archetype:generate \
      -DarchetypeGroupId=org.apache.kafka \
      -DarchetypeArtifactId=streams-quickstart-java \
      -DarchetypeVersion=4.0.0 \
      -DgroupId=streams.examples \
      -DartifactId=streams-quickstart \
      -Dversion=0.1 \
      -Dpackage=myapps
      

      You can use a different value for groupId, artifactId and package parameters if you like. Assuming the above parameter values are used, this command will create a project structure that looks like this:

      $ tree streams-quickstart
      streams-quickstart
      |-- pom.xml
      |-- src
          |-- main
              |-- java
              |   |-- myapps
              |       |-- LineSplit.java
              |       |-- Pipe.java
              |       |-- WordCount.java
              |-- resources
                  |-- log4j.properties
      

The pom.xml file included in the project already has the Streams dependency defined. Note that the generated pom.xml targets Java 11.

There are already several example programs written with the Streams library under src/main/java. Since we are going to start writing such programs from scratch, we can now delete these examples:

      $ cd streams-quickstart
      $ rm src/main/java/myapps/*.java
      

      Writing a first Streams application: Pipe

It’s coding time now! Feel free to open your favorite IDE and import this Maven project, or simply open a text editor and create a Java file under src/main/java/myapps. Let’s name it Pipe.java:

      package myapps;
      
      public class Pipe {
      
          public static void main(String[] args) throws Exception {
      
          }
      }
      

We are going to fill in the main function to write this pipe program. Note that we will not list the import statements as we go since IDEs can usually add them automatically. However, if you are using a text editor you need to add the imports manually; at the end of this section we’ll show the complete code snippet with import statements.

The first step to write a Streams application is to create a java.util.Properties map to specify different Streams execution configuration values as defined in StreamsConfig. A couple of important configuration values you need to set are: StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, which specifies a list of host/port pairs to use for establishing the initial connection to the Kafka cluster, and StreamsConfig.APPLICATION_ID_CONFIG, which gives your Streams application a unique identifier to distinguish it from other applications talking to the same Kafka cluster:

      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // assuming that the Kafka broker this application is talking to runs on local machine with port 9092
      

      In addition, you can customize other configurations in the same map, for example, default serialization and deserialization libraries for the record key-value pairs:

      props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      

      For a full list of configurations of Kafka Streams please refer to this table.

      Next we will define the computational logic of our Streams application. In Kafka Streams this computational logic is defined as a topology of connected processor nodes. We can use a topology builder to construct such a topology,

      final StreamsBuilder builder = new StreamsBuilder();
      

      And then create a source stream from a Kafka topic named streams-plaintext-input using this topology builder:

      KStream<String, String> source = builder.stream("streams-plaintext-input");
      

      Now we get a KStream that is continuously generating records from its source Kafka topic streams-plaintext-input. The records are organized as String typed key-value pairs. The simplest thing we can do with this stream is to write it into another Kafka topic, say it’s named streams-pipe-output:

      source.to("streams-pipe-output");
      

      Note that we can also concatenate the above two lines into a single line as:

      builder.stream("streams-plaintext-input").to("streams-pipe-output");
      

      We can inspect what kind of topology is created from this builder by doing the following:

      final Topology topology = builder.build();
      

      And print its description to standard output as:

      System.out.println(topology.describe());
      

      If we just stop here, compile and run the program, it will output the following information:

      $ mvn clean package
      $ mvn exec:java -Dexec.mainClass=myapps.Pipe
      Sub-topologies:
        Sub-topology: 0
          Source: KSTREAM-SOURCE-0000000000(topics: streams-plaintext-input) --> KSTREAM-SINK-0000000001
          Sink: KSTREAM-SINK-0000000001(topic: streams-pipe-output) <-- KSTREAM-SOURCE-0000000000
      Global Stores:
        none
      

As shown above, the output illustrates that the constructed topology has two processor nodes, a source node KSTREAM-SOURCE-0000000000 and a sink node KSTREAM-SINK-0000000001. KSTREAM-SOURCE-0000000000 continuously reads records from the Kafka topic streams-plaintext-input and pipes them to its downstream node KSTREAM-SINK-0000000001; KSTREAM-SINK-0000000001 will write each of its received records in order to another Kafka topic streams-pipe-output (the --> and <-- arrows indicate the downstream and upstream processor nodes of this node, i.e. “children” and “parents” within the topology graph). It also illustrates that this simple topology has no global state stores associated with it (we will talk about state stores more in the following sections).

Note that we can always describe the topology as we did above at any given point while we are building it in the code, so as a user you can interactively “try and taste” your computational logic defined in the topology until you are happy with it. Assuming we are done with this simple topology that just pipes data from one Kafka topic to another in an endless streaming manner, we can now construct the Streams client with the two components we have just built: the configuration map specified in a java.util.Properties instance and the Topology object.

      final KafkaStreams streams = new KafkaStreams(topology, props);
      

      By calling its start() function we can trigger the execution of this client. The execution won’t stop until close() is called on this client. We can, for example, add a shutdown hook with a countdown latch to capture a user interrupt and close the client upon terminating this program:

      final CountDownLatch latch = new CountDownLatch(1);
      
      // attach shutdown handler to catch control-c
      Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
          @Override
          public void run() {
              streams.close();
              latch.countDown();
          }
      });
      
      try {
          streams.start();
          latch.await();
      } catch (Throwable e) {
          System.exit(1);
      }
      System.exit(0);
      

      The complete code so far looks like this:

      package myapps;
      
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.Topology;
      
      import java.util.Properties;
      import java.util.concurrent.CountDownLatch;
      
      public class Pipe {
      
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
              props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
              props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
              props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      
              final StreamsBuilder builder = new StreamsBuilder();
      
              builder.stream("streams-plaintext-input").to("streams-pipe-output");
      
              final Topology topology = builder.build();
      
              final KafkaStreams streams = new KafkaStreams(topology, props);
              final CountDownLatch latch = new CountDownLatch(1);
      
              // attach shutdown handler to catch control-c
              Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
                  @Override
                  public void run() {
                      streams.close();
                      latch.countDown();
                  }
              });
      
              try {
                  streams.start();
                  latch.await();
              } catch (Throwable e) {
                  System.exit(1);
              }
              System.exit(0);
          }
      }
      

      If you already have the Kafka broker up and running at localhost:9092, and the topics streams-plaintext-input and streams-pipe-output created on that broker, you can run this code in your IDE or on the command line, using Maven:

      $ mvn clean package
      $ mvn exec:java -Dexec.mainClass=myapps.Pipe
      

      For detailed instructions on how to run a Streams application and observe its computing results, please read the Play with a Streams Application section. We will not talk about this in the rest of this section.
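Note that if the input and output topics do not exist on the broker yet, they can be created with the kafka-topics.sh command line tool or programmatically. The following is a minimal sketch using the Java Admin client; the broker address, partition count, and replication factor are assumptions for a local single-broker setup:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // One partition and replication factor 1 are enough for a local test broker.
            admin.createTopics(List.of(
                    new NewTopic("streams-plaintext-input", 1, (short) 1),
                    new NewTopic("streams-pipe-output", 1, (short) 1)))
                 .all()
                 .get(); // block until both topics have been created
        }
    }
}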

      Writing a second Streams application: Line Split

We have learned how to construct a Streams client with its two key components: the StreamsConfig and the Topology. Now let’s move on to add some real processing logic by augmenting the current topology. We can create another program by copying the existing Pipe.java class:

      $ cp src/main/java/myapps/Pipe.java src/main/java/myapps/LineSplit.java
      

And change its class name as well as the application id config to distinguish it from the original program:

      public class LineSplit {
      
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-linesplit");
              // ...
          }
      }
      

Since each of the source stream’s records is a String typed key-value pair, let’s treat the value string as a text line and split it into words with a flatMapValues operator:

      KStream<String, String> source = builder.stream("streams-plaintext-input");
      KStream<String, String> words = source.flatMapValues(new ValueMapper<String, Iterable<String>>() {
                  @Override
                  public Iterable<String> apply(String value) {
                      return Arrays.asList(value.split("\\W+"));
                  }
              });
      

The operator will take the source stream as its input, and generate a new stream named words by processing each record from its source stream in order, breaking its value string into a list of words, and producing each word as a new record to the output words stream. This is a stateless operator that does not need to keep track of any previously received records or processed results. Note that since lambda expressions are available from Java 8 onwards, you can simplify the above code as:

      KStream<String, String> source = builder.stream("streams-plaintext-input");
      KStream<String, String> words = source.flatMapValues(value -> Arrays.asList(value.split("\\W+")));
      

      And finally we can write the word stream back into another Kafka topic, say streams-linesplit-output. Again, these two steps can be concatenated as the following (assuming lambda expression is used):

      KStream<String, String> source = builder.stream("streams-plaintext-input");
      source.flatMapValues(value -> Arrays.asList(value.split("\\W+")))
            .to("streams-linesplit-output");
      

      If we now describe this augmented topology as System.out.println(topology.describe()), we will get the following:

      $ mvn clean package
      $ mvn exec:java -Dexec.mainClass=myapps.LineSplit
      Sub-topologies:
        Sub-topology: 0
          Source: KSTREAM-SOURCE-0000000000(topics: streams-plaintext-input) --> KSTREAM-FLATMAPVALUES-0000000001
          Processor: KSTREAM-FLATMAPVALUES-0000000001(stores: []) --> KSTREAM-SINK-0000000002 <-- KSTREAM-SOURCE-0000000000
          Sink: KSTREAM-SINK-0000000002(topic: streams-linesplit-output) <-- KSTREAM-FLATMAPVALUES-0000000001
        Global Stores:
          none
      

As we can see above, a new processor node KSTREAM-FLATMAPVALUES-0000000001 is injected into the topology between the original source and sink nodes. It takes the source node as its parent and the sink node as its child. In other words, each record fetched by the source node will first traverse to the newly added KSTREAM-FLATMAPVALUES-0000000001 node to be processed, and one or more new records will be generated as a result. They will then continue traversing down to the sink node to be written back to Kafka. Note this processor node is “stateless” as it is not associated with any stores (i.e. (stores: [])).

      The complete code looks like this (assuming lambda expression is used):

      package myapps;
      
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.Topology;
      import org.apache.kafka.streams.kstream.KStream;
      
      import java.util.Arrays;
      import java.util.Properties;
      import java.util.concurrent.CountDownLatch;
      
      public class LineSplit {
      
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-linesplit");
              props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
              props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
              props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      
              final StreamsBuilder builder = new StreamsBuilder();
      
              KStream<String, String> source = builder.stream("streams-plaintext-input");
              source.flatMapValues(value -> Arrays.asList(value.split("\\W+")))
                    .to("streams-linesplit-output");
      
              final Topology topology = builder.build();
              final KafkaStreams streams = new KafkaStreams(topology, props);
              final CountDownLatch latch = new CountDownLatch(1);
      
              // ... same as Pipe.java above
          }
      }
      

      Writing a third Streams application: Wordcount

      Let’s now take a step further to add some “stateful” computations to the topology by counting the occurrence of the words split from the source text stream. Following similar steps let’s create another program based on the LineSplit.java class:

      public class WordCount {
      
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
              // ...
          }
      }
      

In order to count the words we can first modify the flatMapValues operator to treat all of them as lower case:

      source.flatMapValues(new ValueMapper<String, Iterable<String>>() {
          @Override
          public Iterable<String> apply(String value) {
              return Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+"));
          }
      });
      

In order to do the counting aggregation we have to first specify that we want to key the stream on the value string, i.e. the lower cased word, with a groupBy operator. This operator generates a new grouped stream, which can then be aggregated by a count operator, which generates a running count on each of the grouped keys:

      KTable<String, Long> counts =
      source.flatMapValues(new ValueMapper<String, Iterable<String>>() {
                  @Override
                  public Iterable<String> apply(String value) {
                      return Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+"));
                  }
              })
            .groupBy(new KeyValueMapper<String, String, String>() {
                 @Override
                 public String apply(String key, String value) {
                     return value;
                 }
              })
            // Materialize the result into a KeyValueStore named "counts-store".
            // The Materialized store is always of type <Bytes, byte[]> as this is the format of the innermost store.
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>> as("counts-store"));
      

Note that the count operator has a Materialized parameter that specifies that the running count should be stored in a state store named counts-store. This counts-store can be queried in real time, with details described in the Developer Manual.
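As a rough illustration of such a query, the sketch below uses the Interactive Queries API to read the locally held counts from the running application; it assumes the topology and props defined in this section and only sees the shards of the store hosted by this instance:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// ... build `topology` and `props` as shown in this section ...
KafkaStreams streams = new KafkaStreams(topology, props);
streams.start();

// Once the instance is in RUNNING state, the local shard of "counts-store" can be queried read-only.
ReadOnlyKeyValueStore<String, Long> counts = streams.store(
        StoreQueryParameters.fromNameAndType("counts-store", QueryableStoreTypes.keyValueStore()));
Long kafkaCount = counts.get("kafka"); // latest locally held count for the word "kafka", or null if absent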

      We can also write the counts KTable’s changelog stream back into another Kafka topic, say streams-wordcount-output. Because the result is a changelog stream, the output topic streams-wordcount-output should be configured with log compaction enabled. Note that this time the value type is no longer String but Long, so the default serialization classes are not viable for writing it to Kafka anymore. We need to provide overridden serialization methods for Long types, otherwise a runtime exception will be thrown:

      counts.toStream().to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
      

      Note that in order to read the changelog stream from topic streams-wordcount-output, one needs to set the value deserialization as org.apache.kafka.common.serialization.LongDeserializer. Details of this can be found in the Play with a Streams Application section. Assuming lambda expression from JDK 8 can be used, the above code can be simplified as:

      KStream<String, String> source = builder.stream("streams-plaintext-input");
      source.flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
            .groupBy((key, value) -> value)
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"))
            .toStream()
            .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
      

      If we again describe this augmented topology as System.out.println(topology.describe()), we will get the following:

      $ mvn clean package
      $ mvn exec:java -Dexec.mainClass=myapps.WordCount
      Sub-topologies:
        Sub-topology: 0
          Source: KSTREAM-SOURCE-0000000000(topics: streams-plaintext-input) --> KSTREAM-FLATMAPVALUES-0000000001
          Processor: KSTREAM-FLATMAPVALUES-0000000001(stores: []) --> KSTREAM-KEY-SELECT-0000000002 <-- KSTREAM-SOURCE-0000000000
          Processor: KSTREAM-KEY-SELECT-0000000002(stores: []) --> KSTREAM-FILTER-0000000005 <-- KSTREAM-FLATMAPVALUES-0000000001
          Processor: KSTREAM-FILTER-0000000005(stores: []) --> KSTREAM-SINK-0000000004 <-- KSTREAM-KEY-SELECT-0000000002
          Sink: KSTREAM-SINK-0000000004(topic: counts-store-repartition) <-- KSTREAM-FILTER-0000000005
        Sub-topology: 1
          Source: KSTREAM-SOURCE-0000000006(topics: counts-store-repartition) --> KSTREAM-AGGREGATE-0000000003
          Processor: KSTREAM-AGGREGATE-0000000003(stores: [counts-store]) --> KTABLE-TOSTREAM-0000000007 <-- KSTREAM-SOURCE-0000000006
          Processor: KTABLE-TOSTREAM-0000000007(stores: []) --> KSTREAM-SINK-0000000008 <-- KSTREAM-AGGREGATE-0000000003
          Sink: KSTREAM-SINK-0000000008(topic: streams-wordcount-output) <-- KTABLE-TOSTREAM-0000000007
      Global Stores:
        none
      

As we can see above, the topology now contains two disconnected sub-topologies. The first sub-topology’s sink node KSTREAM-SINK-0000000004 will write to a repartition topic counts-store-repartition, which will be read by the second sub-topology’s source node KSTREAM-SOURCE-0000000006. The repartition topic is used to “shuffle” the source stream by its aggregation key, which in this case is the value string. In addition, inside the first sub-topology a stateless KSTREAM-FILTER-0000000005 node is injected between the grouping KSTREAM-KEY-SELECT-0000000002 node and the sink node to filter out any intermediate record whose aggregate key is empty.

In the second sub-topology, the aggregation node KSTREAM-AGGREGATE-0000000003 is associated with a state store named counts-store (the name is specified by the user in the count operator). Upon receiving each record from its upstream source node, the aggregation processor will first query its associated counts-store store to get the current count for that key, increment it by one, and then write the new count back to the store. Each updated count for the key will also be piped downstream to the KTABLE-TOSTREAM-0000000007 node, which interprets this update stream as a record stream before further piping it to the sink node KSTREAM-SINK-0000000008 for writing back to Kafka.

      The complete code looks like this (assuming lambda expression is used):

      package myapps;
      
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.common.utils.Bytes;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.Topology;
      import org.apache.kafka.streams.kstream.KStream;
      import org.apache.kafka.streams.kstream.Materialized;
      import org.apache.kafka.streams.kstream.Produced;
      import org.apache.kafka.streams.state.KeyValueStore;
      
      import java.util.Arrays;
      import java.util.Locale;
      import java.util.Properties;
      import java.util.concurrent.CountDownLatch;
      
      public class WordCount {
      
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
              props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
              props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
              props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      
              final StreamsBuilder builder = new StreamsBuilder();
      
              KStream<String, String> source = builder.stream("streams-plaintext-input");
              source.flatMapValues(value -> Arrays.asList(value.toLowerCase(Locale.getDefault()).split("\\W+")))
                    .groupBy((key, value) -> value)
                    .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"))
                    .toStream()
                    .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));
      
              final Topology topology = builder.build();
              final KafkaStreams streams = new KafkaStreams(topology, props);
              final CountDownLatch latch = new CountDownLatch(1);
      
              // ... same as Pipe.java above
          }
      }
      


      9.4 - Core Concepts

      Core Concepts


      Kafka Streams is a client library for processing and analyzing data stored in Kafka. It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management and real-time querying of application state.

      Kafka Streams has a low barrier to entry : You can quickly write and run a small-scale proof-of-concept on a single machine; and you only need to run additional instances of your application on multiple machines to scale up to high-volume production workloads. Kafka Streams transparently handles the load balancing of multiple instances of the same application by leveraging Kafka’s parallelism model.

      Some highlights of Kafka Streams:

      • Designed as a simple and lightweight client library , which can be easily embedded in any Java application and integrated with any existing packaging, deployment and operational tools that users have for their streaming applications.
      • Has no external dependencies on systems other than Apache Kafka itself as the internal messaging layer; notably, it uses Kafka’s partitioning model to horizontally scale processing while maintaining strong ordering guarantees.
      • Supports fault-tolerant local state , which enables very fast and efficient stateful operations like windowed joins and aggregations.
      • Supports exactly-once processing semantics to guarantee that each record will be processed once and only once even when there is a failure on either Streams clients or Kafka brokers in the middle of processing.
      • Employs one-record-at-a-time processing to achieve millisecond processing latency, and supports event-time based windowing operations with out-of-order arrival of records.
      • Offers necessary stream processing primitives, along with a high-level Streams DSL and a low-level Processor API.

      We first summarize the key concepts of Kafka Streams.

      Stream Processing Topology

      • A stream is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a data record is defined as a key-value pair.
      • A stream processing application is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more processor topologies , where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).
      • A stream processor is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors.

      There are two special processors in the topology:

      • Source Processor : A source processor is a special type of stream processor that does not have any upstream processors. It produces an input stream to its topology from one or multiple Kafka topics by consuming records from these topics and forwarding them to its down-stream processors.
      • Sink Processor : A sink processor is a special type of stream processor that does not have down-stream processors. It sends any received records from its up-stream processors to a specified Kafka topic.

      Note that in normal processor nodes other remote systems can also be accessed while processing the current record. Therefore the processed results can either be streamed back into Kafka or written to an external system.

Kafka Streams offers two ways to define the stream processing topology: the Kafka Streams DSL provides the most common data transformation operations such as map, filter, join and aggregations out of the box; the lower-level Processor API allows developers to define and connect custom processors as well as to interact with state stores.
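For a rough feel of the Processor API, the sketch below defines a custom processor that upper-cases record values and wires it into a topology; the class name and the topic names input-topic and output-topic are placeholders, not part of the documented examples:

import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class UpperCaseProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(final ProcessorContext<String, String> context) {
        this.context = context;
    }

    @Override
    public void process(final Record<String, String> record) {
        // Forward a copy of the record with its value upper-cased to all downstream nodes (assumes non-null values).
        context.forward(record.withValue(record.value().toUpperCase()));
    }
}

// Wiring the processor between a source and a sink node (e.g. inside a main method):
Topology topology = new Topology();
topology.addSource("Source", "input-topic");
topology.addProcessor("UpperCase", UpperCaseProcessor::new, "Source");
topology.addSink("Sink", "output-topic", "UpperCase");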

      A processor topology is merely a logical abstraction for your stream processing code. At runtime, the logical topology is instantiated and replicated inside the application for parallel processing (see Stream Partitions and Tasks for details).

      Time

      A critical aspect in stream processing is the notion of time , and how it is modeled and integrated. For example, some operations such as windowing are defined based on time boundaries.

      Common notions of time in streams are:

      • Event time - The point in time when an event or data record occurred, i.e. was originally created “at the source”. Example: If the event is a geo-location change reported by a GPS sensor in a car, then the associated event-time would be the time when the GPS sensor captured the location change.
      • Processing time - The point in time when the event or data record happens to be processed by the stream processing application, i.e. when the record is being consumed. The processing time may be milliseconds, hours, or days etc. later than the original event time. Example: Imagine an analytics application that reads and processes the geo-location data reported from car sensors to present it to a fleet management dashboard. Here, processing-time in the analytics application might be milliseconds or seconds (e.g. for real-time pipelines based on Apache Kafka and Kafka Streams) or hours (e.g. for batch pipelines based on Apache Hadoop or Apache Spark) after event-time.
      • Ingestion time - The point in time when an event or data record is stored in a topic partition by a Kafka broker. The difference to event time is that this ingestion timestamp is generated when the record is appended to the target topic by the Kafka broker, not when the record is created “at the source”. The difference to processing time is that processing time is when the stream processing application processes the record. For example, if a record is never processed, there is no notion of processing time for it, but it still has an ingestion time.

      The choice between event-time and ingestion-time is actually done through the configuration of Kafka (not Kafka Streams): From Kafka 0.10.x onwards, timestamps are automatically embedded into Kafka messages. Depending on Kafka’s configuration these timestamps represent event-time or ingestion-time. The respective Kafka configuration setting can be specified on the broker level or per topic. The default timestamp extractor in Kafka Streams will retrieve these embedded timestamps as-is. Hence, the effective time semantics of your application depend on the effective Kafka configuration for these embedded timestamps.

Kafka Streams assigns a timestamp to every data record via the TimestampExtractor interface. These per-record timestamps describe the progress of a stream with regard to time and are leveraged by time-dependent operations such as window operations. As a result, this time will only advance when a new record arrives at the processor. We call this data-driven time the stream time of the application to differentiate it from the wall-clock time when this application is actually executing. Concrete implementations of the TimestampExtractor interface will then provide different semantics to the stream time definition. For example, retrieving or computing timestamps based on the actual contents of data records, such as an embedded timestamp field, provides event-time semantics, while returning the current wall-clock time yields processing-time semantics. Developers can thus enforce different notions of time depending on their business needs.
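A minimal sketch of a custom extractor is shown below; the OrderEvent value type and its eventTimeMs() accessor are hypothetical and stand in for whatever your record values look like:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class OrderTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(final ConsumerRecord<Object, Object> record, final long partitionTime) {
        final Object value = record.value();
        if (value instanceof OrderEvent) {                 // OrderEvent is a hypothetical value type
            return ((OrderEvent) value).eventTimeMs();     // event time embedded in the payload
        }
        return partitionTime;                              // fall back to the highest timestamp seen so far on this partition
    }
}

Such an extractor would be registered via the default.timestamp.extractor config (StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG).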

      Finally, whenever a Kafka Streams application writes records to Kafka, then it will also assign timestamps to these new records. The way the timestamps are assigned depends on the context:

      • When new output records are generated via processing some input record, for example, context.forward() triggered in the process() function call, output record timestamps are inherited from input record timestamps directly.
      • When new output records are generated via periodic functions such as Punctuator#punctuate(), the output record timestamp is defined as the current internal time (obtained through context.timestamp()) of the stream task.
      • For aggregations, the timestamp of a result update record will be the maximum timestamp of all input records contributing to the result.

      You can change the default behavior in the Processor API by assigning timestamps to output records explicitly when calling #forward().

      For aggregations and joins, timestamps are computed by using the following rules.

      • For joins (stream-stream, table-table) that have left and right input records, the timestamp of the output record is assigned max(left.ts, right.ts).
      • For stream-table joins, the output record is assigned the timestamp from the stream record.
      • For aggregations, Kafka Streams also computes the max timestamp over all records, per key, either globally (for non-windowed) or per-window.
      • For stateless operations, the input record timestamp is passed through. For flatMap and siblings that emit multiple records, all output records inherit the timestamp from the corresponding input record.

      Duality of Streams and Tables

      When implementing stream processing use cases in practice, you typically need both streams and also databases. An example use case that is very common in practice is an e-commerce application that enriches an incoming stream of customer transactions with the latest customer information from a database table. In other words, streams are everywhere, but databases are everywhere, too.

      Any stream processing technology must therefore provide first-class support for streams and tables. Kafka’s Streams API provides such functionality through its core abstractions for streams and tables, which we will talk about in a minute. Now, an interesting observation is that there is actually a close relationship between streams and tables , the so-called stream-table duality. And Kafka exploits this duality in many ways: for example, to make your applications elastic, to support fault-tolerant stateful processing, or to run interactive queries against your application’s latest processing results. And, beyond its internal usage, the Kafka Streams API also allows developers to exploit this duality in their own applications.

      Before we discuss concepts such as aggregations in Kafka Streams, we must first introduce tables in more detail, and talk about the aforementioned stream-table duality. Essentially, this duality means that a stream can be viewed as a table, and a table can be viewed as a stream. Kafka’s log compaction feature, for example, exploits this duality.

      A simple form of a table is a collection of key-value pairs, also called a map or associative array. Such a table may look as follows:

      The stream-table duality describes the close relationship between streams and tables.

      • Stream as Table : A stream can be considered a changelog of a table, where each data record in the stream captures a state change of the table. A stream is thus a table in disguise, and it can be easily turned into a “real” table by replaying the changelog from beginning to end to reconstruct the table. Similarly, in a more general analogy, aggregating data records in a stream - such as computing the total number of pageviews by user from a stream of pageview events - will return a table (here with the key and the value being the user and its corresponding pageview count, respectively).
      • Table as Stream : A table can be considered a snapshot, at a point in time, of the latest value for each key in a stream (a stream’s data records are key-value pairs). A table is thus a stream in disguise, and it can be easily turned into a “real” stream by iterating over each key-value entry in the table.

      Let’s illustrate this with an example. Imagine a table that tracks the total number of pageviews by user (first column of diagram below). Over time, whenever a new pageview event is processed, the state of the table is updated accordingly. Here, the state changes between different points in time - and different revisions of the table - can be represented as a changelog stream (second column).

      Interestingly, because of the stream-table duality, the same stream can be used to reconstruct the original table (third column):

      The same mechanism is used, for example, to replicate databases via change data capture (CDC) and, within Kafka Streams, to replicate its so-called state stores across machines for fault-tolerance. The stream-table duality is such an important concept that Kafka Streams models it explicitly via the KStream, KTable, and GlobalKTable interfaces.
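In the DSL this duality surfaces directly in how a topic is consumed; a small sketch, with placeholder topic names and default serdes assumed:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

final StreamsBuilder builder = new StreamsBuilder();

// Read a topic as a record stream: every record is an independent event.
KStream<String, Long> pageviewEvents = builder.stream("pageview-events");

// Read another topic as a changelog: for each key, later records overwrite earlier ones.
KTable<String, Long> pageviewCounts = builder.table("pageview-counts-by-user");

// And a table can always be turned back into a stream of its changes.
KStream<String, Long> changelog = pageviewCounts.toStream();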

      Aggregations

An aggregation operation takes one input stream or table, and yields a new table by combining multiple input records into a single output record. Examples of aggregations are computing counts or sums.

      In the Kafka Streams DSL, an input stream of an aggregation can be a KStream or a KTable, but the output stream will always be a KTable. This allows Kafka Streams to update an aggregate value upon the out-of-order arrival of further records after the value was produced and emitted. When such out-of-order arrival happens, the aggregating KStream or KTable emits a new aggregate value. Because the output is a KTable, the new value is considered to overwrite the old value with the same key in subsequent processing steps.
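As a small sketch of this behavior, the following counts pageview events per user; the topic name and the assumption that records are keyed by user id are made up for illustration:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

final StreamsBuilder builder = new StreamsBuilder();

// Input stream of pageview events, keyed by user id (placeholder topic, default serdes assumed).
KStream<String, String> pageviews = builder.stream("pageviews");

// The aggregation result is a KTable: each new event for a user overwrites that user's previous count.
KTable<String, Long> pageviewsPerUser = pageviews.groupByKey().count();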

      Windowing

      Windowing lets you control how to group records that have the same key for stateful operations such as aggregations or joins into so-called windows. Windows are tracked per record key.

      Windowing operations are available in the Kafka Streams DSL. When working with windows, you can specify a grace period for the window. This grace period controls how long Kafka Streams will wait for out-of-order data records for a given window. If a record arrives after the grace period of a window has passed, the record is discarded and will not be processed in that window. Specifically, a record is discarded if its timestamp dictates it belongs to a window, but the current stream time is greater than the end of the window plus the grace period.

      Out-of-order records are always possible in the real world and should be properly accounted for in your applications. It depends on the effective time semantics how out-of-order records are handled. In the case of processing-time, the semantics are “when the record is being processed”, which means that the notion of out-of-order records is not applicable as, by definition, no record can be out-of-order. Hence, out-of-order records can only be considered as such for event-time. In both cases, Kafka Streams is able to properly handle out-of-order records.
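A sketch of a windowed count with an explicit grace period is shown below; the topic name is a placeholder and default serdes are assumed:

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

final StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> events = builder.stream("events");

// Count events per key in 5-minute windows, waiting up to 1 minute for out-of-order records
// before a window is closed; records arriving later than that are dropped for that window.
KTable<Windowed<String>, Long> windowedCounts = events
        .groupByKey()
        .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1)))
        .count();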

      States

      Some stream processing applications don’t require state, which means the processing of a message is independent from the processing of all other messages. However, being able to maintain state opens up many possibilities for sophisticated stream processing applications: you can join input streams, or group and aggregate data records. Many such stateful operators are provided by the Kafka Streams DSL.

      Kafka Streams provides so-called state stores , which can be used by stream processing applications to store and query data. This is an important capability when implementing stateful operations. Every task in Kafka Streams embeds one or more state stores that can be accessed via APIs to store and query data required for processing. These state stores can either be a persistent key-value store, an in-memory hashmap, or another convenient data structure. Kafka Streams offers fault-tolerance and automatic recovery for local state stores.

      Kafka Streams allows direct read-only queries of the state stores by methods, threads, processes or applications external to the stream processing application that created the state stores. This is provided through a feature called Interactive Queries. All stores are named and Interactive Queries exposes only the read operations of the underlying implementation.

      Processing Guarantees

In stream processing, one of the most frequently asked questions is “does my stream processing system guarantee that each record is processed once and only once, even if some failures are encountered in the middle of processing?” Failing to guarantee exactly-once stream processing is a deal-breaker for many applications that cannot tolerate any data loss or data duplicates, and in that case a batch-oriented framework is usually used in addition to the stream processing pipeline, known as the Lambda Architecture. Prior to 0.11.0.0, Kafka only provided at-least-once delivery guarantees and hence any stream processing systems that leveraged it as the backend storage could not guarantee end-to-end exactly-once semantics. In fact, even for those stream processing systems that claim to support exactly-once processing, as long as they are reading from / writing to Kafka as the source / sink, their applications cannot actually guarantee that no duplicates will be generated throughout the pipeline.
Since the 0.11.0.0 release, Kafka has added support to allow its producers to send messages to different topic partitions in a transactional and idempotent manner, and Kafka Streams has hence added end-to-end exactly-once processing semantics by leveraging these features. More specifically, it guarantees that for any record read from the source Kafka topics, its processing results will be reflected exactly once in the output Kafka topic as well as in the state stores for stateful operations. Note that the key difference between Kafka Streams’ end-to-end exactly-once guarantee and other stream processing frameworks’ claimed guarantees is that Kafka Streams tightly integrates with the underlying Kafka storage system and ensures that commits on the input topic offsets, updates on the state stores, and writes to the output topics will be completed atomically instead of treating Kafka as an external system that may have side-effects. For more information on how this is done inside Kafka Streams, see KIP-129.
      As of the 2.6.0 release, Kafka Streams supports an improved implementation of exactly-once processing, named “exactly-once v2”, which requires broker version 2.5.0 or newer. This implementation is more efficient, because it reduces client and broker resource utilization, like client threads and used network connections, and it enables higher throughput and improved scalability. As of the 3.0.0 release, the first version of exactly-once has been deprecated. Users are encouraged to use exactly-once v2 for exactly-once processing from now on, and prepare by upgrading their brokers if necessary. For more information on how this is done inside the brokers and Kafka Streams, see KIP-447.
To enable exactly-once semantics when running Kafka Streams applications, set the processing.guarantee config value (default value is at_least_once) to StreamsConfig.EXACTLY_ONCE_V2 (requires broker version 2.5 or newer). For more information, see the Kafka Streams Configs section.
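In code, this is a single configuration entry; a minimal sketch, assuming the rest of the Streams configuration is set up as in the tutorial:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// ... bootstrap servers, application id, serdes, etc. ...
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);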

      Out-of-Order Handling

      Besides the guarantee that each record will be processed exactly-once, another issue that many stream processing applications will face is how to handle out-of-order data that may impact their business logic. In Kafka Streams, there are two causes that could potentially result in out-of-order data arrivals with respect to their timestamps:

      • Within a topic-partition, a record’s timestamp may not be monotonically increasing along with its offset. Since Kafka Streams will always try to process records within a topic-partition in offset order, it can cause records with larger timestamps (but smaller offsets) to be processed earlier than records with smaller timestamps (but larger offsets) in the same topic-partition.
      • Within a stream task that may be processing multiple topic-partitions, if users configure the application to not wait for all partitions to contain some buffered data and pick from the partition with the smallest timestamp to process the next record, then later on when some records are fetched for other topic-partitions, their timestamps may be smaller than those processed records fetched from another topic-partition.

For stateless operations, out-of-order data will not impact processing logic since only one record is considered at a time, without looking into the history of past processed records; for stateful operations such as aggregations and joins, however, out-of-order data could cause the processing logic to be incorrect. If users want to handle such out-of-order data, generally they need to allow their applications to wait longer while bookkeeping their state during the wait time, i.e. making trade-off decisions between latency, cost, and correctness. In Kafka Streams specifically, users can configure their window operators for windowed aggregations to achieve such trade-offs (details can be found in the Developer Guide). As for joins, users may use versioned state stores (see the sketch after the list below) to address concerns with out-of-order data, but out-of-order data will not be handled by default:

      • For Stream-Stream joins, all three types (inner, outer, left) handle out-of-order records correctly.
      • For Stream-Table joins, if not using versioned stores, then out-of-order records are not handled (i.e., Streams applications don’t check for out-of-order records and just process all records in offset order), and hence it may produce unpredictable results. With versioned stores, stream-side out-of-order data will be properly handled by performing a timestamp-based lookup in the table. Table-side out-of-order data is still not handled.
      • For Table-Table joins, if not using versioned stores, then out-of-order records are not handled (i.e., Streams applications don’t check for out-of-order records and just process all records in offset order). However, the join result is a changelog stream and hence will be eventually consistent. With versioned stores, table-table join semantics change from offset-based semantics to timestamp-based semantics and out-of-order records are handled accordingly.
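A minimal sketch of materializing a table with a versioned store follows; the topic name, store name, and history retention are assumptions for illustration:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

final StreamsBuilder builder = new StreamsBuilder();

// Keep 10 minutes of value history per key, so stream-side out-of-order records can be joined
// against the table value that was current at their timestamp.
KTable<String, String> customers = builder.table(
        "customers",
        Consumed.with(Serdes.String(), Serdes.String()),
        Materialized.as(Stores.persistentVersionedKeyValueStore("customers-versioned", Duration.ofMinutes(10))));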


      9.5 - Architecture

      Architecture


      Kafka Streams simplifies application development by building on the Kafka producer and consumer libraries and leveraging the native capabilities of Kafka to offer data parallelism, distributed coordination, fault tolerance, and operational simplicity. In this section, we describe how Kafka Streams works underneath the covers.

      The picture below shows the anatomy of an application that uses the Kafka Streams library. Let’s walk through some details.

      Stream Partitions and Tasks

      The messaging layer of Kafka partitions data for storing and transporting it. Kafka Streams partitions data for processing it. In both cases, this partitioning is what enables data locality, elasticity, scalability, high performance, and fault tolerance. Kafka Streams uses the concepts of partitions and tasks as logical units of its parallelism model based on Kafka topic partitions. There are close links between Kafka Streams and Kafka in the context of parallelism:

      • Each stream partition is a totally ordered sequence of data records and maps to a Kafka topic partition.
      • A data record in the stream maps to a Kafka message from that topic.
      • The keys of data records determine the partitioning of data in both Kafka and Kafka Streams, i.e., how data is routed to specific partitions within topics.

An application’s processor topology is scaled by breaking it into multiple tasks. More specifically, Kafka Streams creates a fixed number of tasks based on the input stream partitions for the application, with each task assigned a list of partitions from the input streams (i.e., Kafka topics). The assignment of partitions to tasks never changes so that each task is a fixed unit of parallelism of the application. Tasks can then instantiate their own processor topology based on the assigned partitions; they also maintain a buffer for each of their assigned partitions and process messages one at a time from these record buffers. As a result, stream tasks can be processed independently and in parallel without manual intervention.

Slightly simplified, the maximum parallelism at which your application may run is bounded by the maximum number of stream tasks, which itself is determined by the maximum number of partitions of the input topic(s) the application is reading from. For example, if your input topic has 5 partitions, then you can run up to 5 application instances. These instances will collaboratively process the topic’s data. If you run a larger number of app instances than partitions of the input topic, the “excess” app instances will launch but remain idle; however, if one of the busy instances goes down, one of the idle instances will resume the former’s work.

      It is important to understand that Kafka Streams is not a resource manager, but a library that “runs” anywhere its stream processing application runs. Multiple instances of the application are executed either on the same machine, or spread across multiple machines and tasks can be distributed automatically by the library to those running application instances. The assignment of partitions to tasks never changes; if an application instance fails, all its assigned tasks will be automatically restarted on other instances and continue to consume from the same stream partitions.

      NOTE: Topic partitions are assigned to tasks, and tasks are assigned to all threads over all instances, in a best-effort attempt to trade off load-balancing and stickiness of stateful tasks. For this assignment, Kafka Streams uses the StreamsPartitionAssignor class and doesn’t let you change to a different assignor. If you try to use a different assignor, Kafka Streams ignores it.

      The following diagram shows two tasks each assigned with one partition of the input streams.

      Threading Model

      Kafka Streams allows the user to configure the number of threads that the library can use to parallelize processing within an application instance. Each thread can execute one or more tasks with their processor topologies independently. For example, the following diagram shows one stream thread running two stream tasks.

      Starting more stream threads or more instances of the application merely amounts to replicating the topology and having it process a different subset of Kafka partitions, effectively parallelizing processing. It is worth noting that there is no shared state amongst the threads, so no inter-thread coordination is necessary. This makes it very simple to run topologies in parallel across the application instances and threads. The assignment of Kafka topic partitions amongst the various stream threads is transparently handled by Kafka Streams leveraging Kafka’s coordination functionality.

      As we described above, scaling your stream processing application with Kafka Streams is easy: you merely need to start additional instances of your application, and Kafka Streams takes care of distributing partitions amongst tasks that run in the application instances. You can start as many threads of the application as there are input Kafka topic partitions so that, across all running instances of an application, every thread (or rather, the tasks it runs) has at least one input partition to process.

As of Kafka 2.8, you can scale stream threads in much the same way you can scale your Kafka Streams clients. Simply add or remove stream threads and Kafka Streams will take care of redistributing the partitions. You may also add threads to replace stream threads that have died, removing the need to restart clients to recover the number of threads running.
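A minimal sketch of both mechanisms, assuming a topology and configuration built as in the tutorial:

import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
// ... bootstrap servers, application id, serdes, etc. ...
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2); // start each instance with two stream threads

KafkaStreams streams = new KafkaStreams(topology, props); // `topology` built as in the tutorial
streams.start();

// Threads can also be added or removed at runtime; partitions are redistributed automatically.
Optional<String> addedThread = streams.addStreamThread();
Optional<String> removedThread = streams.removeStreamThread();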

      Local State Stores

      Kafka Streams provides so-called state stores , which can be used by stream processing applications to store and query data, which is an important capability when implementing stateful operations. The Kafka Streams DSL, for example, automatically creates and manages such state stores when you are calling stateful operators such as join() or aggregate(), or when you are windowing a stream.

      Every stream task in a Kafka Streams application may embed one or more local state stores that can be accessed via APIs to store and query data required for processing. Kafka Streams offers fault-tolerance and automatic recovery for such local state stores.

      The following diagram shows two stream tasks with their dedicated local state stores.

      Fault Tolerance

      Kafka Streams builds on fault-tolerance capabilities integrated natively within Kafka. Kafka partitions are highly available and replicated; so when stream data is persisted to Kafka it is available even if the application fails and needs to re-process it. Tasks in Kafka Streams leverage the fault-tolerance capability offered by the Kafka consumer client to handle failures. If a task runs on a machine that fails, Kafka Streams automatically restarts the task in one of the remaining running instances of the application.

      In addition, Kafka Streams makes sure that the local state stores are robust to failures, too. For each state store, it maintains a replicated changelog Kafka topic in which it tracks any state updates. These changelog topics are partitioned as well so that each local state store instance, and hence the task accessing the store, has its own dedicated changelog topic partition. Log compaction is enabled on the changelog topics so that old data can be purged safely to prevent the topics from growing indefinitely. If tasks run on a machine that fails and are restarted on another machine, Kafka Streams guarantees to restore their associated state stores to the content before the failure by replaying the corresponding changelog topics prior to resuming the processing on the newly started tasks. As a result, failure handling is completely transparent to the end user.

      Note that the cost of task (re)initialization typically depends primarily on the time for restoring the state by replaying the state stores’ associated changelog topics. To minimize this restoration time, users can configure their applications to have standby replicas of local states (i.e. fully replicated copies of the state). When a task migration happens, Kafka Streams will assign a task to an application instance where such a standby replica already exists in order to minimize the task (re)initialization cost. See num.standby.replicas in the Kafka Streams Configs section. Starting in 2.6, Kafka Streams will guarantee that a task is only ever assigned to an instance with a fully caught-up local copy of the state, if such an instance exists. Standby tasks will increase the likelihood that a caught-up instance exists in the case of a failure.

      You can also configure standby replicas with rack awareness. When configured, Kafka Streams will attempt to distribute a standby task on a different “rack” than the active one, thus having a faster recovery time when the rack of the active tasks fails. See rack.aware.assignment.tags in the Kafka Streams Developer Guide section.

      There is also a client config client.rack which can set the rack for a Kafka consumer. If brokers also have their rack set via broker.rack, then rack-aware task assignment can be enabled via rack.aware.assignment.strategy (cf. Kafka Streams Developer Guide) to compute a task assignment that can reduce cross-rack traffic by trying to assign tasks to clients in the same rack. Note that client.rack can also be used to distribute standby tasks to different racks from the active ones, providing functionality similar to rack.aware.assignment.tags. Currently, rack.aware.assignment.tags takes precedence for distributing standby tasks, which means that if both configs are present, rack.aware.assignment.tags will be used to place standby tasks on different racks from the active ones, because it can be configured with more tag keys.
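
      A configuration sketch combining these options (the tag name "zone" and all values are illustrative; see the Kafka Streams Developer Guide for the full semantics):

            import java.util.Properties;

            Properties props = new Properties();
            // Keep one warm standby replica of each local state store.
            props.put("num.standby.replicas", 1);
            // Distribute standby tasks to clients with a different value of the "zone" tag than the active task.
            props.put("rack.aware.assignment.tags", "zone");
            props.put("client.tag.zone", "zone-a");                     // this instance's tag value (illustrative)
            // Rack of the embedded clients; used together with broker.rack for rack-aware task assignment.
            props.put("client.rack", "zone-a");
            props.put("rack.aware.assignment.strategy", "min_traffic"); // illustrative strategy value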


      9.6 - Upgrade Guide

      Upgrade Guide and API Changes


      Upgrading from any older version to 4.0.0 is possible: if upgrading from 3.4 or below, you will need to do two rolling bounces, where during the first rolling bounce phase you set the config upgrade.from="older version" (possible values are "0.10.0" - "3.4") and during the second you remove it. This is required to safely handle three changes. The first is the introduction of the new cooperative rebalancing protocol of the embedded consumer. The second is a change in the foreign-key join serialization format. Note that you will remain on the old eager rebalancing protocol if you skip or delay the second rolling bounce, but you can safely switch over to cooperative at any time once the entire group is on 2.4+ by removing the config value and bouncing. For more details please refer to KIP-429. The third is a change in the serialization format for an internal repartition topic. For more details, please refer to KIP-904:

      • prepare your application instances for a rolling bounce and make sure that config upgrade.from is set to the version from which you are upgrading (see the configuration sketch after this list)
      • bounce each instance of your application once
      • prepare your newly deployed 4.0.0 application instances for a second round of rolling bounces; make sure to remove the value for config upgrade.from
      • bounce each instance of your application once more to complete the upgrade
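
      As a configuration sketch for the first rolling bounce (the assumed old version 3.3 is illustrative), the config would look along these lines and is removed again for the second bounce:

            import java.util.Properties;
            import org.apache.kafka.streams.StreamsConfig;

            Properties props = new Properties();
            // First rolling bounce only: tell the 4.0.0 instances which version they are upgrading from.
            props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "3.3"); // illustrative old version
            // ... remaining application configs ...
            // Second rolling bounce: simply do not set upgrade.from at all.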

      As an alternative, an offline upgrade is also possible. Upgrading from any version as old as 0.10.0.x to 4.0.0 in offline mode requires the following steps:

      • stop all old (e.g., 0.10.0.x) application instances
      • update your code and swap old code and jar file with new code and new jar file
      • restart all new (4.0.0) application instances

      Note: The cooperative rebalancing protocol has been the default since 2.4, but we have continued to support the eager rebalancing protocol to provide users an upgrade path. This support will be dropped in a future release, so any users still on the eager protocol should prepare to finish upgrading their applications to the cooperative protocol in version 3.1. This only affects users who are still on a version older than 2.4, and users who have upgraded already but have not yet removed the upgrade.from config that they set when upgrading from a version below 2.4. Users fitting into the latter case will simply need to unset this config when upgrading beyond 3.1, while users in the former case will need to follow a slightly different upgrade path if they attempt to upgrade from 2.3 or below to a version above 3.1. Those applications will need to go through a bridge release, by first upgrading to a version between 2.4 - 3.1 and setting the upgrade.from config, then removing that config and upgrading to the final version above 3.1. See KAFKA-8575 for more details.

      For a table that shows Streams API compatibility with Kafka broker versions, see Broker Compatibility.

      Notable compatibility changes in past releases

      Starting in version 4.0.0, Kafka Streams will only be compatible when running against brokers on version 2.1 or higher. Additionally, exactly-once semantics (EOS) will require brokers to be at least version 2.5.

      Downgrading from 3.5.x or a newer version to 3.4.x or an older version needs special attention: Since the 3.5.0 release, Kafka Streams uses a new serialization format for repartition topics. This means that older versions of Kafka Streams would not be able to recognize the bytes written by newer versions, and hence it is harder to downgrade Kafka Streams from version 3.5.0 or newer to older versions in-flight. For more details, please refer to KIP-904. For a downgrade, first set the config upgrade.from to the version you are downgrading to. This disables writing of the new serialization format in your application. It’s important to wait in this state long enough to make sure that the application has finished processing any “in-flight” messages written into the repartition topics in the new serialization format. Afterwards, you can downgrade your application to a pre-3.5.x version.

      Downgrading from 3.0.x or a newer version to 2.8.x or an older version needs special attention: Since the 3.0.0 release, Kafka Streams uses a newer RocksDB version whose on-disk format changed. This means that the older RocksDB version would not be able to recognize the bytes written by the newer RocksDB version, and hence it is harder to downgrade Kafka Streams from version 3.0.0 or newer to older versions in-flight. Users need to wipe out the local RocksDB state stores written by the newer Kafka Streams version before swapping in the older Kafka Streams bytecode, which will then restore the state stores with the old on-disk format from the changelogs.

      Kafka Streams does not support running multiple instances of the same application as different processes on the same physical state directory. Starting in 2.8.0 (as well as 2.7.1 and 2.6.2), this restriction will be enforced. If you wish to run more than one instance of Kafka Streams, you must configure them with different values for state.dir.

      Starting in Kafka Streams 2.6.x, a new processing mode is available, named EOS version 2. This can be configured by setting "processing.guarantee" to "exactly_once_v2" for application versions 3.0+, or setting it to "exactly_once_beta" for versions between 2.6 and 2.8. To use this new feature, your brokers must be on version 2.5.x or newer. If you want to upgrade your EOS application from an older version and enable this feature in version 3.0+, you first need to upgrade your application to version 3.0.x, staying on "exactly_once", and then do a second round of rolling bounces to switch to "exactly_once_v2". If you are upgrading an EOS application from an older (pre-2.6) version to a version between 2.6 and 2.8, follow these same steps but with the config "exactly_once_beta" instead. No special steps are required to upgrade an application using "exactly_once_beta" from version 2.6+ to 3.0 or higher: you can just change the config from "exactly_once_beta" to "exactly_once_v2" during the rolling upgrade. For a downgrade, do the reverse: first switch the config from "exactly_once_v2" to "exactly_once" to disable the feature in your 2.6.x application. Afterward, you can downgrade your application to a pre-2.6.x version.
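
      A minimal sketch of enabling EOS version 2 on a 3.0+ application (brokers must be on version 2.5 or newer):

            import java.util.Properties;
            import org.apache.kafka.streams.StreamsConfig;

            Properties props = new Properties();
            // Equivalent to setting "processing.guarantee" to "exactly_once_v2".
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);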

      Since 2.6.0 release, Kafka Streams depends on a RocksDB version that requires MacOS 10.14 or higher.

      To run a Kafka Streams application version 2.2.1, 2.3.0, or higher a broker version 0.11.0 or higher is required and the on-disk message format must be 0.11 or higher. Brokers must be on version 0.10.1 or higher to run a Kafka Streams application version 0.10.1 to 2.2.0. Additionally, on-disk message format must be 0.10 or higher to run a Kafka Streams application version 1.0 to 2.2.0. For Kafka Streams 0.10.0, broker version 0.10.0 or higher is required.

      In the deprecated KStreamBuilder class, when a KTable is created from a source topic via KStreamBuilder.table(), its materialized state store reuses the source topic as its changelog topic for restoring, and disables logging to avoid appending new updates to the source topic. In the StreamsBuilder class introduced in 1.0, this behavior was changed accidentally: we still reuse the source topic as the changelog topic for restoring, but also create a separate changelog topic to which the update records from the source topic are appended. In the 2.0 release, we fixed this issue and users can now choose whether or not to reuse the source topic based on StreamsConfig#TOPOLOGY_OPTIMIZATION_CONFIG: if you are upgrading from the old KStreamBuilder class and hence need to change your code to use the new StreamsBuilder, you should set this config value to StreamsConfig#OPTIMIZE to continue reusing the source topic; if you are upgrading from 1.0 or 1.1, where you are already using StreamsBuilder and hence have already created a separate changelog topic, you should set this config value to StreamsConfig#NO_OPTIMIZATION when upgrading to 4.0.0 in order to use that changelog topic for restoring the state store. More details about the new config StreamsConfig#TOPOLOGY_OPTIMIZATION_CONFIG can be found in KIP-295.

      Streams API changes in 4.0.0

      In this release, eos-v1 (Exactly Once Semantics version 1) is no longer supported. To use eos-v2, brokers must be running version 2.5 or later. Additionally, all deprecated methods, classes, APIs, and config parameters up to and including AK 3.5 release have been removed. A few important ones are listed below. The full list can be found in KAFKA-12822.

      In this release, the ClientInstanceIds instance stores the global consumer UUID for the KIP-714 id under a key consisting of the global stream-thread name with "-global-consumer" appended, where before the key was only the global stream-thread name.

      In this release, the two configs default.deserialization.exception.handler and default.production.exception.handler are deprecated, as they don’t have any overwrites, as described in KIP-1056. You can refer to the new configs via deserialization.exception.handler and production.exception.handler.

      In the previous release, a new version of the Processor API was introduced and the old Processor API was incrementally replaced and deprecated. KIP-1070 follows this path by deprecating MockProcessorContext, Transformer, TransformerSupplier, ValueTransformer, and ValueTransformerSupplier.

      Previously, the ProductionExceptionHandler was not invoked on a (retriable) TimeoutException. With Kafka Streams 4.0, the handler is called, and the default handler returns RETRY so that existing behavior does not change. However, a custom handler can now decide to break the infinite retry loop by returning either CONTINUE or FAIL (KIP-1065).

      In this release, Kafka Streams metrics can be collected broker-side via the KIP-714 broker plugin. For more detailed information, please refer to KIP-1076.

      KIP-1077 deprecates the ForeachProcessor class. This change is aimed at improving the organization and clarity of the Kafka Streams API by ensuring that internal classes are not exposed in public packages.

      KIP-1078 deprecates the leaking getter methods in the Joined helper class. These methods are deprecated without a replacement for future removal, as they don’t add any value to Kafka Streams users.

      To ensure better encapsulation and organization of configuration documentation within Kafka Streams, KIP-1085 deprecates certain public doc description variables that are only used within the StreamsConfig or TopologyConfig classes. Additionally, the unused variable DUMMY_THREAD_INDEX is also deprecated.

      Due to the removal of the already deprecated #through method in Kafka Streams, the intermediateTopicsOption of the StreamsResetter tool in Apache Kafka is no longer needed and is therefore deprecated (KIP-1087).

      Since string metrics cannot be collected on the broker side (KIP-714), KIP-1091 introduces numeric counterparts to allow proper broker-side metric collection for Kafka Streams applications. These metrics will be available at the INFO recording level, and a thread-level metric with a String value will be available for users leveraging Java Management Extensions (JMX).

      KIP-1104 allows foreign key extraction from both the key and the value in KTable joins. Previously, foreign-key joins in KTables only allowed extraction from the value, which led to data duplication and potential inconsistencies. To address this, KIP-1104 introduces a new method in the Java and Scala APIs that accepts a BiFunction for foreign key extraction, enabling more intuitive and efficient joins. The existing methods are deprecated but not removed, ensuring backward compatibility. This change reduces storage overhead and improves API usability.

      With the introduction of KIP-1106, the existing Topology.AutoOffsetReset is deprecated and replaced with a new class org.apache.kafka.streams.AutoOffsetReset to capture the reset strategies. New methods have been added to the org.apache.kafka.streams.Topology and org.apache.kafka.streams.kstream.Consumed classes to support the new reset strategy. These changes aim to provide more flexibility and efficiency in managing offsets, especially in scenarios involving long-term storage and infinite retention.

      You can now configure your topology with a ProcessorWrapper, which allows you to access and optionally wrap/replace any processor in the topology by injecting an alternative ProcessorSupplier in its place. This can be used to peek at records and access the processor context even for DSL operators, for example to implement a logging or tracing framework, or to aid in testing or debugging scenarios. You must implement the ProcessorWrapper interface and then pass the class or class name into the configs via the new StreamsConfig#PROCESSOR_WRAPPER_CLASS_CONFIG config. NOTE: this config is applied during the topology building phase, and therefore will not take effect unless the config is passed in when creating the StreamsBuilder (DSL) or Topology (PAPI) objects. You MUST use the StreamsBuilder/Topology constructor overload that accepts a TopologyConfig parameter for the StreamsConfig#PROCESSOR_WRAPPER_CLASS_CONFIG to be picked up. See KIP-1112 for more details.
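
      A wiring sketch (com.example.MyProcessorWrapper is a hypothetical ProcessorWrapper implementation; application id and bootstrap servers are placeholders) that passes the config through TopologyConfig so it is visible while the topology is built:

            import java.util.Properties;
            import org.apache.kafka.streams.StreamsBuilder;
            import org.apache.kafka.streams.StreamsConfig;
            import org.apache.kafka.streams.TopologyConfig;

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");        // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
            props.put(StreamsConfig.PROCESSOR_WRAPPER_CLASS_CONFIG, "com.example.MyProcessorWrapper"); // hypothetical wrapper
            // The wrapper is applied while the topology is built, so pass the config via the
            // StreamsBuilder(TopologyConfig) overload instead of only handing props to KafkaStreams.
            StreamsBuilder builder = new StreamsBuilder(new TopologyConfig(new StreamsConfig(props)));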

      Upgraded RocksDB dependency to version 9.7.3 (from 7.9.2). This upgrade incorporates various improvements and optimizations within RocksDB. However, it also introduces some API changes. The org.rocksdb.AccessHint class, along with its associated methods, has been removed. Several methods related to compressed block cache configuration in the BlockBasedTableConfig class have been removed, including blockCacheCompressedNumShardBits, blockCacheCompressedSize, and their corresponding setters. These functionalities are now consolidated under the cache option, and developers should configure their compressed block cache using the setCache method instead. The NO_FILE_CLOSES field has been removed from the org.rocksdb.TickerType enum; as a result, the number-open-files metric does not work as expected and returns a constant -1 from now on, until it is officially removed. The org.rocksdb.Options.setLogger() method now accepts a LoggerInterface as a parameter instead of the previous Logger. Some data types used in RocksDB’s Java API have been modified. These changes, along with the removed class, field, and new methods, are primarily relevant to users implementing custom RocksDB configurations and are expected to be largely transparent to most Kafka Streams users. However, those employing advanced RocksDB customizations within their Streams applications, particularly through the rocksdb.config.setter, are advised to consult the detailed RocksDB 9.7.3 changelog to ensure a smooth transition and adapt their configurations as needed. Specifically, users leveraging the removed AccessHint class, the removed methods from the BlockBasedTableConfig class, the NO_FILE_CLOSES field from TickerType, or relying on the previous signature of setLogger() will need to update their implementations.

      Streams API changes in 3.9.0

      The introduction of KIP-1033 enables you to provide a processing exception handler to manage exceptions during the processing of a record rather than throwing the exception all the way out of your streams application. You can provide the configs via the StreamsConfig as StreamsConfig#PROCESSING_EXCEPTION_HANDLER_CLASS_CONFIG. The specified handler must implement the org.apache.kafka.streams.errors.ProcessingExceptionHandler interface.
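
      A registration sketch (com.example.MyProcessingExceptionHandler is a hypothetical class implementing the ProcessingExceptionHandler interface):

            import java.util.Properties;
            import org.apache.kafka.streams.StreamsConfig;

            Properties props = new Properties();
            // Hypothetical handler; it must implement org.apache.kafka.streams.errors.ProcessingExceptionHandler.
            props.put(StreamsConfig.PROCESSING_EXCEPTION_HANDLER_CLASS_CONFIG, "com.example.MyProcessingExceptionHandler");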

      Kafka Streams now allows customizing the logging interval of the stream-thread runtime summary via the newly added config log.summary.interval.ms. By default, the summary is logged every 2 minutes. More details can be found in KIP-1049.

      Streams API changes in 3.8.0

      Kafka Streams now supports customizable task assignment strategies via the task.assignor.class configuration. The configuration can be set to the fully qualified class name of a custom task assignor implementation that has to extend the new org.apache.kafka.streams.processor.assignment.TaskAssignor interface. The new configuration also allows users to bring back the behavior of the old task assignor StickyTaskAssignor that was used before the introduction of the HighAvailabilityTaskAssignor. If no custom task assignor is configured, the default task assignor HighAvailabilityTaskAssignor is used. If you were using the internal.task.assignor.class config, you should switch to using the new task.assignor.class config instead, as the internal config will be removed in a future release. If you were previously plugging in the StickyTaskAssignor via the legacy internal.task.assignor.class config, you will need to make sure that you are importing the new org.apache.kafka.streams.processor.assignment.StickyTaskAssignor when you switch over to the new task.assignor.class config, which is a version of the StickyTaskAssignor that implements the new public TaskAssignor interface. For more details, see the public interface section of KIP-924.

      The Processor API now supports so-called read-only state stores, added via KIP-813. These stores don’t have a dedicated changelog topic, but use their source topic for fault-tolerance, similar to KTables with source-topic optimization enabled.

      To improve detection of leaked state store iterators, we added new store-level metrics to track the number and age of open iterators. The new metrics are num-open-iterators, iterator-duration-avg, iterator-duration-max and oldest-iterator-open-since-ms. These metrics are available for all state stores, including RocksDB, in-memory, and custom stores. More details can be found in KIP-989.

      Streams API changes in 3.7.0

      We added a new method to KafkaStreams, namely KafkaStreams#setStandbyUpdateListener() in KIP-988, in which users can provide their customized implementation of the newly added StandbyUpdateListener interface to continuously monitor changes to standby tasks.

      IQv2 supports RangeQuery, which allows specifying unbounded, bounded, or half-open key ranges, which return data in unordered (byte[]-lexicographical) order (per partition). KIP-985 extends this functionality by adding .withDescendingKeys() and .withAscendingKeys() to allow users to receive data in descending or ascending order.

      KIP-992 adds two new query types, namely TimestampedKeyQuery and TimestampedRangeQuery. Both should be used to query a timestamped key-value store, to retrieve a ValueAndTimestamp result. The existing KeyQuery and RangeQuery are changed to always return the value only for timestamped key-value stores.

      IQv2 adds support for MultiVersionedKeyQuery (introduced in KIP-968) that allows retrieving a set of records from a versioned state store for a given key and a specified time range. Users have to use fromTime(Instant) and/or toTime(Instant) to specify a half or a complete time range.

      IQv2 adds support for VersionedKeyQuery (introduced in KIP-960) that allows retrieving a single record from a versioned state store based on its key and timestamp. Users have to use the asOf(Instant) method to define a query that returns the record’s version for the specified timestamp. To be more precise, the key query returns the record with the greatest timestamp <= Instant.

      The non-null key requirements for Kafka Streams join operators were relaxed as part of KIP-962. The behavior of the following operators changed.

      • left join KStream-KStream: left records with a null key are no longer dropped; instead, the ValueJoiner is called with null for the right value.
      • outer join KStream-KStream: left/right records with a null key are no longer dropped; instead, the ValueJoiner is called with null for the right/left value.
      • left-foreign-key join KTable-KTable: left records for which the ForeignKeyExtractor returns a null foreign key are no longer dropped; instead, the ValueJoiner is called with null for the right value.
      • left join KStream-KTable: left records with a null key are no longer dropped; instead, the ValueJoiner is called with null for the right value.
      • left join KStream-GlobalTable: records for which the KeyValueMapper returns null are no longer dropped; instead, the ValueJoiner is called with null for the right value.

      Stream-DSL users who want to keep the current behavior can prepend a .filter() operator to the aforementioned operators and filter accordingly. The following snippets illustrate how to keep the old behavior.

            //left join KStream-KStream
            leftStream
                .filter((key, value) -> key != null)
                .leftJoin(rightStream, (leftValue, rightValue) -> join(leftValue, rightValue), windows);

            //outer join KStream-KStream
            rightStream = rightStream.filter((key, value) -> key != null);
            leftStream
                .filter((key, value) -> key != null)
                .outerJoin(rightStream, (leftValue, rightValue) -> join(leftValue, rightValue), windows);

            //left-foreign-key join KTable-KTable
            Function<String, String> foreignKeyExtractor = leftValue -> ...
            leftTable
                .filter((key, value) -> foreignKeyExtractor.apply(value) != null)
                .leftJoin(rightTable, foreignKeyExtractor, (leftValue, rightValue) -> join(leftValue, rightValue), Named.as("left-foreign-key-table-join"));

            //left join KStream-KTable
            leftStream
                .filter((key, value) -> key != null)
                .leftJoin(kTable, (k, leftValue, rightValue) -> join(leftValue, rightValue));

            //left join KStream-GlobalTable
            KeyValueMapper<String, String, String> keyValueMapper = (key, value) -> ...;
            leftStream
                .filter((key, value) -> keyValueMapper.apply(key, value) != null)
                .leftJoin(globalTable, keyValueMapper, (leftValue, rightValue) -> join(leftValue, rightValue));
      

      The default.dsl.store config was deprecated in favor of the new dsl.store.suppliers.class config to allow for custom state store implementations to be configured as the default. If you currently specify default.dsl.store=ROCKS_DB or default.dsl.store=IN_MEMORY, replace those configurations with dsl.store.suppliers.class=BuiltInDslStoreSuppliers.RocksDBDslStoreSuppliers.class and dsl.store.suppliers.class=BuiltInDslStoreSuppliers.InMemoryDslStoreSuppliers.class, respectively.

      A new configuration option balance_subtopology for rack.aware.assignment.strategy was introduced in 3.7 release. For more information, including how it can be enabled and further configured, see the Kafka Streams Developer Guide.

      Streams API changes in 3.6.0

      Rack aware task assignment was introduced in KIP-925. Rack aware task assignment can be enabled for StickyTaskAssignor or HighAvailabilityTaskAssignor to compute task assignments which can minimize cross rack traffic under certain conditions. For more information, including how it can be enabled and further configured, see the Kafka Streams Developer Guide.

      IQv2 supports a RangeQuery that allows specifying unbounded, bounded, or half-open key ranges. Users have to use withUpperBound(K), withLowerBound(K), or withNoBounds() to specify half-open or unbounded ranges, but cannot use withRange(K lower, K upper) for the same. KIP-941 closes this gap by allowing null to be passed in as upper and lower bound (with the semantics “no bound”) to simplify the usage of the RangeQuery class.

      KStream-to-KTable joins now have an option for adding a grace period. The grace period is enabled on the Joined object using the withGracePeriod() method. This change was introduced in KIP-923. To use the grace period option in the Stream-Table join, the table must be versioned. For more information, including how it can be enabled and further configured, see the Kafka Streams Developer Guide.

      Streams API changes in 3.5.0

      A new state store type, versioned key-value stores, was introduced in KIP-889 and KIP-914. Rather than storing a single record version (value and timestamp) per key, versioned state stores may store multiple record versions per key. This allows versioned state stores to support timestamped retrieval operations to return the latest record (per key) as of a specified timestamp. For more information, including how to upgrade from a non-versioned key-value store to a versioned store in an existing application, see the Developer Guide. Versioned key-value stores are opt-in only; existing applications will not be affected upon upgrading to 3.5 without explicit code changes.

      In addition to KIP-889, KIP-914 updates the DSL processing semantics if a user opts in to using the new versioned key-value stores. Using the new versioned key-value stores, DSL processing is able to handle out-of-order data better: for example, late records may be dropped and stream-table joins do a timestamp-based lookup into the table. Table aggregations and primary/foreign-key table-table joins are also improved. Note: versioned key-value stores are not supported for global KTables and don’t work with suppress().

      KIP-904 improves the implementation of KTable aggregations. In general, an input KTable update triggers a result refinement for two rows; however, prior to KIP-904, if both refinements happened to the same result row, two independent updates to the same row were applied, resulting in spurious intermediate results. KIP-904 allows us to detect this case and to only apply a single update, avoiding spurious intermediate results.

      Error handling is improved via KIP-399. The existing ProductionExceptionHandler now also covers serialization errors.

      We added a new Serde type Boolean in KIP-907.

      KIP-884 adds a new config default.client.supplier that allows using a custom KafkaClientSupplier without any code changes.

      Streams API changes in 3.4.0

      KIP-770 deprecates config cache.max.bytes.buffering in favor of the newly introduced config statestore.cache.max.bytes. To improve monitoring, two new metrics input-buffer-bytes-total and cache-size-bytes-total were added at the DEBUG level. Note that the KIP is only partially implemented in the 3.4.0 release, and config input.buffer.max.bytes is not available yet.

      KIP-873 enables you to multicast result records to multiple partitions of downstream sink topics and adds functionality for choosing to drop result records without sending. The Integer StreamPartitioner.partition() method is deprecated and replaced by the newly added Optional<Set<Integer>> StreamPartitioner.partitions() method, which enables returning a set of partitions to send the record to.

      KIP-862 adds a DSL optimization for stream-stream self-joins. The optimization is enabled via a new option single.store.self.join which can be set via existing config topology.optimization. If enabled, the DSL will use a different join processor implementation that uses a single RocksDB store instead of two, to avoid unnecessary data duplication for the self-join case.

      KIP-865 updates the Kafka Streams application reset tool’s server parameter name to conform to the other Kafka tooling by deprecating the --bootstrap-servers parameter and introducing a new --bootstrap-server parameter in its place.

      Streams API changes in 3.3.0

      Kafka Streams does not send a “leave group” request when an instance is closed. This behavior implies that a rebalance is delayed until max.poll.interval.ms has passed. KIP-812 introduces the KafkaStreams.close(CloseOptions) overload, which allows forcing an instance to leave the group immediately. Note: Due to internal limitations, CloseOptions only works for static consumer groups at this point (cf. KAFKA-16514 for more details and a fix in some future release).

      KIP-820 adapts the PAPI type-safety improvement of KIP-478 into the DSL. The existing methods KStream.transform, KStream.flatTransform, KStream.transformValues, and KStream.flatTransformValues as well as all overloads of void KStream.process are deprecated in favor of the newly added methods

      • KStream<KOut,VOut> KStream.process(ProcessorSupplier, ...)
      • KStream<K,VOut> KStream.processValues(FixedKeyProcessorSupplier, ...)

      Both new methods have multiple overloads and return a KStream instead of void as the deprecated process() methods did. In addition, FixedKeyProcessor, FixedKeyRecord, FixedKeyProcessorContext, and ContextualFixedKeyProcessor are introduced to guard against disallowed key modification inside processValues(). Furthermore, ProcessingContext is added for a better interface hierarchy.

      Emitting a windowed aggregation result only after a window is closed is currently supported via the suppress() operator. However, suppress() uses an in-memory implementation and does not support RocksDB. To close this gap, KIP-825 introduces “emit strategies”, which are built into the aggregation operator directly to use the already existing RocksDB store. TimeWindowedKStream.emitStrategy(EmitStrategy) and SessionWindowedKStream.emitStrategy(EmitStrategy) allow picking between “emit on window update” (default) and “emit on window close” strategies. Additionally, a few new emit metrics are added, as well as a necessary new method, SessionStore.findSessions(long, long).
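
      A sketch of a windowed count that emits a single final result per window (topic name, serdes, and window sizes are illustrative):

            import java.time.Duration;
            import org.apache.kafka.common.serialization.Serdes;
            import org.apache.kafka.streams.StreamsBuilder;
            import org.apache.kafka.streams.kstream.Consumed;
            import org.apache.kafka.streams.kstream.EmitStrategy;
            import org.apache.kafka.streams.kstream.TimeWindows;

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("clicks", Consumed.with(Serdes.String(), Serdes.String())) // illustrative topic
                   .groupByKey()
                   .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(1)))
                   // Emit one final result when the window closes, instead of an update per input record.
                   .emitStrategy(EmitStrategy.onWindowClose())
                   .count();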

      KIP-834 allows pausing and resuming a Kafka Streams instance. Pausing implies that processing input records and executing punctuations will be skipped; Kafka Streams will continue to poll to maintain its group membership and may commit offsets. In addition to the new methods KafkaStreams.pause() and KafkaStreams.resume(), it is also supported to check if an instance is paused via the KafkaStreams.isPaused() method.
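
      For example (a minimal sketch; topology and props are assumed to be defined elsewhere):

            import org.apache.kafka.streams.KafkaStreams;

            KafkaStreams streams = new KafkaStreams(topology, props); // topology and props defined elsewhere
            streams.start();

            streams.pause();        // skip processing and punctuation; polling and group membership continue
            if (streams.isPaused()) {
                streams.resume();   // pick up processing again
            }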

      To improve monitoring of Kafka Streams applications, KIP-846 adds four new metrics bytes-consumed-total, records-consumed-total, bytes-produced-total, and records-produced-total within a new topic level scope. The metrics are collected at INFO level for source and sink nodes, respectively.

      Streams API changes in 3.2.0

      RocksDB offers many metrics which are critical to monitor and tune its performance. Kafka Streams started to make RocksDB metrics accessible like any other Kafka metric via KIP-471 in 2.4.0 release. However, the KIP was only partially implemented, and is now completed with the 3.2.0 release. For a full list of available RocksDB metrics, please consult the monitoring documentation.

      Kafka Streams ships with RocksDB and in-memory store implementations and users can pick which one to use. However, for the DSL, the choice is a per-operator one, making it cumbersome to switch from the default RocksDB store to in-memory store for all operators, especially for larger topologies. KIP-591 adds a new config default.dsl.store that enables setting the default store for all DSL operators globally. Note that it is required to pass TopologyConfig to the StreamsBuilder constructor to make use of this new config.

      For multi-AZ deployments, it is desired to assign StandbyTasks to a KafkaStreams instance running in a different AZ than the corresponding active StreamTask. KIP-708 enables configuring Kafka Streams instances with a rack-aware StandbyTask assignment strategy, by using the new added configs rack.aware.assignment.tags and corresponding client.tag.<myTag>.

      KIP-791 adds a new method Optional<RecordMetadata> StateStoreContext.recordMetadata() to expose record metadata. This helps for example to provide read-your-writes consistency guarantees in interactive queries.

      Interactive Queries allow users to tap into the operational state of Kafka Streams processor nodes. The existing API is tightly coupled with the actual state store interfaces and thus the internal implementation of state store. To break up this tight coupling and allow for building more advanced IQ features, KIP-796 introduces a completely new IQv2 API, via StateQueryRequest and StateQueryResult classes, as well as Query and QueryResult interfaces (plus additional helper classes). In addition, multiple built-in query types were added: KeyQuery for key lookups and RangeQuery (via KIP-805) for key-range queries on key-value stores, as well as WindowKeyQuery and WindowRangeQuery (via KIP-806) for key and range lookup into windowed stores.

      The Kafka Streams DSL may insert so-called repartition topics for certain DSL operators to ensure correct partitioning of data. These topics are configured with infinite retention time, and Kafka Streams purges old data explicitly via “delete record” requests when committing input topic offsets. KIP-811 adds a new config repartition.purge.interval.ms allowing you to configure the purge interval independently of the commit interval.

      Streams API changes in 3.1.0

      The semantics of left/outer stream-stream joins were improved via KIP-633. Previously, a left/outer stream-stream join might have emitted so-called spurious left/outer results, due to an eager-emit strategy. The implementation was changed to emit left/outer join result records only after the join window is closed. The old API to specify the join window, i.e., JoinWindows.of(), which enables the eager-emit strategy, was deprecated in favor of JoinWindows.ofTimeDifferenceAndGrace() and JoinWindows.ofTimeDifferenceWithNoGrace(). The new semantics are only enabled if you use the new join window builders.
      Additionally, KIP-633 makes setting a grace period also mandatory for windowed aggregations, i.e., for TimeWindows (hopping/tumbling), SessionWindows, and SlidingWindows. The corresponding builder methods .of(...) were deprecated in favor of the new .ofTimeDifferenceAndGrace() and .ofTimeDifferenceWithNoGrace() methods.

      KIP-761 adds new metrics that allow to track blocking times on the underlying consumer and producer clients. Check out the section on Kafka Streams metrics for more details.

      Interactive Queries were improved via KIP-763 and KIP-766. Range queries now accept null as lower/upper key-range bound to indicate an open-ended lower/upper bound.

      Foreign-key table-table joins now support custom partitioners via KIP-775. Previously, if an input table was partitioned by a non-default partitioner, joining records might fail. With KIP-775 you now can pass a custom StreamPartitioner into the join using the newly added TableJoined object.

      Streams API changes in 3.0.0

      We improved the semantics of task idling (max.task.idle.ms). Now Streams provides stronger in-order join and merge processing semantics. Streams’s new default pauses processing on tasks with multiple input partitions when one of the partitions has no data buffered locally but has a non-zero lag. In other words, Streams will wait to fetch records that are already available on the broker. This results in improved join semantics, since it allows Streams to interleave the two input partitions in timestamp order instead of just processing whichever partition happens to be buffered. There is an option to disable this new behavior, and there is also an option to make Streams wait even longer for new records to be produced to the input partitions, which you can use to get stronger time semantics when you know some of your producers may be slow. See the config reference for more information, and KIP-695 for the larger context of this change.

      Interactive Queries may throw new exceptions for different errors:

      • UnknownStateStoreException: If the specified store name does not exist in the topology, an UnknownStateStoreException will be thrown instead of the former InvalidStateStoreException.
      • StreamsNotStartedException: If Streams state is CREATED, a StreamsNotStartedException will be thrown.
      • InvalidStateStorePartitionException: If the specified partition does not exist, a InvalidStateStorePartitionException will be thrown.

      See KIP-216 for more information.

      We deprecated the StreamsConfig processing.guarantee configuration value "exactly_once" (for EOS version 1) in favor of the improved EOS version 2, formerly configured via "exactly_once_beta". To avoid confusion about the term “beta” in the config name and to highlight the production-readiness of EOS version 2, we have also renamed “eos-beta” to “eos-v2” and deprecated the configuration value "exactly_once_beta", replacing it with the new configuration value "exactly_once_v2". Users of exactly-once semantics should plan to migrate to the eos-v2 config and prepare for the removal of the deprecated configs in 4.0 or after at least a year from the release of 3.0, whichever comes last. Note that eos-v2 requires broker version 2.5 or higher, like eos-beta, so users should begin to upgrade their Kafka cluster if necessary. See KIP-732 for more details.

      We removed the default implementation of RocksDBConfigSetter#close().

      We dropped the default 24 hours grace period for windowed operations such as Window or Session aggregates, or stream-stream joins. This period determines how long after a window ends any out-of-order records will still be processed. Records coming in after the grace period has elapsed are considered late and will be dropped. But in operators such as suppression, a large grace period has the drawback of incurring an equally large output latency. The current API made it all too easy to miss the grace period config completely, leading you to wonder why your application seems to produce no output – it actually is, but not for 24 hours.

      To prevent accidentally or unknowingly falling back to the default 24hr grace period, we deprecated all of the existing static constructors for the Windows classes (such as TimeWindows#of). These are replaced by new static constructors of two flavors: #ofSizeAndGrace and #ofSizeWithNoGrace (these are for the TimeWindows class; analogous APIs exist for the JoinWindows, SessionWindows, and SlidingWindows classes). With these new APIs you are forced to set the grace period explicitly, or else consciously choose to opt out by selecting the WithNoGrace flavor which sets it to 0 for situations where you really don’t care about the grace period, for example during testing or when playing around with Kafka Streams for the first time. Note that using the new APIs for the JoinWindows class will also enable a fix for spurious left/outer join results, as described in the following paragraph. For more details on the grace period and new static constructors, see KIP-633.
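
      For example, with the new constructors the grace period is always an explicit decision (the window size and grace values below are illustrative):

            import java.time.Duration;
            import org.apache.kafka.streams.kstream.TimeWindows;

            // Explicitly allow out-of-order records for 30 seconds after the window ends.
            TimeWindows withGrace = TimeWindows.ofSizeAndGrace(Duration.ofMinutes(5), Duration.ofSeconds(30));

            // Consciously opt out of a grace period (grace = 0), e.g. while testing.
            TimeWindows noGrace = TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5));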

      Additionally, in older versions Kafka Streams emitted stream-stream left/outer join results eagerly. This behavior may lead to spurious left/outer join result records. In this release, we changed the behavior to avoid spurious results: left/outer join results are only emitted after the join window is closed, i.e., after the grace period elapsed. To maintain backward compatibility, the old API JoinWindows#of(timeDifference) preserves the old eager-emit behavior, and only the new APIs JoinWindows#ofTimeDifferenceAndGrace() and JoinWindows#ofTimeDifferenceWithNoGrace() enable the new behavior. Check out KAFKA-10847 for more information.

      The public topicGroupId and partition fields on TaskId have been deprecated and replaced with getters. Please migrate to using the new TaskId.subtopology() (which replaces topicGroupId) and TaskId.partition() APIs instead. Also, the TaskId#readFrom and TaskId#writeTo methods have been deprecated and will be removed, as they were never intended for public use. We have also deprecated the org.apache.kafka.streams.processor.TaskMetadata class and introduced a new interface org.apache.kafka.streams.TaskMetadata to be used instead. This change was introduced to better reflect the fact that TaskMetadata was not meant to be instantiated outside of Kafka codebase. Please note that the new TaskMetadata offers APIs that better represent the task id as an actual TaskId object instead of a String. Please migrate to the new org.apache.kafka.streams.TaskMetadata which offers these better methods, for example, by using the new ThreadMetadata#activeTasks and ThreadMetadata#standbyTasks. org.apache.kafka.streams.processor.ThreadMetadata class is also now deprecated and the newly introduced interface org.apache.kafka.streams.ThreadMetadata is to be used instead. In this new ThreadMetadata interface, any reference to the deprecated TaskMetadata is replaced by the new interface. Finally, also org.apache.kafka.streams.state.StreamsMetadata has been deprecated. Please migrate to the new org.apache.kafka.streams.StreamsMetadata. We have deprecated several methods under org.apache.kafka.streams.KafkaStreams that returned the aforementioned deprecated classes:

      • Users of KafkaStreams#allMetadata are meant to migrate to the new KafkaStreams#metadataForAllStreamsClients.
      • Users of KafkaStreams#allMetadataForStore(String) are meant to migrate to the new KafkaStreams#streamsMetadataForStore(String).
      • Users of KafkaStreams#localThreadsMetadata are meant to migrate to the new KafkaStreams#metadataForLocalThreads.

      See KIP-740 and KIP-744 for more details.

      We removed the following deprecated APIs:

      • --zookeeper flag of the application reset tool: deprecated in Kafka 1.0.0 (KIP-198).
      • --execute flag of the application reset tool: deprecated in Kafka 1.1.0 (KIP-171).
      • StreamsBuilder#addGlobalStore (one overload): deprecated in Kafka 1.1.0 (KIP-233).
      • ProcessorContext#forward (some overloads): deprecated in Kafka 2.0.0 (KIP-251).
      • WindowBytesStoreSupplier#segments: deprecated in Kafka 2.1.0 (KIP-319).
      • segments, until, maintainMs on TimeWindows, JoinWindows, and SessionWindows: deprecated in Kafka 2.1.0 (KIP-328).
      • Overloaded JoinWindows#of, before, after, SessionWindows#with, TimeWindows#of, advanceBy, UnlimitedWindows#startOn and KafkaStreams#close with long typed parameters: deprecated in Kafka 2.1.0 (KIP-358).
      • Overloaded KStream#groupBy, groupByKey and KTable#groupBy with Serialized parameter: deprecated in Kafka 2.1.0 (KIP-372).
      • Joined#named, name: deprecated in Kafka 2.3.0 (KIP-307).
      • TopologyTestDriver#pipeInput, readOutput, OutputVerifier and ConsumerRecordFactory classes (KIP-470).
      • KafkaClientSupplier#getAdminClient: deprecated in Kafka 2.4.0 (KIP-476).
      • Overloaded KStream#join, leftJoin, outerJoin with KStream and Joined parameters: deprecated in Kafka 2.4.0 (KIP-479).
      • WindowStore#put(K key, V value): deprecated in Kafka 2.4.0 (KIP-474).
      • UsePreviousTimeOnInvalidTimestamp: deprecated in Kafka 2.5.0 as renamed to UsePartitionTimeOnInvalidTimestamp (KIP-530).
      • Overloaded KafkaStreams#metadataForKey: deprecated in Kafka 2.5.0 (KIP-535).
      • Overloaded KafkaStreams#store: deprecated in Kafka 2.5.0 (KIP-562).

      The following dependencies were removed from Kafka Streams:

      • Connect-json: Kafka Streams no longer has a compile-time dependency on the “connect:json” module (KAFKA-5146). Projects that were relying on this transitive dependency will have to explicitly declare it.

      The default value for configuration parameter replication.factor was changed to -1 (meaning: use broker default replication factor). The replication.factor value of -1 requires broker version 2.4 or newer.

      A new serde type, ListSerde, was introduced:

      • Added class ListSerde to (de)serialize List-based objects
      • Introduced ListSerializer and ListDeserializer to power the new functionality

      Streams API changes in 2.8.0

      We extended StreamJoined to include the options withLoggingEnabled() and withLoggingDisabled() in KIP-689.

      We added two new methods to KafkaStreams, namely KafkaStreams#addStreamThread() and KafkaStreams#removeStreamThread() in KIP-663. These methods have enabled adding and removing StreamThreads to a running KafkaStreams client.

      We deprecated KafkaStreams#setUncaughtExceptionHandler(final Thread.UncaughtExceptionHandler uncaughtExceptionHandler) in favor of KafkaStreams#setUncaughtExceptionHandler(final StreamsUncaughtExceptionHandler streamsUncaughtExceptionHandler) in KIP-671. The default handler will close the Kafka Streams client and the client will transition to the ERROR state. If you implement a custom handler, the new interface allows you to return a StreamThreadExceptionResponse, which will determine how the application responds to a stream thread failure.

      Changes in KIP-663 necessitated an update to the KafkaStreams client state machine, which was done in KIP-696. The ERROR state is now terminal, with PENDING_ERROR being a transitional state in which resources are closing. The ERROR state indicates that there is something wrong and the Kafka Streams client should not be blindly restarted without classifying the error that caused the thread to fail. If the error is of a type that you would like to retry, you should have the StreamsUncaughtExceptionHandler return REPLACE_THREAD. When all stream threads are dead there is no automatic transition to ERROR, as a new stream thread can be added.
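
      A handler sketch (the decision logic is a placeholder; a real handler should classify the error before choosing a response):

            import org.apache.kafka.streams.KafkaStreams;
            import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

            KafkaStreams streams = new KafkaStreams(topology, props); // topology and props defined elsewhere
            streams.setUncaughtExceptionHandler(exception -> {
                // Placeholder decision: replace the failed stream thread and keep the client running.
                // Other responses are SHUTDOWN_CLIENT (the default behavior) and SHUTDOWN_APPLICATION.
                return StreamThreadExceptionResponse.REPLACE_THREAD;
            });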

      The TimeWindowedDeserializer constructor TimeWindowedDeserializer(final Deserializer inner) was deprecated to encourage users to properly set their window size through TimeWindowedDeserializer(final Deserializer inner, Long windowSize). An additional streams config, window.size.ms, was added for users that cannot set the window size through the constructor, such as when using the console consumer. KIP-659 has more details.

      To simplify testing, two new constructors that don’t require a Properties parameter have been added to the TopologyTestDriver class. If Properties are passed into the constructor, it is no longer required to set mandatory configuration parameters (cf. KIP-680).

      We added the prefixScan() method to interface ReadOnlyKeyValueStore. The new prefixScan() allows fetching all values whose keys start with a given prefix. See KIP-614 for more details.

      Kafka Streams is now handling TimeoutException thrown by the consumer, producer, and admin client. If a timeout occurs on a task, Kafka Streams moves to the next task and retries to make progress on the failed task in the next iteration. To bound how long Kafka Streams retries a task, you can set task.timeout.ms (default is 5 minutes). If a task does not make progress within the specified task timeout, which is tracked on a per-task basis, Kafka Streams throws a TimeoutException (cf. KIP-572).

      We changed the default value of default.key.serde and default.value.serde to be null instead of ByteArraySerde. Users will now see a ConfigException if their serdes are not correctly configured through those configs or passed in explicitly. See KIP-741 for more details.

      Streams API changes in 2.7.0

      In KeyQueryMetadata we deprecated getActiveHost(), getStandbyHosts(), and getPartition() and replaced them with activeHost(), standbyHosts(), and partition(), respectively. KeyQueryMetadata was introduced in the Kafka Streams 2.5 release with getter methods having the prefix get. The intent of this change is to bring the method names in line with the Kafka convention of not using the get prefix for getter methods. The old methods are deprecated and their behavior is not affected. (Cf. KIP-648.)

      The StreamsConfig variable for configuration parameter "topology.optimization" is renamed from TOPOLOGY_OPTIMIZATION to TOPOLOGY_OPTIMIZATION_CONFIG. The old variable is deprecated. Note, that the parameter name itself is not affected. (Cf. KIP-629.)

      The configuration parameter retries is deprecated in favor of the new parameter task.timeout.ms. The Kafka Streams runtime ignores retries if set; however, it still forwards the parameter to its internal clients.

      We added SlidingWindows as an option for windowedBy() windowed aggregations as described in KIP-450. Sliding windows are fixed-time and data-aligned windows that allow for flexible and efficient windowed aggregations.

      The end-to-end latency metrics introduced in 2.6 have been expanded to include store-level metrics. The new store-level metrics are recorded at the TRACE level, a new metrics recording level. Enabling TRACE level metrics will automatically turn on all higher levels, i.e., INFO and DEBUG. See KIP-613 for more information.

      Streams API changes in 2.6.0

      We added a new processing mode, EOS version 2, that improves application scalability using exactly-once guarantees (via KIP-447). You can enable this new feature by setting the configuration parameter processing.guarantee to the new value "exactly_once_beta". Note that you need brokers with version 2.5 or newer to use this feature.

      For more highly available stateful applications, we’ve modified the task assignment algorithm to delay the movement of stateful active tasks to instances that aren’t yet caught up with that task’s state. Instead, to migrate a task from one instance to another (eg when scaling out), Streams will assign a warmup replica to the target instance so it can begin restoring the state while the active task stays available on an instance that already had the task. The instances warming up tasks will communicate their progress to the group so that, once ready, Streams can move active tasks to their new owners in the background. Check out KIP-441 for full details, including several new configs for control over this new feature.

      New end-to-end latency metrics have been added. These task-level metrics will be logged at the INFO level and report the min and max end-to-end latency of a record at the beginning/source node(s) and end/terminal node(s) of a task. See KIP-613 for more information.

      As of 2.6.0 Kafka Streams deprecates KStream.through() in favor of the new KStream.repartition() operator (as per KIP-221). KStream.repartition() is similar to KStream.through(), however Kafka Streams will manage the topic for you. If you need to write into and read back from a topic that you manage, you can fall back to use KStream.to() in combination with StreamsBuilder#stream(). Please refer to the developer guide for more details about KStream.repartition().

      The usability of StateStores within the Processor API is improved: ProcessorSupplier and TransformerSupplier now extend ConnectedStoreProvider as per KIP-401, enabling a user to provide StateStores alongside Processor/Transformer logic so that they are automatically added and connected to the processor.

      We added a --force option to StreamsResetter to force remove left-over members on the broker side when a long session timeout was configured, as per KIP-571.

      We added the Suppressed.withLoggingDisabled() and Suppressed.withLoggingEnabled(config) methods to allow disabling or configuring the changelog topic of a suppression buffer, as per KIP-446.

      Streams API changes in 2.5.0

      We added a new cogroup() operator (via KIP-150) that allows aggregating multiple streams in a single operation. Cogrouped streams can also be windowed before they are aggregated. Please refer to the developer guide for more details.

      We added a new KStream.toTable() API to translate an input event stream into a changelog stream as per KIP-523.

      We added a new Serde type Void in KIP-527 to represent null keys or null values from input topics.

      Deprecated UsePreviousTimeOnInvalidTimestamp and replaced it with UsePartitionTimeOnInvalidTimestamp as per KIP-530.

      Deprecated KafkaStreams.store(String, QueryableStoreType) and replaced it with KafkaStreams.store(StoreQueryParameters) to allow querying for a store with a variety of parameters, including querying a specific task and stale stores, as per KIP-562 and KIP-535 respectively.

      Streams API changes in 2.4.0

      As of 2.4.0 Kafka Streams offers a KTable-KTable foreign-key join (as per KIP-213). This joiner allows for records to be joined between two KTables with different keys. Both INNER and LEFT foreign-key joins are supported.

      In the 2.4 release, you now can name all operators in a Kafka Streams DSL topology via KIP-307. Giving your operators meaningful names makes it easier to understand the topology description (Topology#describe()#toString()) and understand the full context of what your Kafka Streams application is doing.
      There are new overloads on most KStream and KTable methods that accept a Named object. Typically you’ll provide a name for the DSL operation by using Named.as("my operator name"). Naming of repartition topics for aggregation operations will still use Grouped and join operations will use either Joined or the new StreamJoined object.
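
      For example (a sketch with an illustrative operator name):

            import org.apache.kafka.streams.kstream.KStream;
            import org.apache.kafka.streams.kstream.Named;

            // The name shows up in Topology#describe() and stays stable when the topology evolves.
            KStream<String, String> filtered =
                stream.filter((key, value) -> value != null, Named.as("drop-null-values")); // stream defined elsewhere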

      Before the 2.4.0 version of Kafka Streams, users of the DSL could not name the state stores involved in a stream-stream join. If users changed their topology and added an operator before the join, the internal names of the state stores would shift, requiring an application reset when redeploying. In the 2.4.0 release, Kafka Streams adds the StreamJoined class, which gives users the ability to name the join processor, repartition topic(s) (if a repartition is required), and the state stores involved in the join. Also, by naming the state stores, the changelog topics backing the state stores are named as well. It’s important to note that naming the stores will not make them queryable via Interactive Queries.
      Another feature delivered by StreamJoined is that you can now configure the type of state store used in the join. You can elect to use in-memory stores or custom state stores for a stream-stream join. Note that the provided stores will not be available for querying via Interactive Queries. With the addition of StreamJoined, stream-stream join operations using Joined have been deprecated. Please switch over to stream-stream join methods using the new overloaded methods. You can get more details from KIP-479.
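
For example, an illustrative sketch of naming the join processor, repartition topics, and state stores via StreamJoined (the names, serdes, and 5-minute window are placeholders; JoinWindows.ofTimeDifferenceWithNoGrace() is the current replacement for the JoinWindows factory available back then):

KStream<String, String> joined = left.join(
    right,
    (leftValue, rightValue) -> leftValue + "/" + rightValue,
    JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
    StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String())
        .withName("order-payment-join")           // names the join processor and repartition topics
        .withStoreName("order-payment-store"));   // names the join state stores and their changelogs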

With the introduction of incremental cooperative rebalancing, Streams no longer requires all tasks be revoked at the beginning of a rebalance. Instead, at the completion of the rebalance only those tasks which are to be migrated to another consumer for overall load balance will need to be closed and revoked. This changes the semantics of the StateListener a bit, as it will not necessarily transition to REBALANCING at the beginning of a rebalance anymore. Note that this means IQ will now be available at all times except during state restoration, including while a rebalance is in progress. If restoration is occurring when a rebalance begins, we will continue to actively restore the state stores and/or process standby tasks during a cooperative rebalance. Note that with this new rebalancing protocol, you may sometimes see a rebalance be followed by a second short rebalance that ensures all tasks are safely distributed. For details, please see KIP-429.

      The 2.4.0 release contains newly added and reworked metrics. KIP-444 adds new client level (i.e., KafkaStreams instance level) metrics to the existing thread-level, task-level, and processor-/state-store-level metrics. For a full list of available client level metrics, see the KafkaStreams monitoring section in the operations guide.
      Furthermore, RocksDB metrics are exposed via KIP-471. For a full list of available RocksDB metrics, see the RocksDB monitoring section in the operations guide.

Kafka Streams test-utils got improved via KIP-470 to simplify the process of using TopologyTestDriver to test your application code. We deprecated ConsumerRecordFactory, TopologyTestDriver#pipeInput(), OutputVerifier, as well as TopologyTestDriver#readOutput(), and replaced them with TestInputTopic and TestOutputTopic. We also introduced a new class TestRecord that simplifies assertion code. For full details see the Testing section in the developer guide.

In 2.4.0, we deprecated WindowStore#put(K key, V value), which should never be used. Instead, the existing WindowStore#put(K key, V value, long windowStartTimestamp) should be used (KIP-474).

Furthermore, the PartitionGrouper interface and its corresponding configuration parameter partition.grouper were deprecated (KIP-528) and will be removed in the next major release (KAFKA-7785). Hence, this feature won't be supported any longer and you need to update your code accordingly. If you use a custom PartitionGrouper and stop using it, the created tasks might change. Hence, you will need to reset your application to upgrade it.

      Streams API changes in 2.3.0

Version 2.3.0 adds the Suppress operator to the kafka-streams-scala KTable API.

      As of 2.3.0 Streams now offers an in-memory version of the window (KIP-428) and the session (KIP-445) store, in addition to the persistent ones based on RocksDB. The new public interfaces inMemoryWindowStore() and inMemorySessionStore() are added to Stores and provide the built-in in-memory window or session store.

As of 2.3.0 we've updated how to turn on optimizations. Now to enable optimizations, you need to do two things. First, add this line to your properties: properties.setProperty(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG, StreamsConfig.OPTIMIZE);, as you have done before. Second, when constructing your KafkaStreams instance, you'll need to pass your configuration properties when building your topology by using the overloaded StreamsBuilder.build(Properties) method. For example, KafkaStreams myStream = new KafkaStreams(streamsBuilder.build(properties), properties).
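
Putting the two steps together (application-specific configs are omitted):

Properties properties = new Properties();
properties.setProperty(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG, StreamsConfig.OPTIMIZE);
// ... plus application.id, bootstrap.servers, and any other configs your application needs ...

StreamsBuilder streamsBuilder = new StreamsBuilder();
// ... define your topology on streamsBuilder ...

// Pass the properties to build() so the physical plan can be optimized,
// then pass them again to the KafkaStreams constructor as usual.
KafkaStreams myStream = new KafkaStreams(streamsBuilder.build(properties), properties);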

      In 2.3.0 we have added default implementation to close() and configure() for Serializer, Deserializer and Serde so that they can be implemented by lambda expression. For more details please read KIP-331.

To improve operator semantics, new store types are added that allow storing an additional timestamp per key-value pair or window. Some DSL operators (for example KTables) are using those new stores. Hence, you can now retrieve the last update timestamp via Interactive Queries if you specify TimestampedKeyValueStoreType or TimestampedWindowStoreType as your QueryableStoreType. While this change is mainly transparent, there are some corner cases that may require code changes. Caution: If you receive an untyped store and use a cast, you might need to update your code to cast to the correct type. Otherwise, you might get an exception similar to java.lang.ClassCastException: class org.apache.kafka.streams.state.ValueAndTimestamp cannot be cast to class YOUR-VALUE-TYPE upon getting a value from the store. Additionally, TopologyTestDriver#getStateStore() only returns non-built-in stores and throws an exception if a built-in store is accessed. For more details please read KIP-258.

      To improve type safety, a new operator KStream#flatTransformValues is added. For more details please read KIP-313.

      Kafka Streams used to set the configuration parameter max.poll.interval.ms to Integer.MAX_VALUE. This default value is removed and Kafka Streams uses the consumer default value now. For more details please read KIP-442.

Default configuration for repartition topics was changed: The segment size for index files (segment.index.bytes) is no longer 50MB, but uses the cluster default. Similarly, the configuration segment.ms is no longer 10 minutes, but uses the cluster default configuration. Lastly, the retention period (retention.ms) is changed from Long.MAX_VALUE to -1 (infinite). For more details please read KIP-443.

      To avoid memory leaks, RocksDBConfigSetter has a new close() method that is called on shutdown. Users should implement this method to release any memory used by RocksDB config objects, by closing those objects. For more details please read KIP-453.
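
A minimal sketch of such a config setter (the class name and cache size are placeholders; the point is that the RocksDB objects created in setConfig() are released in close()):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class MyRocksDBConfigSetter implements RocksDBConfigSetter {

    // Backed by off-heap memory, so it must be closed explicitly.
    private final org.rocksdb.Cache cache = new org.rocksdb.LRUCache(16 * 1024 * 1024L);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCache(cache);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        cache.close();  // release the off-heap memory held by the block cache
    }
}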

RocksDB dependency was updated to version 5.18.3. The new version allows specifying more RocksDB configurations, including WriteBufferManager, which helps to limit RocksDB off-heap memory usage. For more details please read KAFKA-8215.

      Streams API changes in 2.2.0

We've simplified the KafkaStreams#state transition diagram during the starting up phase a bit in 2.2.0: in older versions the state would transition from CREATED to RUNNING, then to REBALANCING to get the first stream task assignment, and then back to RUNNING; starting in 2.2.0 it transitions from CREATED directly to REBALANCING and then to RUNNING. If you have registered a StateListener that captures state transition events, you may need to adjust your listener implementation accordingly for this simplification (in practice, your listener logic should be very unlikely to be affected at all).

      In WindowedSerdes, we’ve added a new static constructor to return a TimeWindowSerde with configurable window size. This is to help users to construct time window serdes to read directly from a time-windowed store’s changelog. More details can be found in KIP-393.

      In 2.2.0 we have extended a few public interfaces including KafkaStreams to extend AutoCloseable so that they can be used in a try-with-resource statement. For a full list of public interfaces that get impacted please read KIP-376.

      Streams API changes in 2.1.0

      We updated TopologyDescription API to allow for better runtime checking. Users are encouraged to use #topicSet() and #topicPattern() accordingly on TopologyDescription.Source nodes, instead of using #topics(), which has since been deprecated. Similarly, use #topic() and #topicNameExtractor() to get descriptions of TopologyDescription.Sink nodes. For more details, see KIP-321.

We've added a new class Grouped and deprecated Serialized. The intent of adding Grouped is the ability to name repartition topics created when performing aggregation operations. Users can name the potential repartition topic using the Grouped#as() method, which takes a String that is used as part of the repartition topic name. The resulting repartition topic name will still follow the pattern of ${application-id}-<name>-repartition. The Grouped class is now favored over Serialized in KStream#groupByKey(), KStream#groupBy(), and KTable#groupBy(). Note that Kafka Streams does not automatically create repartition topics for aggregation operations. Additionally, we've updated the Joined class with a new method Joined#withName enabling users to name any repartition topics required for performing Stream/Stream or Stream/Table joins. For more details on repartition topic naming, see KIP-372. As a result we've updated the Kafka Streams Scala API and removed the Serialized class in favor of adding Grouped. If you just rely on the implicit Serialized, you just need to recompile; if you pass in Serialized explicitly, you will have to make code changes.
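
For example, a sketch of naming the repartition topic of an aggregation via Grouped (the names and serdes are placeholders; textLines is assumed to be a KStream<String, String>):

KTable<String, Long> counts = textLines
    .map((key, value) -> KeyValue.pair(value, 1L))                             // key-changing operation
    .groupByKey(Grouped.with("word-grouping", Serdes.String(), Serdes.Long()))
    .count();
// If a repartition topic is needed, it is named ${application-id}-word-grouping-repartition.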

We've added a new config named max.task.idle.ms to allow users to specify how to handle out-of-order data within a task that may be processing multiple topic-partitions (see the Out-of-Order Handling section for more details). The default value is set to 0, to favor minimized latency over synchronization between multiple input streams from topic-partitions. If users would like to wait longer when some of the topic-partitions do not have data available to process and hence cannot determine their corresponding stream time, they can override this config to a larger value.

      We’ve added the missing SessionBytesStoreSupplier#retentionPeriod() to be consistent with the WindowBytesStoreSupplier which allows users to get the specified retention period for session-windowed stores. We’ve also added the missing StoreBuilder#withCachingDisabled() to allow users to turn off caching for their customized stores.

      We added a new serde for UUIDs (Serdes.UUIDSerde) that you can use via Serdes.UUID() (cf. KIP-206).

We updated a list of methods that take long arguments as either timestamp (fix point) or duration (time period) and replaced them with Instant and Duration parameters for improved semantics. Some old methods based on long are deprecated and users are encouraged to update their code.
      In particular, aggregation windows (hopping/tumbling/unlimited time windows and session windows) as well as join windows now take Duration arguments to specify window size, hop, and gap parameters. Also, window sizes and retention times are now specified as Duration type in Stores class. The Window class has new methods #startTime() and #endTime() that return window start/end timestamp as Instant. For interactive queries, there are new #fetch(...) overloads taking Instant arguments. Additionally, punctuations are now registered via ProcessorContext#schedule(Duration interval, ...). For more details, see KIP-358.
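
A short sketch of the Duration/Instant based signatures (the store, key, and time range are placeholders; TimeWindows.ofSizeWithNoGrace() is the current successor of the Duration-based TimeWindows.of() introduced back then, and windowStore is assumed to be a ReadOnlyWindowStore<String, Long>):

TimeWindows fiveMinuteWindows = TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5));

// Interactive Query fetch over a time range expressed as Instants:
Instant now = Instant.now();
try (WindowStoreIterator<Long> iterator =
         windowStore.fetch("some-key", now.minus(Duration.ofHours(1)), now)) {
    // iterate over all windows for "some-key" from the last hour
}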

We deprecated KafkaStreams#close(...) and replaced it with KafkaStreams#close(Duration) that accepts a single timeout argument. Note: the new #close(Duration) method has improved (but slightly different) semantics. For more details, see KIP-358.

The newly exposed AdminClient metrics are now available when calling the KafkaStreams#metrics() method. For more details on exposing AdminClient metrics see KIP-324.

We deprecated the notion of segments in window stores as those are intended to be an implementation detail. Thus, the method Windows#segments() and the variable Windows#segments were deprecated. If you implement custom windows, you should update your code accordingly. Similarly, WindowBytesStoreSupplier#segments() was deprecated and replaced with WindowBytesStoreSupplier#segmentInterval(). If you implement a custom window store, you need to update your code accordingly. Finally, Stores#persistentWindowStore(...) was deprecated and replaced with a new overload that does not allow specifying the number of segments any longer. For more details, see KIP-319 (note: KIP-328 and KIP-358 'overlap' with KIP-319).

We've added an overloaded StreamsBuilder#build method that accepts an instance of java.util.Properties with the intent of using the StreamsConfig#TOPOLOGY_OPTIMIZATION_CONFIG config added in Kafka Streams 2.0. Before 2.1, when building a topology with the DSL, Kafka Streams writes the physical plan as the user makes calls on the DSL. Now by providing a java.util.Properties instance when executing a StreamsBuilder#build call, Kafka Streams can optimize the physical plan of the topology, provided the StreamsConfig#TOPOLOGY_OPTIMIZATION_CONFIG config is set to StreamsConfig#OPTIMIZE. By setting StreamsConfig#OPTIMIZE, in addition to the KTable optimization of reusing the source topic as the changelog topic, the topology may be optimized to merge redundant repartition topics into one repartition topic. The original no-parameter version of StreamsBuilder#build is still available for those who wish to not optimize their topology. Note that enabling optimization of the topology may require you to do an application reset when redeploying the application. For more details, see KIP-312.

We are introducing static membership to Kafka Streams users. This feature reduces unnecessary rebalances during normal application upgrades or rolling bounces. For more details on how to use it, check out the static membership design. Note, Kafka Streams uses the same ConsumerConfig#GROUP_INSTANCE_ID_CONFIG, and you only need to make sure it is uniquely defined across different stream instances in one application.

      Streams API changes in 2.0.0

In 2.0.0 we have added a few new APIs on the ReadOnlyWindowStore interface (for details please read Streams API changes below). If you have customized window store implementations that extend the ReadOnlyWindowStore interface you need to make code changes.

In addition, if you are using Java 8 method references in your Kafka Streams code you might need to update your code to resolve method ambiguities. Hot-swapping the jar-file only might not work for this case. See below a complete list of 2.0.0 API and semantic changes that allow you to advance your application and/or simplify your code base.

      We moved Consumed interface from org.apache.kafka.streams to org.apache.kafka.streams.kstream as it was mistakenly placed in the previous release. If your code has already used it there is a simple one-liner change needed in your import statement.

We have also removed in 2.0.0 some public APIs that were deprecated prior to 1.0.x. See below for a detailed list of removed APIs.

      We have removed the skippedDueToDeserializationError-rate and skippedDueToDeserializationError-total metrics. Deserialization errors, and all other causes of record skipping, are now accounted for in the pre-existing metrics skipped-records-rate and skipped-records-total. When a record is skipped, the event is now logged at WARN level. If these warnings become burdensome, we recommend explicitly filtering out unprocessable records instead of depending on record skipping semantics. For more details, see KIP-274. As of right now, the potential causes of skipped records are:

      • null keys in table sources
      • null keys in table-table inner/left/outer/right joins
      • null keys or values in stream-table joins
      • null keys or values in stream-stream joins
      • null keys or values in aggregations on grouped streams
      • null keys or values in reductions on grouped streams
      • null keys in aggregations on windowed streams
      • null keys in reductions on windowed streams
      • null keys in aggregations on session-windowed streams
      • Errors producing results, when the configured default.production.exception.handler decides to CONTINUE (the default is to FAIL and throw an exception).
      • Errors deserializing records, when the configured default.deserialization.exception.handler decides to CONTINUE (the default is to FAIL and throw an exception). This was the case previously captured in the skippedDueToDeserializationError metrics.
      • Fetched records having a negative timestamp.

      We’ve also fixed the metrics name for time and session windowed store operations in 2.0. As a result, our current built-in stores will have their store types in the metric names as in-memory-state, in-memory-lru-state, rocksdb-state, rocksdb-window-state, and rocksdb-session-state. For example, a RocksDB time windowed store’s put operation metrics would now be kafka.streams:type=stream-rocksdb-window-state-metrics,client-id=([-.\w]+),task-id=([-.\w]+),rocksdb-window-state-id=([-.\w]+). Users need to update their metrics collecting and reporting systems for their time and session windowed stores accordingly. For more details, please read the State Store Metrics section.

      We have added support for methods in ReadOnlyWindowStore which allows for querying a single window’s key-value pair. For users who have customized window store implementations on the above interface, they’d need to update their code to implement the newly added method as well. For more details, see KIP-261.

      We have added public WindowedSerdes to allow users to read from / write to a topic storing windowed table changelogs directly. In addition, in StreamsConfig we have also added default.windowed.key.serde.inner and default.windowed.value.serde.inner to let users specify inner serdes if the default serde classes are windowed serdes. For more details, see KIP-265.

      We’ve added message header support in the Processor API in Kafka 2.0.0. In particular, we have added a new API ProcessorContext#headers() which returns a Headers object that keeps track of the headers of the source topic’s message that is being processed. Through this object, users can manipulate the headers map that is being propagated throughout the processor topology as well. For more details please feel free to read the Developer Guide section.

      We have deprecated constructors of KafkaStreams that take a StreamsConfig as parameter. Please use the other corresponding constructors that accept java.util.Properties instead. For more details, see KIP-245.

      Kafka 2.0.0 allows to manipulate timestamps of output records using the Processor API (KIP-251). To enable this new feature, ProcessorContext#forward(...) was modified. The two existing overloads #forward(Object key, Object value, String childName) and #forward(Object key, Object value, int childIndex) were deprecated and a new overload #forward(Object key, Object value, To to) was added. The new class To allows you to send records to all or specific downstream processors by name and to set the timestamp for the output record. Forwarding based on child index is not supported in the new API any longer.

      We have added support to allow routing records dynamically to Kafka topics. More specifically, in both the lower-level Topology#addSink and higher-level KStream#to APIs, we have added variants that take a TopicNameExtractor instance instead of a specific String typed topic name, such that for each received record from the upstream processor, the library will dynamically determine which Kafka topic to write to based on the record’s key and value, as well as record context. Note that all the Kafka topics that may possibly be used are still considered as user topics and hence required to be pre-created. In addition to that, we have modified the StreamPartitioner interface to add the topic name parameter since the topic name now may not be known beforehand; users who have customized implementations of this interface would need to update their code while upgrading their application to use Kafka Streams 2.0.0.
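
For example, a minimal sketch of dynamic routing with a TopicNameExtractor lambda (the topic names and routing predicate are placeholders, and both target topics must already exist):

stream.to((key, value, recordContext) ->
    value.startsWith("PRIORITY") ? "orders-priority" : "orders-standard");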

      KIP-284 changed the retention time for repartition topics by setting its default value to Long.MAX_VALUE. Instead of relying on data retention Kafka Streams uses the new purge data API to delete consumed data from those topics and to keep used storage small now.

We have modified the ProcessorStateManager#register(...) signature and removed the deprecated loggingEnabled boolean parameter as it is specified in the StoreBuilder. Users who used this function to register their state stores into the processor topology need to simply update their code and remove this parameter from the caller.

Kafka Streams DSL for Scala is a new Kafka Streams client library available for developers authoring Kafka Streams applications in Scala. It wraps core Kafka Streams DSL types to make it easier to call when interoperating with Scala code. For example, it includes higher order functions as parameters for transformations, avoiding the need for anonymous classes in Java 7 or experimental SAM type conversions in Scala 2.11, automatic conversion between Java and Scala collection types, a way to implicitly provide Serdes to reduce boilerplate from your application and make it more typesafe, and more! For more information see the Kafka Streams DSL for Scala documentation and KIP-270.

      We have removed these deprecated APIs:

      • KafkaStreams#toString no longer returns the topology and runtime metadata; to get topology metadata users can call Topology#describe() and to get thread runtime metadata users can call KafkaStreams#localThreadsMetadata (they are deprecated since 1.0.0). For detailed guidance on how to update your code please read here
• TopologyBuilder and KStreamBuilder are removed and replaced by Topology and StreamsBuilder respectively (they are deprecated since 1.0.0). For detailed guidance on how to update your code please read here
• StateStoreSupplier is removed and replaced with StoreBuilder (it is deprecated since 1.0.0); and the corresponding Stores#create and KStream, KTable, KGroupedStream overloaded functions that use it have also been removed. For detailed guidance on how to update your code please read here
• KStream, KTable, KGroupedStream overloaded functions that require serde and other specifications explicitly are removed and replaced with simpler overloaded functions that use Consumed, Produced, Serialized, Materialized, Joined (they are deprecated since 1.0.0). For detailed guidance on how to update your code please read here
• Processor#punctuate, ValueTransformer#punctuate, Transformer#punctuate and ProcessorContext#schedule(long) are removed and replaced by ProcessorContext#schedule(long, PunctuationType, Punctuator) (they are deprecated in 1.0.0).
      • The second boolean typed parameter “loggingEnabled” in ProcessorContext#register has been removed; users can now use StoreBuilder#withLoggingEnabled, withLoggingDisabled to specify the behavior when they create the state store.
• KTable#writeAs, print, foreach, to, through are removed; users can call KTable#toStream() and use the corresponding KStream operations instead for the same purpose (they are deprecated since 0.11.0.0). For detailed list of removed APIs please read here
      • StreamsConfig#KEY_SERDE_CLASS_CONFIG, VALUE_SERDE_CLASS_CONFIG, TIMESTAMP_EXTRACTOR_CLASS_CONFIG are removed and replaced with StreamsConfig#DEFAULT_KEY_SERDE_CLASS_CONFIG, DEFAULT_VALUE_SERDE_CLASS_CONFIG, DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG respectively (they are deprecated since 0.11.0.0).
• StreamsConfig#ZOOKEEPER_CONNECT_CONFIG is removed as we do not need the ZooKeeper dependency in Streams any more (it is deprecated since 0.10.2.0).

      Streams API changes in 1.1.0

      We have added support for methods in ReadOnlyWindowStore which allows for querying WindowStores without the necessity of providing keys. For users who have customized window store implementations on the above interface, they’d need to update their code to implement the newly added method as well. For more details, see KIP-205.

      There is a new artifact kafka-streams-test-utils providing a TopologyTestDriver, ConsumerRecordFactory, and OutputVerifier class. You can include the new artifact as a regular dependency to your unit tests and use the test driver to test your business logic of your Kafka Streams application. For more details, see KIP-247.

      The introduction of KIP-220 enables you to provide configuration parameters for the embedded admin client created by Kafka Streams, similar to the embedded producer and consumer clients. You can provide the configs via StreamsConfig by adding the configs with the prefix admin. as defined by StreamsConfig#adminClientPrefix(String) to distinguish them from configurations of other clients that share the same config names.
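
For example (the overridden config and value are placeholders):

Properties props = new Properties();
props.put(StreamsConfig.adminClientPrefix(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG), 60000);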

      New method in KTable

      • transformValues methods have been added to KTable. Similar to those on KStream, these methods allow for richer, stateful, value transformation similar to the Processor API.

      New method in GlobalKTable

      • A method has been provided such that it will return the store name associated with the GlobalKTable or null if the store name is non-queryable.

      New methods in KafkaStreams:

      • added overload for the constructor that allows overriding the Time object used for tracking system wall-clock time; this is useful for unit testing your application code.

      New methods in KafkaClientSupplier:

• added getAdminClient(config) that allows overriding the AdminClient used for administrative requests such as internal topic creations, etc.

      New error handling for exceptions during production:

• added interface ProductionExceptionHandler that allows implementors to decide whether or not Streams should FAIL or CONTINUE when certain exceptions occur while trying to produce.
      • provided an implementation, DefaultProductionExceptionHandler that always fails, preserving the existing behavior by default.
• changing which implementation is used can be done by setting default.production.exception.handler to the fully qualified name of a class implementing this interface.

      Changes in StreamsResetter:

      • added options to specify input topics offsets to reset according to KIP-171

      Streams API changes in 1.0.0

With 1.0 a major API refactoring was accomplished and the new API is cleaner and easier to use. This change includes the five main classes KafkaStreams, KStreamBuilder, KStream, KTable, and TopologyBuilder (and some others). All changes are fully backward compatible as the old API is only deprecated but not removed. We recommend moving to the new API as soon as you can. We will summarize all API changes in the next paragraphs.

      The two main classes to specify a topology via the DSL (KStreamBuilder) or the Processor API (TopologyBuilder) were deprecated and replaced by StreamsBuilder and Topology (both new classes are located in package org.apache.kafka.streams). Note, that StreamsBuilder does not extend Topology, i.e., the class hierarchy is different now. The new classes have basically the same methods as the old ones to build a topology via DSL or Processor API. However, some internal methods that were public in KStreamBuilder and TopologyBuilder but not part of the actual API are not present in the new classes any longer. Furthermore, some overloads were simplified compared to the original classes. See KIP-120 and KIP-182 for full details.

Changing how a topology is specified also affects KafkaStreams constructors, which now only accept a Topology. Using the DSL builder class StreamsBuilder one can get the constructed Topology via StreamsBuilder#build(). Additionally, a new class org.apache.kafka.streams.TopologyDescription (and some more dependent classes) were added. Those can be used to get a detailed description of the specified topology and can be obtained by calling Topology#describe(). An example using this new API is shown in the quickstart section.

      New methods in KStream:

• With the introduction of KIP-202 a new method merge() has been created in KStream as the StreamsBuilder class's StreamsBuilder#merge() has been removed. The method signature was also changed: instead of providing multiple KStreams into the method at once, only a single KStream is accepted.

      New methods in KafkaStreams:

      • retrieve the current runtime information about the local threads via localThreadsMetadata()
      • observe the restoration of all state stores via setGlobalStateRestoreListener(), in which users can provide their customized implementation of the org.apache.kafka.streams.processor.StateRestoreListener interface

      Deprecated / modified methods in KafkaStreams:

      • toString(), toString(final String indent) were previously used to return static and runtime information. They have been deprecated in favor of using the new classes/methods localThreadsMetadata() / ThreadMetadata (returning runtime information) and TopologyDescription / Topology#describe() (returning static information).
• With the introduction of KIP-182 you should no longer pass in Serde to KStream#print operations. If you can't rely on using toString to print your keys and values, you should instead provide a custom KeyValueMapper via the Printed#withKeyValueMapper call.
• setStateListener() now can only be set before the application starts running, i.e. before KafkaStreams.start() is called.

      Deprecated methods in KGroupedStream

      • Windowed aggregations have been deprecated from KGroupedStream and moved to WindowedKStream. You can now perform a windowed aggregation by, for example, using KGroupedStream#windowedBy(Windows)#reduce(Reducer).

      Modified methods in Processor:

      • The Processor API was extended to allow users to schedule punctuate functions either based on data-driven stream time or wall-clock time. As a result, the original ProcessorContext#schedule is deprecated with a new overloaded function that accepts a user customizable Punctuator callback interface, which triggers its punctuate API method periodically based on the PunctuationType. The PunctuationType determines what notion of time is used for the punctuation scheduling: either stream time or wall-clock time (by default, stream time is configured to represent event time via TimestampExtractor). In addition, the punctuate function inside Processor is also deprecated.

Before this, users could only schedule based on stream time (i.e. PunctuationType.STREAM_TIME) and hence the punctuate function was data-driven only, because stream time is determined (and advanced forward) by the timestamps derived from the input data. If there is no data arriving at the processor, the stream time would not advance and hence punctuation would not be triggered. For example, if a Punctuator is scheduled based on PunctuationType.STREAM_TIME with an interval of 10 seconds and it processes a stream of 60 records with consecutive timestamps from 1 second to 60 seconds, punctuate would be called 6 times, regardless of how long it actually takes to process those records. On the other hand, when wall-clock time (i.e. PunctuationType.WALL_CLOCK_TIME) is used, punctuate is triggered purely based on wall-clock time: if the Punctuator is scheduled based on PunctuationType.WALL_CLOCK_TIME and these 60 records were processed within 20 seconds, punctuate would be called 2 times (one time every 10 seconds); if these 60 records were processed within 5 seconds, then no punctuate would be called at all. Users can schedule multiple Punctuator callbacks with different PunctuationTypes within the same processor by simply calling ProcessorContext#schedule multiple times inside the processor's init() method.
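
As an illustration, here is a sketch of scheduling both punctuation types inside a processor's init() method, using the current Processor API where schedule() takes a Duration (the intervals and forwarded records are placeholders):

@Override
public void init(final ProcessorContext<String, Long> context) {
    // Data-driven: fires as stream time advances past each 10-second boundary.
    context.schedule(Duration.ofSeconds(10), PunctuationType.STREAM_TIME,
        timestamp -> context.forward(new Record<>("stream-time-tick", 1L, timestamp)));

    // Wall-clock driven: fires every 10 seconds regardless of incoming data.
    context.schedule(Duration.ofSeconds(10), PunctuationType.WALL_CLOCK_TIME,
        timestamp -> context.forward(new Record<>("wall-clock-tick", 1L, timestamp)));
}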

      If you are monitoring on task level or processor-node / state store level Streams metrics, please note that the metrics sensor name and hierarchy was changed: The task ids, store names and processor names are no longer in the sensor metrics names, but instead are added as tags of the sensors to achieve consistent metrics hierarchy. As a result you may need to make corresponding code changes on your metrics reporting and monitoring tools when upgrading to 1.0.0. Detailed metrics sensor can be found in the Streams Monitoring section.

      The introduction of KIP-161 enables you to provide a default exception handler for deserialization errors when reading data from Kafka rather than throwing the exception all the way out of your streams application. You can provide the configs via the StreamsConfig as StreamsConfig#DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG. The specified handler must implement the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface.

      The introduction of KIP-173 enables you to provide topic configuration parameters for any topics created by Kafka Streams. This includes repartition and changelog topics. You can provide the configs via the StreamsConfig by adding the configs with the prefix as defined by StreamsConfig#topicPrefix(String). Any properties in the StreamsConfig with the prefix will be applied when creating internal topics. Any configs that aren’t topic configs will be ignored. If you already use StateStoreSupplier or Materialized to provide configs for changelogs, then they will take precedence over those supplied in the config.

      Streams API changes in 0.11.0.0

      Updates in StreamsConfig:

      • new configuration parameter processing.guarantee is added
      • configuration parameter key.serde was deprecated and replaced by default.key.serde
      • configuration parameter value.serde was deprecated and replaced by default.value.serde
      • configuration parameter timestamp.extractor was deprecated and replaced by default.timestamp.extractor
      • method keySerde() was deprecated and replaced by defaultKeySerde()
      • method valueSerde() was deprecated and replaced by defaultValueSerde()
      • new method defaultTimestampExtractor() was added

      New methods in TopologyBuilder:

• added overloads for addSource() that allow defining a TimestampExtractor per source node
• added overloads for addGlobalStore() that allow defining a TimestampExtractor per source node associated with the global store

      New methods in KStreamBuilder:

• added overloads for stream() that allow defining a TimestampExtractor per input stream
• added overloads for table() that allow defining a TimestampExtractor per input table
• added overloads for globalKTable() that allow defining a TimestampExtractor per global table

      Deprecated methods in KTable:

      • void foreach(final ForeachAction<? super K, ? super V> action)
      • void print()
      • void print(final String streamName)
      • void print(final Serde<K> keySerde, final Serde<V> valSerde)
      • void print(final Serde<K> keySerde, final Serde<V> valSerde, final String streamName)
      • void writeAsText(final String filePath)
      • void writeAsText(final String filePath, final String streamName)
      • void writeAsText(final String filePath, final Serde<K> keySerde, final Serde<V> valSerde)
      • void writeAsText(final String filePath, final String streamName, final Serde<K> keySerde, final Serde<V> valSerde)

      The above methods have been deprecated in favor of using the Interactive Queries API. If you want to query the current content of the state store backing the KTable, use the following approach:

      • Make a call to KafkaStreams.store(final String storeName, final QueryableStoreType<T> queryableStoreType)
      • Then make a call to ReadOnlyKeyValueStore.all() to iterate over the keys of a KTable.

      If you want to view the changelog stream of the KTable then you could call KTable.toStream().print(Printed.toSysOut).
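
A minimal sketch of the querying approach described above, using the current StoreQueryParameters API (streams is assumed to be a running KafkaStreams instance, the store name is a placeholder, and the String/QueryableStoreType overload listed above was later replaced, see KIP-562):

ReadOnlyKeyValueStore<String, Long> store = streams.store(
    StoreQueryParameters.fromNameAndType("counts-store", QueryableStoreTypes.keyValueStore()));

try (KeyValueIterator<String, Long> all = store.all()) {
    while (all.hasNext()) {
        KeyValue<String, Long> entry = all.next();
        System.out.println(entry.key + " -> " + entry.value);
    }
}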

      Metrics using exactly-once semantics:

      If "exactly_once" processing (EOS version 1) is enabled via the processing.guarantee parameter, internally Streams switches from a producer-per-thread to a producer-per-task runtime model. Using "exactly_once_beta" (EOS version 2) does use a producer-per-thread, so client.id doesn’t change, compared with "at_least_once" for this case). In order to distinguish the different producers, the producer’s client.id additionally encodes the task-ID for this case. Because the producer’s client.id is used to report JMX metrics, it might be required to update tools that receive those metrics.

      Producer’s client.id naming schema:

      • at-least-once (default): [client.Id]-StreamThread-[sequence-number]
      • exactly-once: [client.Id]-StreamThread-[sequence-number]-[taskId]
      • exactly-once-beta: [client.Id]-StreamThread-[sequence-number]

      [client.Id] is either set via Streams configuration parameter client.id or defaults to [application.id]-[processId] ([processId] is a random UUID).

      Notable changes in 0.10.2.1

      Parameter updates in StreamsConfig:

      • The default config values of embedded producer’s retries and consumer’s max.poll.interval.ms have been changed to improve the resiliency of a Kafka Streams application

      Streams API changes in 0.10.2.0

      New methods in KafkaStreams:

      • set a listener to react on application state change via setStateListener(StateListener listener)
      • retrieve the current application state via state()
      • retrieve the global metrics registry via metrics()
      • apply a timeout when closing an application via close(long timeout, TimeUnit timeUnit)
      • specify a custom indent when retrieving Kafka Streams information via toString(String indent)

      Parameter updates in StreamsConfig:

• parameter zookeeper.connect was deprecated; a Kafka Streams application no longer interacts with ZooKeeper for topic management but uses the new broker admin protocol (cf. KIP-4, Section "Topic Admin Schema")
      • added many new parameters for metrics, security, and client configurations

      Changes in StreamsMetrics interface:

      • removed methods: addLatencySensor()
      • added methods: addLatencyAndThroughputSensor(), addThroughputSensor(), recordThroughput(), addSensor(), removeSensor()

      New methods in TopologyBuilder:

• added overloads for addSource() that allow defining an auto.offset.reset policy per source node
      • added methods addGlobalStore() to add global StateStores

      New methods in KStreamBuilder:

• added overloads for stream() and table() that allow defining an auto.offset.reset policy per input stream/table
      • added method globalKTable() to create a GlobalKTable

      New joins for KStream:

      • added overloads for join() to join with KTable
      • added overloads for join() and leftJoin() to join with GlobalKTable
• note, join semantics in 0.10.2 were improved and thus you might see different results compared to 0.10.0.x and 0.10.1.x (cf. Kafka Streams Join Semantics in the Apache Kafka wiki)

      Aligned null-key handling for KTable joins:

      • like all other KTable operations, KTable-KTable joins do not throw an exception on null key records anymore, but drop those records silently

      New window type Session Windows :

      • added class SessionWindows to specify session windows
      • added overloads for KGroupedStream methods count(), reduce(), and aggregate() to allow session window aggregations

      Changes to TimestampExtractor:

      • method extract() has a second parameter now
      • new default timestamp extractor class FailOnInvalidTimestamp (it gives the same behavior as old (and removed) default extractor ConsumerRecordTimestampExtractor)
• new alternative timestamp extractor classes LogAndSkipOnInvalidTimestamp and UsePreviousTimeOnInvalidTimestamp

      Relaxed type constraints of many DSL interfaces, classes, and methods (cf. KIP-100).

      Streams API changes in 0.10.1.0

      Stream grouping and aggregation split into two methods:

      • old: KStream #aggregateByKey(), #reduceByKey(), and #countByKey()
      • new: KStream#groupByKey() plus KGroupedStream #aggregate(), #reduce(), and #count()
      • Example: stream.countByKey() changes to stream.groupByKey().count()

      Auto Repartitioning:

      • a call to through() after a key-changing operator and before an aggregation/join is no longer required
• Example: stream.selectKey(…).through(…).countByKey() changes to stream.selectKey(…).groupByKey().count()

      TopologyBuilder:

      • methods #sourceTopics(String applicationId) and #topicGroups(String applicationId) got simplified to #sourceTopics() and #topicGroups()

      DSL: new parameter to specify state store names:

      • The new Interactive Queries feature requires to specify a store name for all source KTables and window aggregation result KTables (previous parameter “operator/window name” is now the storeName)
• KStreamBuilder#table(String topic) changes to #table(String topic, String storeName)
      • KTable#through(String topic) changes to #through(String topic, String storeName)
      • KGroupedStream #aggregate(), #reduce(), and #count() require additional parameter “String storeName”
      • Example: stream.countByKey(TimeWindows.of(“windowName”, 1000)) changes to stream.groupByKey().count(TimeWindows.of(1000), “countStoreName”)

      Windowing:

      • Windows are not named anymore: TimeWindows.of(“name”, 1000) changes to TimeWindows.of(1000) (cf. DSL: new parameter to specify state store names)
      • JoinWindows has no default size anymore: JoinWindows.of(“name”).within(1000) changes to JoinWindows.of(1000)

      Streams API broker compatibility

The following table shows which versions of the Kafka Streams API are compatible with various Kafka broker versions. For Kafka Streams versions older than 2.4.x, please check the 3.9 upgrade document.

Kafka Streams API version | Kafka Broker versions 2.1.x through 4.0.x
2.4.x and 2.5.x | compatible
2.6.x through 4.0.x | compatible; enabling exactly-once v2 requires broker version 2.5.x or higher


      9.7 - Streams Developer Guide

      9.7.1 - Writing a Streams Application

      Writing a Streams Application

      Table of Contents

      • Libraries and Maven artifacts
      • Using Kafka Streams within your application code
      • Testing a Streams application

      Any Java or Scala application that makes use of the Kafka Streams library is considered a Kafka Streams application. The computational logic of a Kafka Streams application is defined as a processor topology, which is a graph of stream processors (nodes) and streams (edges).

      You can define the processor topology with the Kafka Streams APIs:

Kafka Streams DSL
A high-level API that provides the most common data transformation operations such as map, filter, join, and aggregations out of the box. The DSL is the recommended starting point for developers new to Kafka Streams, and should cover many use cases and stream processing needs. If you're writing a Scala application then you can use the Kafka Streams DSL for Scala library which removes much of the Java/Scala interoperability boilerplate as opposed to working directly with the Java DSL.

Processor API
A low-level API that lets you add and connect processors as well as interact directly with state stores. The Processor API provides you with even more flexibility than the DSL but at the expense of requiring more manual work on the side of the application developer (e.g., more lines of code).

      Libraries and Maven artifacts

      This section lists the Kafka Streams related libraries that are available for writing your Kafka Streams applications.

      You can define dependencies on the following libraries for your Kafka Streams applications.

Group ID | Artifact ID | Version | Description
org.apache.kafka | kafka-streams | 4.0.0 | (Required) Base library for Kafka Streams.
org.apache.kafka | kafka-clients | 4.0.0 | (Required) Kafka client library. Contains built-in serializers/deserializers.
org.apache.kafka | kafka-streams-scala | 4.0.0 | (Optional) Kafka Streams DSL for Scala library to write Scala Kafka Streams applications. When not using SBT you will need to suffix the artifact ID with the correct version of Scala your application is using (_2.12, _2.13).

      Tip

      See the section Data Types and Serialization for more information about Serializers/Deserializers.

      Example pom.xml snippet when using Maven:

      <dependency>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka-streams</artifactId>
          <version>4.0.0</version>
      </dependency>
      <dependency>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka-clients</artifactId>
          <version>4.0.0</version>
      </dependency>
<dependency>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka-streams-scala_2.13</artifactId>
          <version>4.0.0</version>
      </dependency>
      

      Using Kafka Streams within your application code

      You can call Kafka Streams from anywhere in your application code, but usually these calls are made within the main() method of your application, or some variant thereof. The basic elements of defining a processing topology within your application are described below.

      First, you must create an instance of KafkaStreams.

• The first argument of the KafkaStreams constructor is the topology (obtained via StreamsBuilder#build() when using the DSL, or constructed directly as a Topology when using the Processor API) that defines the processing logic.
      • The second argument is an instance of java.util.Properties, which defines the configuration for this specific topology.

      Code example:

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
      
      // Use the builders to define the actual processing topology, e.g. to specify
      // from which input topics to read, which stream operations (filter, map, etc.)
      // should be called, and so on.  We will cover this in detail in the subsequent
      // sections of this Developer Guide.
      
      StreamsBuilder builder = ...;  // when using the DSL
      Topology topology = builder.build();
      //
      // OR
      //
      Topology topology = ...; // when using the Processor API
      
      // Use the configuration to tell your application where the Kafka cluster is,
      // which Serializers/Deserializers to use by default, to specify security settings,
      // and so on.
      Properties props = ...;
      
      KafkaStreams streams = new KafkaStreams(topology, props);
      

      At this point, internal structures are initialized, but the processing is not started yet. You have to explicitly start the Kafka Streams thread by calling the KafkaStreams#start() method:

      // Start the Kafka Streams threads
      streams.start();
      

      If there are other instances of this stream processing application running elsewhere (e.g., on another machine), Kafka Streams transparently re-assigns tasks from the existing instances to the new instance that you just started. For more information, see Stream Partitions and Tasks and Threading Model.

To catch any unexpected exceptions, you can set a java.lang.Thread.UncaughtExceptionHandler before you start the application. This handler is called whenever a stream thread is terminated by an unexpected exception:

      streams.setUncaughtExceptionHandler((Thread thread, Throwable throwable) -> {
        // here you should examine the throwable/exception and perform an appropriate action!
      });
      

      To stop the application instance, call the KafkaStreams#close() method:

      // Stop the Kafka Streams threads
      streams.close();
      

To allow your application to shut down gracefully in response to SIGTERM, it is recommended that you add a shutdown hook and call KafkaStreams#close.

      Here is a shutdown hook example in Java:

      // Add shutdown hook to stop the Kafka Streams threads.
      // You can optionally provide a timeout to `close`.
      Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
      

      After an application is stopped, Kafka Streams will migrate any tasks that had been running in this instance to available remaining instances.

      Testing a Streams application

Kafka Streams comes with a test-utils module to help you test your application; see the Testing section of this guide for details.
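
As an illustration, a minimal TopologyTestDriver sketch (topic names and serdes are placeholders; builder is assumed to be the StreamsBuilder of the topology under test):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
    TestInputTopic<String, String> input =
        driver.createInputTopic("input-topic", new StringSerializer(), new StringSerializer());
    TestOutputTopic<String, String> output =
        driver.createOutputTopic("output-topic", new StringDeserializer(), new StringDeserializer());

    input.pipeInput("key", "value");
    System.out.println(output.readKeyValue());  // assert on the result of your topology here
}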


      9.7.2 - Configuring a Streams Application

      Configuring a Streams Application

      Kafka and Kafka Streams configuration options must be configured before using Streams. You can configure Kafka Streams by specifying parameters in a java.util.Properties instance.

      1. Create a java.util.Properties instance.

      2. Set the parameters. For example:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

      Properties settings = new Properties();
      // Set a few key parameters
      settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-first-streams-application");
      settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092");
      // Any further settings
      settings.put(... , ...);
      

      Configuration parameter reference

      This section contains the most common Streams configuration parameters. For a full reference, see the Streams Javadocs.

      • Required configuration parameters
        • application.id
        • bootstrap.servers
      • Recommended configuration parameters for resiliency
        • acks
        • replication.factor
        • min.insync.replicas
        • num.standby.replicas
      • Optional configuration parameters
        • acceptable.recovery.lag
        • default.deserialization.exception.handler (deprecated since 4.0)
        • default.key.serde
        • default.production.exception.handler (deprecated since 4.0)
        • default.timestamp.extractor
        • default.value.serde
        • deserialization.exception.handler
        • enable.metrics.push
        • log.summary.interval.ms
        • max.task.idle.ms
        • max.warmup.replicas
        • num.standby.replicas
        • num.stream.threads
        • probing.rebalance.interval.ms
        • processing.exception.handler
        • processing.guarantee
        • processor.wrapper.class
        • production.exception.handler
        • rack.aware.assignment.non_overlap_cost
        • rack.aware.assignment.strategy
        • rack.aware.assignment.tags
        • rack.aware.assignment.traffic_cost
        • replication.factor
        • rocksdb.config.setter
        • state.dir
        • task.assignor.class
        • topology.optimization
        • windowed.inner.class.serde
      • Kafka consumers and producer configuration parameters
        • Naming
        • Default Values
        • Parameters controlled by Kafka Streams
        • enable.auto.commit

      Required configuration parameters

      Here are the required Streams configuration parameters.

Parameter Name | Importance | Description | Default Value
application.id | Required | An identifier for the stream processing application. Must be unique within the Kafka cluster. | None
bootstrap.servers | Required | A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. | None

      application.id

      (Required) The application ID. Each stream processing application must have a unique ID. The same ID must be given to all instances of the application. It is recommended to use only alphanumeric characters, . (dot), - (hyphen), and _ (underscore). Examples: "hello_world", "hello_world-v1.0.0"

      This ID is used in the following places to isolate resources used by the application from others:

      • As the default Kafka consumer and producer client.id prefix
      • As the Kafka consumer group.id for coordination
      • As the name of the subdirectory in the state directory (cf. state.dir)
      • As the prefix of internal Kafka topic names

      Tip: When an application is updated, the application.id should be changed unless you want to reuse the existing data in internal topics and state stores. For example, you could embed the version information within application.id, as my-app-v1.0.0 and my-app-v1.0.2.

      bootstrap.servers

      (Required) The Kafka bootstrap servers. This is the same setting that is used by the underlying producer and consumer clients to connect to the Kafka cluster. Example: "kafka-broker1:9092,kafka-broker2:9092".

      Recommended configuration parameters for resiliency

There are several Kafka and Kafka Streams configuration options that need to be configured explicitly for resiliency in the face of broker failures:

Parameter Name | Corresponding Client | Default value | Consider setting to
acks | Producer (for version <= 2.8) | acks="1" | acks="all"
replication.factor (for broker version 2.3 or older) | Streams | -1 | 3 (broker 2.4+: ensure broker config default.replication.factor=3)
min.insync.replicas | Broker | 1 | 2
num.standby.replicas | Streams | 0 | 1

Increasing the replication factor to 3 ensures that the internal Kafka Streams topics can tolerate up to 2 broker failures. The tradeoff of moving from the default values to the recommended ones is that some performance and more storage space (3x with a replication factor of 3) are sacrificed for more resiliency.

      acks

      The number of acknowledgments that the leader must have received before considering a request complete. This controls the durability of records that are sent. The possible values are:

      • acks="0" The producer does not wait for acknowledgment from the server and the record is immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the producer won’t generally know of any failures. The offset returned for each record will always be set to -1.
      • acks="1" The leader writes the record to its local log and responds without waiting for full acknowledgement from all followers. If the leader immediately fails after acknowledging the record, but before the followers have replicated it, then the record will be lost.
      • acks="all" (default since 3.0 release) The leader waits for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost if there is at least one in-sync replica alive. This is the strongest available guarantee.

      For more information, see the Kafka Producer documentation.

      replication.factor

      See the description here.

      min.insync.replicas

      The minimum number of in-sync replicas available for replication if the producer is configured with acks="all" (see topic configs).

      num.standby.replicas

      See the description here.

      Properties streamsSettings = new Properties();
      // for broker version 2.3 or older
      //streamsSettings.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
      // for version 2.8 or older
      //streamsSettings.put(StreamsConfig.producerPrefix(ProducerConfig.ACKS_CONFIG), "all");
      streamsSettings.put(StreamsConfig.topicPrefix(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG), 2);
      streamsSettings.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
      

      Optional configuration parameters

Here are the optional Streams configuration parameters, sorted by level of importance:

      • High: These are parameters with a default value which is most likely not a good fit for production use. It’s highly recommended to revisit these parameters for production usage.
      • Medium: The default values of these parameters should work for production for many cases, but it’s not uncommon that they are changed, for example to tune performance.
      • Low: It should rarely be necessary to change the value for these parameters. It’s only recommended to change them if there is a very specific issue you want to address.
      Parameter Name | Importance | Description | Default Value
      acceptable.recovery.lag | Medium | The maximum acceptable lag (number of offsets to catch up) for an instance to be considered caught-up and ready for the active task. | 10000
      application.server | Low | A host:port pair pointing to an embedded user defined endpoint that can be used for discovering the locations of state stores within a single Kafka Streams application. The value of this must be different for each instance of the application. | the empty string
      buffered.records.per.partition | Low | The maximum number of records to buffer per partition. | 1000
      statestore.cache.max.bytes | Medium | Maximum number of memory bytes to be used for record caches across all threads. | 10485760
      cache.max.bytes.buffering (Deprecated. Use statestore.cache.max.bytes instead.) | Medium | Maximum number of memory bytes to be used for record caches across all threads. | 10485760
      client.id | Medium | An ID string to pass to the server when making requests. (This setting is passed to the consumer/producer clients used internally by Kafka Streams.) | the empty string
      commit.interval.ms | Low | The frequency in milliseconds with which to save the position (offsets in source topics) of tasks. | 30000 (30 seconds)
      default.deserialization.exception.handler (Deprecated. Use deserialization.exception.handler instead.) | Medium | Exception handling class that implements the DeserializationExceptionHandler interface. | LogAndContinueExceptionHandler
      default.key.serde | Medium | Default serializer/deserializer class for record keys, implements the Serde interface. Must be set by the user or all serdes must be passed in explicitly (see also default.value.serde). | null
      default.production.exception.handler (Deprecated. Use production.exception.handler instead.) | Medium | Exception handling class that implements the ProductionExceptionHandler interface. | DefaultProductionExceptionHandler
      default.timestamp.extractor | Medium | Timestamp extractor class that implements the TimestampExtractor interface. See Timestamp Extractor. | FailOnInvalidTimestamp
      default.value.serde | Medium | Default serializer/deserializer class for record values, implements the Serde interface. Must be set by the user or all serdes must be passed in explicitly (see also default.key.serde). | null
      default.dsl.store | Low | [DEPRECATED] The default state store type used by DSL operators. Deprecated in favor of dsl.store.suppliers.class. | "ROCKS_DB"
      deserialization.exception.handler | Medium | Exception handling class that implements the DeserializationExceptionHandler interface. | LogAndContinueExceptionHandler
      dsl.store.suppliers.class | Low | Defines a default state store implementation to be used by any stateful DSL operator that has not explicitly configured the store implementation type. Must implement the org.apache.kafka.streams.state.DslStoreSuppliers interface. | BuiltInDslStoreSuppliers.RocksDBDslStoreSuppliers
      log.summary.interval.ms | Low | The output interval in milliseconds for logging summary information (disabled if negative). | 120000 (2 minutes)
      enable.metrics.push | Low | Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client. | true
      max.task.idle.ms | Medium | This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in milliseconds a stream task will stay idle when it is fully caught up on some (but not all) input partitions to wait for producers to send additional records and avoid potential out-of-order record processing across multiple input streams. The default (zero) does not wait for producers to send more records, but it does wait to fetch data that is already present on the brokers. This default means that for records that are already present on the brokers, Streams will process them in timestamp order. Set to -1 to disable idling entirely and process any locally available data, even though doing so may produce out-of-order processing. | 0
      max.warmup.replicas | Medium | The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once. | 2
      metric.reporters | Low | A list of classes to use as metrics reporters. | the empty list
      metrics.num.samples | Low | The number of samples maintained to compute metrics. | 2
      metrics.recording.level | Low | The highest recording level for metrics. | INFO
      metrics.sample.window.ms | Low | The window of time in milliseconds a metrics sample is computed over. | 30000 (30 seconds)
      num.standby.replicas | High | The number of standby replicas for each task. | 0
      num.stream.threads | Medium | The number of threads to execute stream processing. | 1
      probing.rebalance.interval.ms | Low | The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have sufficiently caught up. | 600000 (10 minutes)
      processing.exception.handler | Medium | Exception handling class that implements the ProcessingExceptionHandler interface. | LogAndFailProcessingExceptionHandler
      processing.guarantee | Medium | The processing mode. Can be either "at_least_once" or "exactly_once_v2" (for EOS version 2, requires broker version 2.5+). See Processing Guarantee. | "at_least_once"
      processor.wrapper.class | Medium | A class or class name implementing the ProcessorWrapper interface. Must be passed in when creating the topology, and will not be applied unless passed in to the appropriate constructor as a TopologyConfig. You should use the StreamsBuilder#new(TopologyConfig) constructor for DSL applications, and the Topology#new(TopologyConfig) constructor for PAPI applications.
      production.exception.handler | Medium | Exception handling class that implements the ProductionExceptionHandler interface. | DefaultProductionExceptionHandler
      poll.ms | Low | The amount of time in milliseconds to block waiting for input. | 100
      rack.aware.assignment.strategy | Low | The strategy used for rack aware assignment. Acceptable values are "none" (default), "min_traffic", and "balance_subtopology". See Rack Aware Assignment Strategy. | "none"
      rack.aware.assignment.tags | Medium | List of tag keys used to distribute standby replicas across Kafka Streams clients. When configured, Kafka Streams will make a best-effort to distribute the standby tasks over clients with different tag values. See Rack Aware Assignment Tags. | the empty list
      rack.aware.assignment.non_overlap_cost | Low | Cost associated with moving tasks from existing assignment. See Rack Aware Assignment Non-Overlap-Cost. | null
      rack.aware.assignment.traffic_cost | Low | Cost associated with cross rack traffic. See Rack Aware Assignment Traffic-Cost. | null
      replication.factor | Medium | The replication factor for changelog topics and repartition topics created by the application. The default of -1 (meaning: use broker default replication factor) requires broker version 2.4 or newer. | -1
      retry.backoff.ms | Low | The amount of time in milliseconds, before a request is retried. | 100
      rocksdb.config.setter | Medium | The RocksDB configuration. | null
      state.cleanup.delay.ms | Low | The amount of time in milliseconds to wait before deleting state when a partition has migrated. | 600000 (10 minutes)
      state.dir | High | Directory location for state stores. | /${java.io.tmpdir}/kafka-streams
      task.assignor.class | Medium | A task assignor class or class name implementing the TaskAssignor interface. | The high-availability task assignor.
      task.timeout.ms | Medium | The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a timeout of 0 ms, a task would raise an error for the first internal error. For any timeout larger than 0 ms, a task will retry at least once before an error is raised. | 300000 (5 minutes)
      topology.optimization | Medium | A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: StreamsConfig.NO_OPTIMIZATION (none), StreamsConfig.OPTIMIZE (all) or a comma separated list of specific optimizations: StreamsConfig.REUSE_KTABLE_SOURCE_TOPICS (reuse.ktable.source.topics), StreamsConfig.MERGE_REPARTITION_TOPICS (merge.repartition.topics), StreamsConfig.SINGLE_STORE_SELF_JOIN (single.store.self.join). | "NO_OPTIMIZATION"
      upgrade.from | Medium | The version you are upgrading from during a rolling upgrade. See Upgrade From. | null
      windowstore.changelog.additional.retention.ms | Low | Added to a window’s maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. | 86400000 (1 day)
      window.size.ms | Low | Sets window size for the deserializer in order to calculate window end times. | null

      acceptable.recovery.lag

      The maximum acceptable lag (total number of offsets to catch up from the changelog) for an instance to be considered caught-up and able to receive an active task. Streams will only assign stateful active tasks to instances whose state stores are within the acceptable recovery lag, if any exist, and assign warmup replicas to restore state in the background for instances that are not yet caught up. Should correspond to a recovery time of well under a minute for a given workload. Must be at least 0.

      Note: if you set this to Long.MAX_VALUE it effectively disables the warmup replicas and task high availability, allowing Streams to immediately produce a balanced assignment and migrate tasks to a new instance without first warming them up.
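
      As a minimal sketch, the threshold can be tuned via the Streams configuration; the value of 5000 offsets below is purely illustrative, not a recommendation:

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;

      Properties streamsSettings = new Properties();
      // only hand active stateful tasks to instances within 5000 offsets of the changelog head;
      // instances further behind get warmup replicas instead (illustrative value)
      streamsSettings.put(StreamsConfig.ACCEPTABLE_RECOVERY_LAG_CONFIG, 5000L);
      // streamsSettings.put(StreamsConfig.ACCEPTABLE_RECOVERY_LAG_CONFIG, Long.MAX_VALUE); // effectively disables warmups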

      deserialization.exception.handler (deprecated: default.deserialization.exception.handler)

      The deserialization exception handler allows you to manage exceptions triggered by records that fail to deserialize. This can be caused by corrupt data, incorrect serialization logic, or unhandled record types. The implemented exception handler needs to return FAIL or CONTINUE depending on the record and the exception thrown. Returning FAIL will signal that Streams should shut down and CONTINUE will signal that Streams should ignore the issue and continue processing. The following library built-in exception handlers are available:

      • LogAndContinueExceptionHandler: This handler logs the deserialization exception and then signals the processing pipeline to continue processing more records. This log-and-skip strategy allows Kafka Streams to make progress instead of failing if there are records that fail to deserialize.
      • LogAndFailExceptionHandler. This handler logs the deserialization exception and then signals the processing pipeline to stop processing more records.

      You can also provide your own customized exception handler besides the library provided ones to meet your needs. For example, you can choose to forward corrupt records into a quarantine topic (think: a “dead letter queue”) for further processing. To do this, use the Producer API to write a corrupted record directly to the quarantine topic. To be more concrete, you can create a separate KafkaProducer object outside the Streams client, and pass in this object as well as the dead letter queue topic name into the Properties map, which then can be retrieved from the configure function call. The drawback of this approach is that “manual” writes are side effects that are invisible to the Kafka Streams runtime library, so they do not benefit from the end-to-end processing guarantees of the Streams API:

      import java.util.Map;
      import java.util.concurrent.ExecutionException;

      import org.apache.kafka.clients.consumer.ConsumerRecord;
      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
      import org.apache.kafka.streams.errors.ErrorHandlerContext;
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      public class SendToDeadLetterQueueExceptionHandler implements DeserializationExceptionHandler {
          private static final Logger log = LoggerFactory.getLogger(SendToDeadLetterQueueExceptionHandler.class);

          KafkaProducer<byte[], byte[]> dlqProducer;
          String dlqTopic;

          @Override
          public DeserializationHandlerResponse handle(final ErrorHandlerContext context,
                                                       final ConsumerRecord<byte[], byte[]> record,
                                                       final Exception exception) {

              log.warn("Exception caught during Deserialization, sending to the dead queue topic; " +
                  "taskId: {}, topic: {}, partition: {}, offset: {}",
                  context.taskId(), record.topic(), record.partition(), record.offset(),
                  exception);

              try {
                  // forward the raw bytes to the dead letter queue topic and wait for the write to complete
                  dlqProducer.send(new ProducerRecord<>(dlqTopic, null, record.timestamp(), record.key(), record.value(), record.headers())).get();
              } catch (final InterruptedException | ExecutionException e) {
                  throw new RuntimeException("Failed to write record to the dead letter queue topic " + dlqTopic, e);
              }

              return DeserializationHandlerResponse.CONTINUE;
          }

          @Override
          public void configure(final Map<String, ?> configs) {
              dlqProducer = .. // get a producer from the configs map
              dlqTopic = .. // get the topic name from the configs map
          }
      }
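
      To activate the handler, register it in the Streams configuration. The extra entry used to carry the dead letter queue topic name into configure() is a made-up key for this sketch, not a Kafka Streams config:

      import java.util.Properties;

      Properties streamsSettings = new Properties();
      // register the custom handler under the config name documented above
      streamsSettings.put("deserialization.exception.handler", SendToDeadLetterQueueExceptionHandler.class);
      // any additional entries are passed through to the handler's configure() method;
      // the key below is hypothetical and only read by the custom handler
      streamsSettings.put("dead.letter.queue.topic", "my-dlq-topic");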
      

      production.exception.handler (deprecated: default.production.exception.handler)

      The production exception handler allows you to manage exceptions triggered when trying to interact with a broker such as attempting to produce a record that is too large. By default, Kafka provides and uses the DefaultProductionExceptionHandler that always fails when these exceptions occur.

      An exception handler can return FAIL, CONTINUE, or RETRY depending on the record and the exception thrown. Returning FAIL will signal that Streams should shut down. CONTINUE will signal that Streams should ignore the issue and continue processing. For RetriableException the handler may return RETRY to tell the runtime to retry sending the failed record (Note: If RETRY is returned for a non-RetriableException it will be treated as FAIL.) If you want to provide an exception handler that always ignores records that are too large, you could implement something like the following:

      import java.util.Map;
      import java.util.Properties;

      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.common.errors.RecordTooLargeException;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.errors.ErrorHandlerContext;
      import org.apache.kafka.streams.errors.ProductionExceptionHandler;
      import org.apache.kafka.streams.errors.ProductionExceptionHandler.ProductionExceptionHandlerResponse;

      public class IgnoreRecordTooLargeHandler implements ProductionExceptionHandler {
          @Override
          public void configure(final Map<String, ?> configs) {}

          @Override
          public ProductionExceptionHandlerResponse handle(final ErrorHandlerContext context,
                                                           final ProducerRecord<byte[], byte[]> record,
                                                           final Exception exception) {
              // skip over records that are too large, fail on anything else
              if (exception instanceof RecordTooLargeException) {
                  return ProductionExceptionHandlerResponse.CONTINUE;
              } else {
                  return ProductionExceptionHandlerResponse.FAIL;
              }
          }
      }

      Properties settings = new Properties();

      // other various kafka streams settings, e.g. bootstrap servers, application id, etc

      settings.put(StreamsConfig.PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
                   IgnoreRecordTooLargeHandler.class);
      

      default.timestamp.extractor

      A timestamp extractor pulls a timestamp from an instance of ConsumerRecord. Timestamps are used to control the progress of streams.

      The default extractor is FailOnInvalidTimestamp. This extractor retrieves built-in timestamps that are automatically embedded into Kafka messages by the Kafka producer client since Kafka version 0.10. Depending on the setting of Kafka’s server-side log.message.timestamp.type broker and message.timestamp.type topic parameters, this extractor provides you with:

      • event-time processing semantics if log.message.timestamp.type is set to CreateTime aka “producer time” (which is the default). This represents the time when a Kafka producer sent the original message. If you use Kafka’s official producer client, the timestamp represents milliseconds since the epoch.
      • ingestion-time processing semantics if log.message.timestamp.type is set to LogAppendTime aka “broker time”. This represents the time when the Kafka broker received the original message, in milliseconds since the epoch.

      The FailOnInvalidTimestamp extractor throws an exception if a record contains an invalid (i.e. negative) built-in timestamp, because Kafka Streams would otherwise not process this record but silently drop it. Invalid built-in timestamps can occur for various reasons: for example, if you consume a topic that is written to by pre-0.10 Kafka producer clients or by third-party producer clients that don’t support the new Kafka 0.10 message format yet; another situation where this may happen is after upgrading your Kafka cluster from 0.9 to 0.10, where all the data that was generated with 0.9 does not include the 0.10 message timestamps.

      If you have data with invalid timestamps and want to process it, then there are two alternative extractors available. Both work on built-in timestamps, but handle invalid timestamps differently.

      • LogAndSkipOnInvalidTimestamp: This extractor logs a warn message and returns the invalid timestamp to Kafka Streams, which will not process but silently drop the record. This log-and-skip strategy allows Kafka Streams to make progress instead of failing if there are records with an invalid built-in timestamp in your input data.
      • UsePartitionTimeOnInvalidTimestamp. This extractor returns the record’s built-in timestamp if it is valid (i.e. not negative). If the record does not have a valid built-in timestamp, the extractor returns the previously extracted valid timestamp from a record of the same topic partition as the current record as a timestamp estimation. In case that no timestamp can be estimated, it throws an exception.

      Another built-in extractor is WallclockTimestampExtractor. This extractor does not actually “extract” a timestamp from the consumed record but rather returns the current time in milliseconds from the system clock (think: System.currentTimeMillis()), which effectively means Streams will operate on the basis of the so-called processing-time of events.

      You can also provide your own timestamp extractors, for instance to retrieve timestamps embedded in the payload of messages. If you cannot extract a valid timestamp, you can either throw an exception, return a negative timestamp, or estimate a timestamp. Returning a negative timestamp will result in data loss - the corresponding record will not be processed but silently dropped. If you want to estimate a new timestamp, you can use the value provided via previousTimestamp (i.e., a Kafka Streams timestamp estimation). Here is an example of a custom TimestampExtractor implementation:

      import org.apache.kafka.clients.consumer.ConsumerRecord;
      import org.apache.kafka.streams.processor.TimestampExtractor;
      
      // Extracts the embedded timestamp of a record (giving you "event-time" semantics).
      public class MyEventTimeExtractor implements TimestampExtractor {
      
        @Override
        public long extract(final ConsumerRecord<Object, Object> record, final long previousTimestamp) {
          // `Foo` is your own custom class, which we assume has a method that returns
          // the embedded timestamp (milliseconds since midnight, January 1, 1970 UTC).
          long timestamp = -1;
          final Foo myPojo = (Foo) record.value();
          if (myPojo != null) {
            timestamp = myPojo.getTimestampInMillis();
          }
          if (timestamp < 0) {
            // Invalid timestamp!  Attempt to estimate a new timestamp,
            // otherwise fall back to wall-clock time (processing-time).
            if (previousTimestamp >= 0) {
              return previousTimestamp;
            } else {
              return System.currentTimeMillis();
            }
          }
          return timestamp;
        }
      
      }
      

      You would then define the custom timestamp extractor in your Streams configuration as follows:

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;
      
      Properties streamsConfiguration = new Properties();
      streamsConfiguration.put(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG, MyEventTimeExtractor.class);
      

      default.key.serde

      The default Serializer/Deserializer class for record keys, null unless set by user. Serialization and deserialization in Kafka Streams happens whenever data needs to be materialized, for example:

      • Whenever data is read from or written to a Kafka topic (e.g., via the StreamsBuilder#stream() and KStream#to() methods).
      • Whenever data is read from or written to a state store.

      This is discussed in more detail in Data types and serialization.

      default.value.serde

      The default Serializer/Deserializer class for record values, null unless set by user. Serialization and deserialization in Kafka Streams happens whenever data needs to be materialized, for example:

      • Whenever data is read from or written to a Kafka topic (e.g., via the StreamsBuilder#stream() and KStream#to() methods).
      • Whenever data is read from or written to a state store.

      This is discussed in more detail in Data types and serialization.
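
      As a minimal sketch, the default Serdes can be set through the Streams configuration; String keys and Long values are just example choices here:

      import java.util.Properties;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsConfig;

      Properties streamsSettings = new Properties();
      // used whenever an operation does not specify Serdes explicitly (example types only)
      streamsSettings.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      streamsSettings.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());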

      rack.aware.assignment.non_overlap_cost

      This configuration sets the cost of moving a task from the original assignment computed either by StickyTaskAssignor or HighAvailabilityTaskAssignor. Together with rack.aware.assignment.traffic_cost, they control whether the optimizer favors minimizing cross rack traffic or minimizing the movement of tasks in the existing assignment. If this config is set to a larger value than rack.aware.assignment.traffic_cost, the optimizer will try to maintain the existing assignment computed by the task assignor. Note that the optimizer takes the ratio of these two configs into account when deciding whether to favor maintaining the existing assignment or minimizing traffic cost. For example, setting rack.aware.assignment.non_overlap_cost to 10 and rack.aware.assignment.traffic_cost to 1 is more likely to maintain the existing assignment than setting rack.aware.assignment.non_overlap_cost to 100 and rack.aware.assignment.traffic_cost to 50.

      The default value is null which means default non_overlap_cost in different assignors will be used. In StickyTaskAssignor, it has a default value of 10 and rack.aware.assignment.traffic_cost has a default value of 1, which means maintaining stickiness is preferred in StickyTaskAssignor. In HighAvailabilityTaskAssignor, it has a default value of 1 and rack.aware.assignment.traffic_cost has a default value of 10, which means minimizing cross rack traffic is preferred in HighAvailabilityTaskAssignor.

      rack.aware.assignment.strategy

      This configuration sets the strategy Kafka Streams uses for rack aware task assignment so that cross traffic from broker to client can be reduced. This config will only take effect when broker.rack is set on the brokers and client.rack is set on the Kafka Streams side. There are three settings for this config:

      • none. This is the default value, which means rack aware task assignment will be disabled.
      • min_traffic. This setting means that the rack aware task assigner will compute an assignment which tries to minimize cross rack traffic.
      • balance_subtopology. This setting means that the rack aware task assigner will compute an assignment which will try to balance tasks from the same subtopology across different clients and minimize cross rack traffic on top of that.

      This config can be used together with rack.aware.assignment.non_overlap_cost and rack.aware.assignment.traffic_cost to balance reducing cross rack traffic and maintaining the existing assignment.
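
      A minimal sketch of enabling rack awareness from the Streams side (the rack id and cost values below are illustrative assumptions):

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;

      Properties streamsSettings = new Properties();
      // client.rack must be set on the Streams side (and broker.rack on the brokers)
      streamsSettings.put(StreamsConfig.consumerPrefix("client.rack"), "eu-central-1a");
      streamsSettings.put("rack.aware.assignment.strategy", "min_traffic");
      // optionally bias the optimizer towards keeping the existing assignment (illustrative values)
      streamsSettings.put("rack.aware.assignment.non_overlap_cost", 10);
      streamsSettings.put("rack.aware.assignment.traffic_cost", 1);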

      rack.aware.assignment.tags

      This configuration sets a list of tag keys used to distribute standby replicas across Kafka Streams clients. When configured, Kafka Streams will make a best-effort to distribute the standby tasks over clients with different tag values.

      Tags for the Kafka Streams clients can be set via client.tag. prefix. Example:

      Client-1                                   | Client-2
      _______________________________________________________________________
      client.tag.zone: eu-central-1a             | client.tag.zone: eu-central-1b
      client.tag.cluster: k8s-cluster1           | client.tag.cluster: k8s-cluster1
      rack.aware.assignment.tags: zone,cluster   | rack.aware.assignment.tags: zone,cluster
      
      
      Client-3                                   | Client-4
      _______________________________________________________________________
      client.tag.zone: eu-central-1a             | client.tag.zone: eu-central-1b
      client.tag.cluster: k8s-cluster2           | client.tag.cluster: k8s-cluster2
      rack.aware.assignment.tags: zone,cluster   | rack.aware.assignment.tags: zone,cluster
      

      In the above example, we have four Kafka Streams clients across two zones (eu-central-1a, eu-central-1b) and across two clusters (k8s-cluster1, k8s-cluster2). For an active task located on Client-1, Kafka Streams will allocate a standby task on Client-4, since Client-4 has a different zone and a different cluster than Client-1.
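
      For reference, the configuration of Client-1 from the example above could look roughly like this in code (a sketch; the tag values are taken from the table):

      import java.util.Properties;

      Properties streamsSettings = new Properties();
      // tags describing where this instance (Client-1 above) runs
      streamsSettings.put("client.tag.zone", "eu-central-1a");
      streamsSettings.put("client.tag.cluster", "k8s-cluster1");
      // distribute standby tasks over clients whose "zone" and "cluster" tags differ
      streamsSettings.put("rack.aware.assignment.tags", "zone,cluster");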

      rack.aware.assignment.traffic_cost

      This configuration sets the cost of cross rack traffic. Together with rack.aware.assignment.non_overlap_cost, they control whether the optimizer favors minimizing cross rack traffic or minimizing the movement of tasks in the existing assignment. If this config is set to a larger value than rack.aware.assignment.non_overlap_cost, the optimizer will try to compute an assignment which minimizes cross rack traffic. Note that the optimizer takes the ratio of these two configs into account when deciding whether to favor maintaining the existing assignment or minimizing traffic cost. For example, setting rack.aware.assignment.traffic_cost to 10 and rack.aware.assignment.non_overlap_cost to 1 is more likely to minimize cross rack traffic than setting rack.aware.assignment.traffic_cost to 100 and rack.aware.assignment.non_overlap_cost to 50.

      The default value is null which means default traffic cost in different assignors will be used. In StickyTaskAssignor, it has a default value of 1 and rack.aware.assignment.non_overlap_cost has a default value of 10. In HighAvailabilityTaskAssignor, it has a default value of 10 and rack.aware.assignment.non_overlap_cost has a default value of 1.

      log.summary.interval.ms

      This configuration controls the output interval for summary information. If the value is greater than or equal to 0, the summary log will be output according to the set time interval; if it is less than 0, summary output is disabled.

      enable.metrics.push

      Kafka Streams metrics can be pushed to the brokers similarly to client metrics. Additionally, Kafka Streams allows you to enable or disable metric pushing for each embedded client individually. However, pushing Kafka Streams metrics requires that enable.metrics.push is enabled on the main consumer and the admin client.

      max.task.idle.ms

      This configuration controls how long Streams will wait to fetch data in order to provide in-order processing semantics.

      When processing a task that has multiple input partitions (as in a join or merge), Streams needs to choose which partition to process the next record from. When all input partitions have locally buffered data, Streams picks the partition whose next record has the lowest timestamp. This has the desirable effect of collating the input partitions in timestamp order, which is generally what you want in a streaming join or merge. However, when Streams does not have any data buffered locally for one of the partitions, it does not know whether the next record for that partition will have a lower or higher timestamp than the remaining partitions’ records.

      There are two cases to consider: either there is data in that partition on the broker that Streams has not fetched yet, or Streams is fully caught up with that partition on the broker, and the producers simply haven’t produced any new records since Streams polled the last batch.

      The default value of 0 causes Streams to delay processing a task when it detects that it has no locally buffered data for a partition, but there is data available on the brokers. Specifically, when there is an empty partition in the local buffer, but Streams has a non-zero lag for that partition. However, as soon as Streams catches up to the broker, it will continue processing, even if there is no data in one of the partitions. That is, it will not wait for new data to be produced. This default is designed to sacrifice some throughput in exchange for intuitively correct join semantics.

      Any config value greater than zero indicates the number of extra milliseconds that Streams will wait if it has a caught-up but empty partition. In other words, this is the amount of time to wait for new data to be produced to the input partitions to ensure in-order processing of data in the event of a slow producer.

      The config value of -1 indicates that Streams will never wait to buffer empty partitions before choosing the next record by timestamp, which achieves maximum throughput at the expense of introducing out-of-order processing.
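
      A minimal sketch of adjusting this timeout (the 5 second value is illustrative only):

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;

      Properties streamsSettings = new Properties();
      // wait up to 5 seconds for caught-up but empty input partitions before processing out of timestamp order
      streamsSettings.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, 5000L);
      // streamsSettings.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, -1L); // never wait, maximize throughput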

      max.warmup.replicas

      The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. Increasing this will allow Streams to warm up more tasks at once, speeding up the time for the reassigned warmups to restore sufficient state for them to be transitioned to active tasks. Must be at least 1.

      Note that one warmup replica corresponds to one Stream Task. Furthermore, note that each warmup task can only be promoted to an active task during a rebalance (normally during a so-called probing rebalance, which occurs at a frequency specified by the probing.rebalance.interval.ms config). This means that the maximum rate at which active tasks can be migrated from one Kafka Streams instance to another instance is given by (max.warmup.replicas / probing.rebalance.interval.ms). For example, with the defaults of max.warmup.replicas=2 and probing.rebalance.interval.ms=600000, at most 2 tasks can be promoted from warmup to active every 10 minutes.

      num.standby.replicas

      The number of standby replicas. Standby replicas are shadow copies of local state stores. Kafka Streams attempts to create the specified number of replicas per store and keep them up to date as long as there are enough instances running. Standby replicas are used to minimize the latency of task failover. A task that was previously running on a failed instance is preferred to restart on an instance that has standby replicas so that the local state store restoration process from its changelog can be minimized. Details about how Kafka Streams makes use of the standby replicas to minimize the cost of resuming tasks on failover can be found in the State section.

      Recommendation: Increase the number of standbys to 1 to get instant fail-over, i.e., high-availability. Increasing the number of standbys requires more client-side storage space. For example, with 1 standby, 2x space is required.

      Note: If you enable n standby tasks, you need to provision n+1 KafkaStreams instances.

      num.stream.threads

      This specifies the number of stream threads in an instance of the Kafka Streams application. The stream processing code runs in these threads. For more information about the Kafka Streams threading model, see Threading Model.

      probing.rebalance.interval.ms

      The maximum time to wait before triggering a rebalance to probe for warmup replicas that have restored enough to be considered caught up. Streams will only assign stateful active tasks to instances that are caught up and within the acceptable.recovery.lag, if any exist. Probing rebalances are used to query the latest total lag of warmup replicas and transition them to active tasks if ready. They will continue to be triggered as long as there are warmup tasks, and until the assignment is balanced. Must be at least 1 minute.

      processing.exception.handler

      The processing exception handler allows you to manage exceptions triggered during the processing of a record. The implemented exception handler needs to return a FAIL or CONTINUE depending on the record and the exception thrown. Returning FAIL will signal that Streams should shut down and CONTINUE will signal that Streams should ignore the issue and continue processing. The following library built-in exception handlers are available:

      • LogAndContinueProcessingExceptionHandler: This handler logs the processing exception and then signals the processing pipeline to continue processing more records. This log-and-skip strategy allows Kafka Streams to make progress instead of failing if there are records that fail to be processed.
      • LogAndFailProcessingExceptionHandler. This handler logs the processing exception and then signals the processing pipeline to stop processing more records.

      You can also provide your own customized exception handler besides the library provided ones to meet your needs. For example, you can choose to forward corrupt records into a quarantine topic (think: a “dead letter queue”) for further processing. To do this, use the Producer API to write a corrupted record directly to the quarantine topic. To be more concrete, you can create a separate KafkaProducer object outside the Streams client, and pass in this object as well as the dead letter queue topic name into the Properties map, which then can be retrieved from the configure function call. The drawback of this approach is that “manual” writes are side effects that are invisible to the Kafka Streams runtime library, so they do not benefit from the end-to-end processing guarantees of the Streams API:

      import java.util.Map;

      import org.apache.kafka.clients.producer.KafkaProducer;
      import org.apache.kafka.clients.producer.ProducerRecord;
      import org.apache.kafka.streams.errors.ErrorHandlerContext;
      import org.apache.kafka.streams.errors.ProcessingExceptionHandler;
      import org.apache.kafka.streams.processor.api.Record;
      import org.slf4j.Logger;
      import org.slf4j.LoggerFactory;

      public class SendToDeadLetterQueueExceptionHandler implements ProcessingExceptionHandler {
          private static final Logger log = LoggerFactory.getLogger(SendToDeadLetterQueueExceptionHandler.class);

          KafkaProducer<byte[], byte[]> dlqProducer;
          String dlqTopic;

          @Override
          public ProcessingHandlerResponse handle(final ErrorHandlerContext context,
                                                  final Record<?, ?> record,
                                                  final Exception exception) {

              log.warn("Exception caught during message processing, sending to the dead queue topic; " +
                  "processor node: {}, taskId: {}, source topic: {}, source partition: {}, source offset: {}",
                  context.processorNodeId(), context.taskId(), context.topic(), context.partition(), context.offset(),
                  exception);

              // forward the failed record to the dead letter queue topic (fire and forget)
              dlqProducer.send(new ProducerRecord<>(dlqTopic, null, record.timestamp(), (byte[]) record.key(), (byte[]) record.value(), record.headers()));

              return ProcessingHandlerResponse.CONTINUE;
          }

          @Override
          public void configure(final Map<String, ?> configs) {
              dlqProducer = .. // get a producer from the configs map
              dlqTopic = .. // get the topic name from the configs map
          }
      }
      

      processing.guarantee

      The processing guarantee that should be used. Possible values are "at_least_once" (default) and "exactly_once_v2" (for EOS version 2). Deprecated config options are "exactly_once" (for EOS alpha), and "exactly_once_beta" (for EOS version 2). Using "exactly_once_v2" (or the deprecated "exactly_once_beta") requires broker version 2.5 or newer, while using the deprecated "exactly_once" requires broker version 0.11.0 or newer. Note that if exactly-once processing is enabled, the default for parameter commit.interval.ms changes to 100ms. Additionally, consumers are configured with isolation.level="read_committed" and producers are configured with enable.idempotence=true by default. Note that by default exactly-once processing requires a cluster of at least three brokers, which is the recommended setting for production. For development, you can change this by adjusting the broker settings transaction.state.log.replication.factor and transaction.state.log.min.isr to the number of brokers you want to use. For more details see Processing Guarantees.

      Recommendation: While it is technically possible to use EOS with any replication factor, using a replication factor lower than 3 effectively voids EOS. Thus it is strongly recommended to use a replication factor of 3 (together with min.insync.replicas=2). This recommendation applies to all topics (i.e. __transaction_state, __consumer_offsets, Kafka Streams internal topics, and user topics).
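
      A minimal sketch of enabling exactly-once processing in the Streams configuration:

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;

      Properties streamsSettings = new Properties();
      // requires brokers 2.5+; for production also use replication factor 3 and min.insync.replicas=2
      streamsSettings.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);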

      processor.wrapper.class

      A class or class name implementing the ProcessorWrapper interface. This feature allows you to wrap any of the processors in the compiled topology, including both custom processor implementations and those created by Streams for DSL operators. This can be useful for logging or tracing implementations since it allows access to the otherwise-hidden processor context for DSL operators, and also allows for injecting additional debugging information to an entire application topology with just a single config.

      IMPORTANT: This MUST be passed in when creating the topology, and will not be applied unless passed in to the appropriate topology-building constructor. You should use the StreamsBuilder#new(TopologyConfig) constructor for DSL applications, and the Topology#new(TopologyConfig) constructor for PAPI applications.
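
      The following sketch shows how such a wrapper might be wired in for a DSL application; MyProcessorWrapper is a hypothetical implementation of the ProcessorWrapper interface, and the application id and broker address are placeholders:

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.TopologyConfig;

      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");
      // MyProcessorWrapper is a hypothetical ProcessorWrapper implementation
      props.put("processor.wrapper.class", MyProcessorWrapper.class);

      // the wrapper only takes effect when the config is part of the TopologyConfig
      // passed to the topology-building constructor
      TopologyConfig topologyConfig = new TopologyConfig(new StreamsConfig(props));
      StreamsBuilder builder = new StreamsBuilder(topologyConfig);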

      replication.factor

      This specifies the replication factor of internal topics that Kafka Streams creates when local states are used or a stream is repartitioned for aggregation. Replication is important for fault tolerance. Without replication even a single broker failure may prevent progress of the stream processing application. It is recommended to use a similar replication factor as source topics.

      Recommendation: Increase the replication factor to 3 to ensure that the internal Kafka Streams topic can tolerate up to 2 broker failures. Note that you will require more storage space as well (3x with the replication factor of 3).

      rocksdb.config.setter

      The RocksDB configuration. Kafka Streams uses RocksDB as the default storage engine for persistent stores. To change the default configuration for RocksDB, you can implement RocksDBConfigSetter and provide your custom class via rocksdb.config.setter.

      Here is an example that adjusts the memory size consumed by RocksDB.

      public static class CustomRocksDBConfig implements RocksDBConfigSetter {
          // This object should be a member variable so it can be closed in RocksDBConfigSetter#close.
          private org.rocksdb.Cache cache = new org.rocksdb.LRUCache(16 * 1024L * 1024L);
      
          @Override
          public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
              // See #1 below.
              BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
              tableConfig.setBlockCache(cache);
              // See #2 below.
              tableConfig.setBlockSize(16 * 1024L);
              // See #3 below.
              tableConfig.setCacheIndexAndFilterBlocks(true);
              options.setTableFormatConfig(tableConfig);
              // See #4 below.
              options.setMaxWriteBufferNumber(2);
          }
      
          @Override
          public void close(final String storeName, final Options options) {
              // See #5 below.
              cache.close();
          }
      }
      
      Properties streamsSettings = new Properties();
      streamsSettings.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);
      

      Notes for example:

      1. BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig(); Get a reference to the existing table config rather than create a new one, so you don’t accidentally overwrite defaults such as the BloomFilter, which is an important optimization.
      2. tableConfig.setBlockSize(16 * 1024L); Modify the default block size per these instructions from the RocksDB GitHub.
      3. tableConfig.setCacheIndexAndFilterBlocks(true); Do not let the index and filter blocks grow unbounded. For more information, see the RocksDB GitHub.
      4. options.setMaxWriteBufferNumber(2); See the advanced options in the RocksDB GitHub.
      5. cache.close(); To avoid memory leaks, you must close any objects you constructed that extend org.rocksdb.RocksObject. See RocksJava docs for more details.

      state.dir

      The state directory. Kafka Streams persists local states under the state directory. Each application has a subdirectory on its hosting machine that is located under the state directory. The name of the subdirectory is the application ID. The state stores associated with the application are created under this subdirectory. When running multiple instances of the same application on a single machine, this path must be unique for each such instance.

      task.assignor.class

      A task assignor class or class name implementing the org.apache.kafka.streams.processor.assignment.TaskAssignor interface. Defaults to the high-availability task assignor. One possible alternative implementation provided in Apache Kafka is the org.apache.kafka.streams.processor.assignment.assignors.StickyTaskAssignor, which was the default task assignor before KIP-441 and minimizes task movement at the cost of stateful task availability. Alternative implementations of the task assignment algorithm can be plugged into the application by implementing a custom TaskAssignor and setting this config to the name of the custom task assignor class.

      topology.optimization

      A configuration telling Kafka Streams if it should optimize the topology and what optimizations to apply. Acceptable values are: StreamsConfig.NO_OPTIMIZATION (none), StreamsConfig.OPTIMIZE (all) or a comma separated list of specific optimizations: StreamsConfig.REUSE_KTABLE_SOURCE_TOPICS (reuse.ktable.source.topics), StreamsConfig.MERGE_REPARTITION_TOPICS (merge.repartition.topics), StreamsConfig.SINGLE_STORE_SELF_JOIN (single.store.self.join).

      We recommend listing specific optimizations in the config for production code so that the structure of your topology will not change unexpectedly during upgrades of the Streams library.

      These optimizations include moving/reducing repartition topics and reusing the source topic as the changelog for source KTables. These optimizations will save on network traffic and storage in Kafka without changing the semantics of your applications. Enabling them is recommended.

      Note that as of 2.3, you need to do two things to enable optimizations. In addition to setting this config to StreamsConfig.OPTIMIZE, you’ll need to pass in your configuration properties when building your topology by using the overloaded StreamsBuilder.build(Properties) method. For example KafkaStreams myStream = new KafkaStreams(streamsBuilder.build(properties), properties).
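
      For example, a sketch that enables only specific optimizations and passes the properties when building the topology (the application id and broker address are placeholders):

      import java.util.Properties;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;

      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");
      // list specific optimizations so the topology shape stays stable across upgrades
      props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION_CONFIG,
                StreamsConfig.REUSE_KTABLE_SOURCE_TOPICS + "," + StreamsConfig.MERGE_REPARTITION_TOPICS);

      StreamsBuilder builder = new StreamsBuilder();
      // ... define the topology ...

      // the same properties must be passed to build() for the optimizations to be applied
      KafkaStreams streams = new KafkaStreams(builder.build(props), props);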

      windowed.inner.class.serde

      Serde for the inner class of a windowed record. Must implement the org.apache.kafka.common.serialization.Serde interface.

      Note that this config is only used by plain consumer/producer clients that set a windowed de/serializer type via configs. For Kafka Streams applications that deal with windowed types, you must pass in the inner serde type when you instantiate the windowed serde object for your topology.

      upgrade.from

      The version you are upgrading from. It is important to set this config when performing a rolling upgrade to certain versions, as described in the upgrade guide. You should set this config to the appropriate version before bouncing your instances and upgrading them to the newer version. Once everyone is on the newer version, you should remove this config and do a second rolling bounce. It is only necessary to set this config and follow the two-bounce upgrade path when upgrading from below version 2.0, or when upgrading to 2.4+ from any version lower than 2.4.

      Kafka consumer, producer, and admin client configuration parameters

      You can specify parameters for the Kafka consumers, producers, and admin client that are used internally. The consumer, producer and admin client settings are defined by specifying parameters in a StreamsConfig instance.

      In this example, the Kafka consumer session timeout is configured to be 60000 milliseconds in the Streams settings:

       Properties streamsSettings = new Properties();
       // Example of a "normal" setting for Kafka Streams
       streamsSettings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");
       // Customize the Kafka consumer settings of your Streams application
       streamsSettings.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
      

      Naming

      Some consumer, producer, and admin client configuration parameters use the same parameter name, and the Kafka Streams library itself also uses some parameters that share the same name with its embedded clients. For example, send.buffer.bytes and receive.buffer.bytes are used to configure TCP buffers; request.timeout.ms and retry.backoff.ms control retries for client requests. You can avoid duplicate names by prefixing parameter names with consumer., producer., or admin. (e.g., consumer.send.buffer.bytes and producer.send.buffer.bytes).

       Properties streamsSettings = new Properties();
       // same value for consumer, producer, and admin client
       streamsSettings.put("PARAMETER_NAME", "value");
       // different values for consumer and producer
       streamsSettings.put("consumer.PARAMETER_NAME", "consumer-value");
       streamsSettings.put("producer.PARAMETER_NAME", "producer-value");
       streamsSettings.put("admin.PARAMETER_NAME", "admin-value");
       // alternatively, you can use
       streamsSettings.put(StreamsConfig.consumerPrefix("PARAMETER_NAME"), "consumer-value");
       streamsSettings.put(StreamsConfig.producerPrefix("PARAMETER_NAME"), "producer-value");
       streamsSettings.put(StreamsConfig.adminClientPrefix("PARAMETER_NAME"), "admin-value");
      

      You could further separate consumer configuration by adding different prefixes:

      • main.consumer. for the main consumer, which is the default consumer of stream sources.
      • restore.consumer. for the restore consumer, which is in charge of state store recovery.
      • global.consumer. for the global consumer, which is used in global KTable construction.

      For example, if you only want to set restore consumer config without touching other consumers’ settings, you could simply use restore.consumer. to set the config.

       Properties streamsSettings = new Properties();
       // same config value for all consumer types
       streamsSettings.put("consumer.PARAMETER_NAME", "general-consumer-value");
       // set a different restore consumer config. This would make restore consumer take restore-consumer-value,
       // while main consumer and global consumer stay with general-consumer-value
       streamsSettings.put("restore.consumer.PARAMETER_NAME", "restore-consumer-value");
       // alternatively, you can use
       streamsSettings.put(StreamsConfig.restoreConsumerPrefix("PARAMETER_NAME"), "restore-consumer-value");
      

      The same applies to main.consumer. and global.consumer., if you only want to specify the config for one consumer type.

      Additionally, to configure the internal repartition/changelog topics, you could use the topic. prefix, followed by any of the standard topic configs.

       Properties streamsSettings = new Properties();
       // Override default for both changelog and repartition topics
       streamsSettings.put("topic.PARAMETER_NAME", "topic-value");
       // alternatively, you can use
       streamsSettings.put(StreamsConfig.topicPrefix("PARAMETER_NAME"), "topic-value");
      

      Default Values

      Kafka Streams uses different default values for some of the underlying client configs, which are summarized below. For detailed descriptions of these configs, see Producer Configs and Consumer Configs.

      Parameter Name | Corresponding Client | Streams Default
      auto.offset.reset | Consumer | earliest
      linger.ms | Producer | 100
      max.poll.records | Consumer | 1000
      client.id | - | <application.id>-<random-UUID>

      If EOS is enabled, other parameters have the following default values.

      Parameter Name | Corresponding Client | Streams Default
      transaction.timeout.ms | Producer | 10000
      delivery.timeout.ms | Producer | Integer.MAX_VALUE

      Parameters controlled by Kafka Streams

      Some parameters are not configurable by the user. If you supply a value that is different from the default value, your value is ignored. Below is a list of some of these parameters.

      Parameter Name | Corresponding Client | Streams Default
      allow.auto.create.topics | Consumer | false
      group.id | Consumer | application.id
      enable.auto.commit | Consumer | false
      partition.assignment.strategy | Consumer | StreamsPartitionAssignor

      If EOS is enabled, other parameters are set with the following values.

      Parameter Name | Corresponding Client | Streams Default
      isolation.level | Consumer | READ_COMMITTED
      enable.idempotence | Producer | true

      client.id

      Kafka Streams uses the client.id parameter to compute derived client IDs for internal clients. If you don’t set client.id, Kafka Streams sets it to <application.id>-<random-UUID>.

      This value will be used to derive the client IDs of the following internal clients.

      Client | client.id
      Consumer | <client.id>-StreamThread-<threadIdx>-consumer
      Restore consumer | <client.id>-StreamThread-<threadIdx>-restore-consumer
      Global consumer | <client.id>-global-consumer
      Producer | For Non-EOS and EOS v2: <client.id>-StreamThread-<threadIdx>-producer; for EOS v1: <client.id>-StreamThread-<threadIdx>-<taskId>-producer
      Admin | <client.id>-admin

      enable.auto.commit

      The consumer auto commit. To guarantee at-least-once processing semantics and turn off auto commits, Kafka Streams overrides this consumer config value to false. Consumers will only commit explicitly via commitSync calls when the Kafka Streams library or a user decides to commit the current processing state.


      9.7.3 - Streams DSL

      Streams DSL

      The Kafka Streams DSL (Domain Specific Language) is built on top of the Streams Processor API. It is recommended for most users, especially beginners. Most data processing operations can be expressed in just a few lines of DSL code.

      Table of Contents

      • Overview
      • Creating source streams from Kafka
      • Transform a stream
        • Stateless transformations
        • Stateful transformations
          • Aggregating
          • Joining
            • Join co-partitioning requirements
            • KStream-KStream Join
            • KTable-KTable Equi-Join
            • KTable-KTable Foreign-Key Join
            • KStream-KTable Join
            • KStream-GlobalKTable Join
          • Windowing
            • Hopping time windows
            • Tumbling time windows
            • Sliding time windows
            • Session Windows
            • Window Final Results
        • Applying processors (Processor API integration)
        • Transformers removal and migration to processors
      • Naming Operators in a Streams DSL application
      • Controlling KTable update rate
      • Using timestamp-based semantics for table processors
      • Writing streams back to Kafka
      • Testing a Streams application
      • Kafka Streams DSL for Scala
        • Sample Usage
        • Implicit Serdes
        • User-Defined Serdes

      Overview

      In comparison to the Processor API, only the DSL supports:

      • Built-in abstractions for streams and tables in the form of KStream, KTable, and GlobalKTable. Having first-class support for streams and tables is crucial because, in practice, most use cases require not just either streams or databases/tables, but a combination of both. For example, if your use case is to create a customer 360-degree view that is updated in real-time, what your application will be doing is transforming many input streams of customer-related events into an output table that contains a continuously updated 360-degree view of your customers.
      • Declarative, functional programming style with stateless transformations (e.g. map and filter) as well as stateful transformations such as aggregations (e.g. count and reduce), joins (e.g. leftJoin), and windowing (e.g. session windows).

      With the DSL, you can define processor topologies (i.e., the logical processing plan) in your application. The steps to accomplish this are:

      1. Specify one or more input streams that are read from Kafka topics.
      2. Compose transformations on these streams.
      3. Write the resulting output streams back to Kafka topics, or expose the processing results of your application directly to other applications through interactive queries (e.g., via a REST API).

      After the application is run, the defined processor topologies are continuously executed (i.e., the processing plan is put into action). A step-by-step guide for writing a stream processing application using the DSL is provided below.

      For a complete list of available API functionality, see also the Streams API docs.
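
      As a minimal sketch of these three steps (topic names and the application id are made up for this example), a complete DSL application could look like this:

      import java.util.Properties;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.KStream;
      import org.apache.kafka.streams.kstream.Produced;

      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "dsl-overview-example");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker-01:9092");

      StreamsBuilder builder = new StreamsBuilder();

      // 1. read an input stream (topic names are illustrative)
      KStream<String, String> textLines = builder.stream(
          "input-topic", Consumed.with(Serdes.String(), Serdes.String()));

      // 2. apply a stateless transformation
      KStream<String, String> upperCased = textLines.mapValues(v -> v.toUpperCase());

      // 3. write the result back to Kafka
      upperCased.to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

      // run the topology
      KafkaStreams streams = new KafkaStreams(builder.build(), props);
      streams.start();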

      KStream

      Only the Kafka Streams DSL has the notion of a KStream.

      A KStream is an abstraction of a record stream , where each data record represents a self-contained datum in the unbounded data set. Using the table analogy, data records in a record stream are always interpreted as an “INSERT” -- think: adding more entries to an append-only ledger – because no record replaces an existing row with the same key. Examples are a credit card transaction, a page view event, or a server log entry.

      To illustrate, let’s imagine the following two data records are being sent to the stream:

      (“alice”, 1) -> (“alice”, 3)

      If your stream processing application were to sum the values per user, it would return 4 for alice. Why? Because the second data record would not be considered an update of the previous record. Compare this behavior of KStream to KTable below, which would return 3 for alice.

      KTable

      Only the Kafka Streams DSL has the notion of a KTable.

      A KTable is an abstraction of a changelog stream , where each data record represents an update. More precisely, the value in a data record is interpreted as an “UPDATE” of the last value for the same record key, if any (if a corresponding key doesn’t exist yet, the update will be considered an INSERT). Using the table analogy, a data record in a changelog stream is interpreted as an UPSERT aka INSERT/UPDATE because any existing row with the same key is overwritten. Also, null values are interpreted in a special way: a record with a null value represents a “DELETE” or tombstone for the record’s key.

      To illustrate, let’s imagine the following two data records are being sent to the stream:

      (“alice”, 1) -> (“alice”, 3)

      If your stream processing application were to sum the values per user, it would return 3 for alice. Why? Because the second data record would be considered an update of the previous record.

      Effects of Kafka’s log compaction: Another way of thinking about KStream and KTable is as follows: If you were to store a KTable into a Kafka topic, you’d probably want to enable Kafka’s log compaction feature, e.g. to save storage space.

      However, it would not be safe to enable log compaction in the case of a KStream because, as soon as log compaction would begin purging older data records of the same key, it would break the semantics of the data. To pick up the illustration example again, you’d suddenly get a 3 for alice instead of a 4 because log compaction would have removed the ("alice", 1) data record. Hence log compaction is perfectly safe for a KTable (changelog stream) but it is a mistake for a KStream (record stream).

      We have already seen an example of a changelog stream in the section streams and tables. Another example are change data capture (CDC) records in the changelog of a relational database, representing which row in a database table was inserted, updated, or deleted.

      KTable also provides an ability to look up current values of data records by keys. This table-lookup functionality is available through join operations (see also Joining in the Developer Guide) as well as through Interactive Queries.

      GlobalKTable

      Only the Kafka Streams DSL has the notion of a GlobalKTable.

      Like a KTable , a GlobalKTable is an abstraction of a changelog stream , where each data record represents an update.

      A GlobalKTable differs from a KTable in the data that they are being populated with, i.e. which data from the underlying Kafka topic is being read into the respective table. Slightly simplified, imagine you have an input topic with 5 partitions. In your application, you want to read this topic into a table. Also, you want to run your application across 5 application instances for maximum parallelism.

      • If you read the input topic into a KTable , then the “local” KTable instance of each application instance will be populated with data from only 1 partition of the topic’s 5 partitions.
      • If you read the input topic into a GlobalKTable , then the local GlobalKTable instance of each application instance will be populated with data from all partitions of the topic.

      GlobalKTable provides the ability to look up current values of data records by keys. This table-lookup functionality is available through join operations. Note that a GlobalKTable has no notion of time in contrast to a KTable.

      Benefits of global tables:

      • More convenient and/or efficient joins : Notably, global tables allow you to perform star joins, they support “foreign-key” lookups (i.e., you can lookup data in the table not just by record key, but also by data in the record values), and they are more efficient when chaining multiple joins. Also, when joining against a global table, the input data does not need to be co-partitioned.
      • Can be used to “broadcast” information to all the running instances of your application.

      Downsides of global tables:

      • Increased local storage consumption compared to the (partitioned) KTable because the entire topic is tracked.
      • Increased network and Kafka broker load compared to the (partitioned) KTable because the entire topic is read.
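
      To make the table-lookup behavior concrete, here is a hedged sketch of a KStream-GlobalKTable join that uses a KeyValueMapper to look up records by a value-derived ("foreign") key. The topic names, the StreamsBuilder variable builder, and the Order, Product, and EnrichedOrder types are illustrative only:

      KStream<String, Order> orders = builder.stream("orders");                 // keyed by order ID
      GlobalKTable<String, Product> products = builder.globalTable("products"); // keyed by product ID
      
      // The KeyValueMapper derives the lookup key from the stream record's value,
      // which works because every application instance holds a full copy of the global table.
      KStream<String, EnrichedOrder> enriched = orders.join(
          products,
          (orderId, order) -> order.productId(),                  /* key selector for the lookup */
          (order, product) -> new EnrichedOrder(order, product)); /* ValueJoiner */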

      Creating source streams from Kafka

      You can easily read data from Kafka topics into your application. The following operations are supported.

      Reading from Kafka | Description
      Stream
      • input topics -> KStream

      | Creates a KStream from the specified Kafka input topics and interprets the data as a record stream. A KStream represents a partitioned record stream. (details) In the case of a KStream, the local KStream instance of every application instance will be populated with data from only a subset of the partitions of the input topic. Collectively, across all application instances, all input topic partitions are read and processed.

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.KStream;
      
      StreamsBuilder builder = new StreamsBuilder();
      
      KStream<String, Long> wordCounts = builder.stream(
          "word-counts-input-topic", /* input topic */
          Consumed.with(
            Serdes.String(), /* key serde */
            Serdes.Long()   /* value serde */
          ));
      

      If you do not specify Serdes explicitly, the default Serdes from the configuration are used. You must specify Serdes explicitly if the key or value types of the records in the Kafka input topics do not match the configured default Serdes. For information about configuring default Serdes, available Serdes, and implementing your own custom Serdes see Data Types and Serialization. Several variants of stream exist. For example, you can specify a regex pattern for input topics to read from (note that all matching topics will be part of the same input topic group, and the work will not be parallelized for different topics if subscribed to in this way).
      Table

      • input topic -> KTable

      | Reads the specified Kafka input topic into a KTable. The topic is interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE (when the record value is not null) or as DELETE (when the value is null) for that key. (details) In the case of a KTable, the local KTable instance of every application instance will be populated with data from only a subset of the partitions of the input topic. Collectively, across all application instances, all input topic partitions are read and processed. You must provide a name for the table (more precisely, for the internal state store that backs the table). This is required for supporting interactive queries against the table. When a name is not provided the table will not be queryable and an internal name will be provided for the state store. If you do not specify Serdes explicitly, the default Serdes from the configuration are used. You must specify Serdes explicitly if the key or value types of the records in the Kafka input topics do not match the configured default Serdes. For information about configuring default Serdes, available Serdes, and implementing your own custom Serdes see Data Types and Serialization. Several variants of table exist, for example to specify the auto.offset.reset policy to be used when reading from the input topic.
      Global Table

      • input topic -> GlobalKTable

      | Reads the specified Kafka input topic into a GlobalKTable. The topic is interpreted as a changelog stream, where records with the same key are interpreted as UPSERT aka INSERT/UPDATE (when the record value is not null) or as DELETE (when the value is null) for that key. (details) In the case of a GlobalKTable, the local GlobalKTable instance of every application instance will be populated with data from all the partitions of the input topic. You must provide a name for the table (more precisely, for the internal state store that backs the table). This is required for supporting interactive queries against the table. When a name is not provided the table will not be queryable and an internal name will be provided for the state store.

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.common.utils.Bytes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.GlobalKTable;
      import org.apache.kafka.streams.kstream.Materialized;
      import org.apache.kafka.streams.state.KeyValueStore;
      
      StreamsBuilder builder = new StreamsBuilder();
      
      GlobalKTable<String, Long> wordCounts = builder.globalTable(
          "word-counts-input-topic",
          Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as(
            "word-counts-global-store" /* table/store name */)
            .withKeySerde(Serdes.String()) /* key serde */
            .withValueSerde(Serdes.Long()) /* value serde */
          );
      

      You must specify Serdes explicitly if the key or value types of the records in the Kafka input topics do not match the configured default Serdes. For information about configuring default Serdes, available Serdes, and implementing your own custom Serdes see Data Types and Serialization. Several variants of globalTable exist to e.g. specify explicit Serdes.
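
      As mentioned in the stream entry above, you can also subscribe by a regex pattern instead of a fixed list of topic names. A hedged sketch (the pattern and serdes are illustrative):

      import java.util.regex.Pattern;
      
      // Subscribes to every topic whose name matches the pattern; all matched topics
      // form a single input topic group.
      KStream<String, Long> allWordCounts = builder.stream(
          Pattern.compile("word-counts-.*"),
          Consumed.with(Serdes.String(), Serdes.Long()));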

      Transform a stream

      The KStream and KTable interfaces support a variety of transformation operations. Each of these operations can be translated into one or more connected processors in the underlying processor topology. Since KStream and KTable are strongly typed, all of these transformation operations are defined as generic functions where users can specify the input and output data types.

      Some KStream transformations may generate one or more KStream objects, for example:

      • filter and map on a KStream will generate another KStream
      • split on a KStream can generate multiple KStreams

      Some others may generate a KTable object, for example an aggregation of a KStream also yields a KTable. This allows Kafka Streams to continuously update the computed value upon the arrival of out-of-order records, even after the value has already been produced to the downstream transformation operators.

      All KTable transformation operations can only generate another KTable. However, the Kafka Streams DSL does provide a special function that converts a KTable representation into a KStream. All of these transformation methods can be chained together to compose a complex processor topology.

      These transformation operations are described in the following subsections:

      • Stateless transformations
      • Stateful transformations

      Stateless transformations

      Stateless transformations do not require state for processing and they do not require a state store associated with the stream processor. Kafka 0.11.0 and later allows you to materialize the result from a stateless KTable transformation. This allows the result to be queried through interactive queries. To materialize a KTable, each of the below stateless operations can be augmented with an optional queryableStoreName argument.
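
      For example, a stateless KTable filter can be materialized into a named state store so that it becomes queryable. A minimal sketch, assuming a table of word counts and the store name "non-zero-counts" (both assumptions; imports as in the earlier snippets):

      KTable<String, Long> wordCounts = ...;
      
      // Materialize the filtered result into a named store to enable interactive queries.
      KTable<String, Long> nonZeroCounts = wordCounts.filter(
          (word, count) -> count > 0,
          Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("non-zero-counts"));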

      Transformation | Description
      Branch
      • KStream -> BranchedKStream

      | Branch (or split) a KStream based on the supplied predicates into one or more KStream instances. (details) Predicates are evaluated in order. A record is placed to one and only one output stream on the first match: if the n-th predicate evaluates to true, the record is placed to n-th stream. If a record does not match any predicates, it will be routed to the default branch, or dropped if no default branch is created. Branching is useful, for example, to route records to different downstream topics.

      KStream<String, Long> stream = ...;
      Map<String, KStream<String, Long>> branches =
          stream.split(Named.as("Branch-"))
              .branch((key, value) -> key.startsWith("A"),  /* first predicate  */
                   Branched.as("A"))
              .branch((key, value) -> key.startsWith("B"),  /* second predicate */
                   Branched.as("B"))
              .defaultBranch(Branched.as("C"));             /* default branch */
      
      // KStream branches.get("Branch-A") contains all records whose keys start with "A"
      // KStream branches.get("Branch-B") contains all records whose keys start with "B"
      // KStream branches.get("Branch-C") contains all other records
      

      Broadcast/Multicast

      • no operator

      | Broadcasting a KStream into multiple downstream operators. A record is sent to more than one operator by applying multiple operators to the same KStream instance.

      KStream<String, Long> stream = ...;
      KStream<...> stream1 = stream.map(...);
      KStream<...> stream2 = stream.mapValues(...);
      KStream<...> stream3 = stream.flatMap(...);
      

      Multicasting a KStream into multiple downstream operators. In contrast to branching , which sends each record to at most one downstream branch, a multicast may send a record to any number of downstream KStream instances. A multicast is implemented as a broadcast plus filters.

      KStream<String, Long> stream = ...;
      KStream<...> stream1 = stream.filter((key, value) -> key.startsWith("A")); // contains all records whose keys start with "A"
      KStream<...> stream2 = stream.filter((key, value) -> key.startsWith("AB")); // contains all records whose keys start with "AB" (subset of stream1)
      KStream<...> stream3 = stream.filter((key, value) -> key.contains("B")); // contains all records whose keys contain a "B" (superset of stream2)
      

      Filter

      • KStream -> KStream
      • KTable -> KTable

      | Evaluates a boolean function for each element and retains those for which the function returns true. (KStream details, KTable details)

      KStream<String, Long> stream = ...;
      
      // A filter that selects (keeps) only positive numbers
      KStream<String, Long> onlyPositives = stream.filter((key, value) -> value > 0);
      

      Inverse Filter

      • KStream -> KStream
      • KTable -> KTable

      | Evaluates a boolean function for each element and drops those for which the function returns true. (KStream details, KTable details)

      KStream<String, Long> stream = ...;
      
      // An inverse filter that discards any negative numbers or zero
      KStream<String, Long> onlyPositives = stream.filterNot((key, value) -> value <= 0);
      

      FlatMap

      • KStream -> KStream

      | Takes one record and produces zero, one, or more records. You can modify the record keys and values, including their types. (details) Marks the stream for data re-partitioning: Applying a grouping or a join after flatMap will result in re-partitioning of the records. If possible use flatMapValues instead, which will not cause data re-partitioning.

      KStream<Long, String> stream = ...;
      KStream<String, Integer> transformed = stream.flatMap(
           // Here, we generate two output records for each input record.
           // We also change the key and value types.
           // Example: (345L, "Hello") -> ("HELLO", 1000), ("hello", 9000)
          (key, value) -> {
            List<KeyValue<String, Integer>> result = new LinkedList<>();
            result.add(KeyValue.pair(value.toUpperCase(), 1000));
            result.add(KeyValue.pair(value.toLowerCase(), 9000));
            return result;
          }
        );
      

      FlatMapValues

      • KStream -> KStream

      | Takes one record and produces zero, one, or more records, while retaining the key of the original record. You can modify the record values and the value type. (details) flatMapValues is preferable to flatMap because it will not cause data re-partitioning. However, you cannot modify the key or key type like flatMap does.

      // Split a sentence into words.
      KStream<byte[], String> sentences = ...;
      KStream<byte[], String> words = sentences.flatMapValues(value -> Arrays.asList(value.split("\\s+")));
      

      Foreach

      • KStream -> void
      • KTable -> void

      | Terminal operation. Performs a stateless action on each record. (details) You would use foreach to cause side effects based on the input data (similar to peek) and then stop further processing of the input data (unlike peek, which is not a terminal operation). Note on processing guarantees: Any side effects of an action (such as writing to external systems) are not trackable by Kafka, which means they will typically not benefit from Kafka’s processing guarantees.

      KStream<String, Long> stream = ...;
      
      // Print the contents of the KStream to the local console.
      stream.foreach((key, value) -> System.out.println(key + " => " + value));
      

      GroupByKey

      • KStream -> KGroupedStream

      | Groups the records by the existing key. (details) Grouping is a prerequisite for aggregating a stream or a table and ensures that data is properly partitioned (“keyed”) for subsequent operations. When to set explicit Serdes: Variants of groupByKey exist to override the configured default Serdes of your application, which you must do if the key and/or value types of the resulting KGroupedStream do not match the configured default Serdes. Note Grouping vs. Windowing: A related operation is windowing, which lets you control how to “sub-group” the grouped records of the same key into so-called windows for stateful operations such as windowed aggregations or windowed joins. Causes data re-partitioning if and only if the stream was marked for re-partitioning. groupByKey is preferable to groupBy because it re-partitions data only if the stream was already marked for re-partitioning. However, groupByKey does not allow you to modify the key or key type like groupBy does.

      KStream<byte[], String> stream = ...;
      
      // Group by the existing key, using the application's configured
      // default serdes for keys and values.
      KGroupedStream<byte[], String> groupedStream = stream.groupByKey();
      
      // When the key and/or value types do not match the configured
      // default serdes, we must explicitly specify serdes.
      KGroupedStream<byte[], String> groupedStream = stream.groupByKey(
          Grouped.with(
            Serdes.ByteArray(), /* key */
            Serdes.String())     /* value */
        );  
      

      GroupBy

      • KStream -> KGroupedStream
      • KTable -> KGroupedTable

      | Groups the records by a new key, which may be of a different key type. When grouping a table, you may also specify a new value and value type. groupBy is a shorthand for selectKey(...).groupByKey(). (KStream details, KTable details) Grouping is a prerequisite for aggregating a stream or a table and ensures that data is properly partitioned (“keyed”) for subsequent operations. When to set explicit Serdes: Variants of groupBy exist to override the configured default Serdes of your application, which you must do if the key and/or value types of the resulting KGroupedStream or KGroupedTable do not match the configured default Serdes. Note Grouping vs. Windowing: A related operation is windowing, which lets you control how to “sub-group” the grouped records of the same key into so-called windows for stateful operations such as windowed aggregations or windowed joins. Always causes data re-partitioning: groupBy always causes data re-partitioning. If possible use groupByKey instead, which will re-partition data only if required.

      KStream<byte[], String> stream = ...;
      KTable<byte[], String> table = ...;
      
      // Group the stream by a new key and key type
      KGroupedStream<String, String> groupedStream = stream.groupBy(
          (key, value) -> value,
          Grouped.with(
            Serdes.String(), /* key (note: type was modified) */
            Serdes.String())  /* value */
        );
      
      // Group the table by a new key and key type, and also modify the value and value type.
      KGroupedTable<String, Integer> groupedTable = table.groupBy(
          (key, value) -> KeyValue.pair(value, value.length()),
          Grouped.with(
            Serdes.String(), /* key (note: type was modified) */
            Serdes.Integer()) /* value (note: type was modified) */
        );
      

      Cogroup

      • KGroupedStream -> CogroupedKStream
      • CogroupedKStream -> CogroupedKStream

      | Cogrouping allows you to aggregate multiple input streams in a single operation. The different (already grouped) input streams must have the same key type and may have different value types. KGroupedStream#cogroup() creates a new cogrouped stream with a single input stream, while CogroupedKStream#cogroup() adds a grouped stream to an existing cogrouped stream. A CogroupedKStream may be windowed before it is aggregated. Cogroup does not cause a repartition, as it has the prerequisite that the input streams are grouped. In the process of creating these groups they will have already been repartitioned if the stream was already marked for repartitioning.

      KStream<byte[], String> stream = ...;
      KStream<byte[], String> stream2 = ...;
      
      // Group by the existing key, using the application's configured
      // default serdes for keys and values.
      KGroupedStream<byte[], String> groupedStream = stream.groupByKey();
      KGroupedStream<byte[], String> groupedStream2 = stream2.groupByKey();
      CogroupedKStream<byte[], String> cogroupedStream = groupedStream.cogroup(aggregator1).cogroup(groupedStream2, aggregator2);
      
      KTable<byte[], String> table = cogroupedStream.aggregate(initializer);
      
      KTable<Windowed<byte[]>, String> table2 = cogroupedStream.windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMillis(500))).aggregate(initializer);
      

      Map

      • KStream -> KStream

      | Takes one record and produces one record. You can modify the record key and value, including their types. (details) Marks the stream for data re-partitioning: Applying a grouping or a join after map will result in re-partitioning of the records. If possible use mapValues instead, which will not cause data re-partitioning.

      KStream<byte[], String> stream = ...;
      
      // Note how we change the key and the key type (similar to `selectKey`)
      // as well as the value and the value type.
      KStream<String, Integer> transformed = stream.map(
          (key, value) -> KeyValue.pair(value.toLowerCase(), value.length()));
      

      Map (values only)

      • KStream -> KStream
      • KTable -> KTable

      | Takes one record and produces one record, while retaining the key of the original record. You can modify the record value and the value type. (KStream details, KTable details) mapValues is preferable to map because it will not cause data re-partitioning. However, it does not allow you to modify the key or key type like map does.

      KStream<byte[], String> stream = ...;
      
      KStream<byte[], String> uppercased = stream.mapValues(value -> value.toUpperCase());
      

      Merge

      • KStream -> KStream

      | Merges records of two streams into one larger stream. (details) There is no ordering guarantee between records from different streams in the merged stream. Relative order is preserved within each input stream, though (i.e., records within the same input stream are processed in order).

      KStream<byte[], String> stream1 = ...;
      
      KStream<byte[], String> stream2 = ...;
      
      KStream<byte[], String> merged = stream1.merge(stream2);  
      

      Peek

      • KStream -> KStream

      | Performs a stateless action on each record, and returns an unchanged stream. (details) You would use peek to cause side effects based on the input data (similar to foreach) and continue processing the input data (unlike foreach, which is a terminal operation). peek returns the input stream as-is; if you need to modify the input stream, use map or mapValues instead. peek is helpful for use cases such as logging or tracking metrics or for debugging and troubleshooting. Note on processing guarantees: Any side effects of an action (such as writing to external systems) are not trackable by Kafka, which means they will typically not benefit from Kafka’s processing guarantees.

      KStream<byte[], String> stream = ...;
      
      KStream<byte[], String> unmodifiedStream = stream.peek(
          (key, value) -> System.out.println("key=" + key + ", value=" + value));
      

      Print

      • KStream -> void

      | Terminal operation. Prints the records to System.out. See Javadocs for serde and toString() caveats. (details) Calling print() is the same as calling foreach((key, value) -> System.out.println(key + ", " + value)). print is mainly for debugging/testing purposes, and it will try to flush on each record printed. Hence it should not be used in production where performance matters.

      KStream<byte[], String> stream = ...;
      // print to sysout
      stream.print();
      
      // print to file with a custom label
      stream.print(Printed.toFile("streams.out").withLabel("streams"));  
      

      SelectKey

      • KStream -> KStream

      | Assigns a new key - possibly of a new key type - to each record. (details) Calling selectKey(mapper) is the same as calling map((key, value) -> KeyValue.pair(mapper(key, value), value)). Marks the stream for data re-partitioning: Applying a grouping or a join after selectKey will result in re-partitioning of the records.

      KStream<byte[], String> stream = ...;
      
      // Derive a new record key from the record's value.  Note how the key type changes, too.
      KStream<String, String> rekeyed = stream.selectKey((key, value) -> value.split(" ")[0]);
      

      Table to Stream

      • KTable -> KStream

      | Get the changelog stream of this table. (details)

      KTable<byte[], String> table = ...;
      
      // Also, a variant of `toStream` exists that allows you
      // to select a new key for the resulting stream.
      KStream<byte[], String> stream = table.toStream();  
      

      Stream to Table

      • KStream -> KTable

      | Convert an event stream into a table; that is, interpret the stream as a changelog stream. (details)

      KStream<byte[], String> stream = ...;
      
      KTable<byte[], String> table = stream.toTable();  
      

      Repartition

      • KStream -> KStream

      | Manually trigger repartitioning of the stream with the desired number of partitions. (details) repartition() is similar to through(), except that Kafka Streams manages the topic for you. The generated topic is treated as an internal topic, so data will be purged automatically, as with any other internal repartition topic. In addition, you can specify the desired number of partitions, which allows you to easily scale in/out downstream sub-topologies. The repartition() operation always triggers repartitioning of the stream, so it can be used with embedded Processor API methods (like transform() et al.) that do not trigger auto-repartitioning when a key-changing operation is performed beforehand.

      KStream<byte[], String> stream = ... ;
      KStream<byte[], String> repartitionedStream = stream.repartition(Repartitioned.numberOfPartitions(10));  
      

      Stateful transformations

      Stateful transformations depend on state for processing inputs and producing outputs and require a state store associated with the stream processor. For example, in aggregating operations, a windowing state store is used to collect the latest aggregation results per window. In join operations, a windowing state store is used to collect all of the records received so far within the defined window boundary.

      Note: the following store types are used regardless of the type that may have been specified via the materialized parameter:

      Note that state stores are fault-tolerant. In case of failure, Kafka Streams guarantees to fully restore all state stores prior to resuming processing. See Fault Tolerance for further information.

      Available stateful transformations in the DSL include:

      • Aggregating
      • Joining
      • Windowing (as part of aggregations and joins)
      • Applying custom processors and transformers, which may be stateful, for Processor API integration

      The following diagram shows their relationships:

      Stateful transformations in the DSL.

      Here is an example of a stateful application: the WordCount algorithm.

      WordCount example:

      // Assume the record values represent lines of text.  For the sake of this example, you can ignore
      // whatever may be stored in the record keys.
      KStream<String, String> textLines = ...;
      
      KStream<String, Long> wordCounts = textLines
          // Split each text line, by whitespace, into words.  The text lines are the record
          // values, i.e. you can ignore whatever data is in the record keys and thus invoke
          // `flatMapValues` instead of the more generic `flatMap`.
          .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
          // Group the stream by word to ensure the key of the record is the word.
          .groupBy((key, word) -> word)
          // Count the occurrences of each word (record key).
          //
          // This will change the stream type from `KGroupedStream<String, String>` to
          // `KTable<String, Long>` (word -> count).
          .count()
          // Convert the `KTable<String, Long>` into a `KStream<String, Long>`.
          .toStream();
      

      Aggregating

      After records are grouped by key via groupByKey or groupBy - and thus represented as either a KGroupedStream or a KGroupedTable, they can be aggregated via an operation such as reduce. Aggregations are key-based operations, which means that they always operate over records (notably record values) of the same key. You can perform aggregations on windowed or non-windowed data.

      Transformation | Description
      Aggregate
      • KGroupedStream -> KTable
      • KGroupedTable -> KTable

      | Rolling aggregation. Aggregates the values of (non-windowed) records by the grouped (or cogrouped) key. Aggregating is a generalization of reduce and allows, for example, the aggregate value to have a different type than the input values. (KGroupedStream details, KGroupedTable details) When aggregating a grouped stream , you must provide an initializer (e.g., aggValue = 0) and an “adder” aggregator (e.g., aggValue + curValue). When aggregating a grouped table , you must additionally provide a “subtractor” aggregator (think: aggValue - oldValue). When aggregating a cogrouped stream , the actual aggregators are provided for each input stream in the prior cogroup() calls, and thus you only need to provide an initializer (e.g., aggValue = 0). Several variants of aggregate exist, see Javadocs for details.

      KGroupedStream<byte[], String> groupedStream = ...;
      KGroupedTable<byte[], String> groupedTable = ...;
      
      // Aggregating a KGroupedStream (note how the value type changes from String to Long)
      KTable<byte[], Long> aggregatedStream = groupedStream.aggregate(
          () -> 0L, /* initializer */
          (aggKey, newValue, aggValue) -> aggValue + newValue.length(), /* adder */
          Materialized.<byte[], Long, KeyValueStore<Bytes, byte[]>>as("aggregated-stream-store") /* state store name */
              .withValueSerde(Serdes.Long())); /* serde for aggregate value */
      
      // Aggregating a KGroupedTable (note how the value type changes from String to Long)
      KTable<byte[], Long> aggregatedTable = groupedTable.aggregate(
          () -> 0L, /* initializer */
          (aggKey, newValue, aggValue) -> aggValue + newValue.length(), /* adder */
          (aggKey, oldValue, aggValue) -> aggValue - oldValue.length(), /* subtractor */
          Materialized.<byte[], Long, KeyValueStore<Bytes, byte[]>>as("aggregated-table-store") /* state store name */
              .withValueSerde(Serdes.Long())); /* serde for aggregate value */
      

      Detailed behavior of KGroupedStream:

      • Input records with null keys are ignored.
      • When a record key is received for the first time, the initializer is called (and called before the adder).
      • Whenever a record with a non-null value is received, the adder is called.

      Detailed behavior of KGroupedTable:

      • Input records with null keys are ignored.
      • When a record key is received for the first time, the initializer is called (and called before the adder and subtractor). Note that, in contrast to KGroupedStream, over time the initializer may be called more than once for a key as a result of having received input tombstone records for that key (see below).
      • When the first non-null value is received for a key (e.g., INSERT), then only the adder is called.
      • When subsequent non-null values are received for a key (e.g., UPDATE), then (1) the subtractor is called with the old value as stored in the table and (2) the adder is called with the new value of the input record that was just received. The subtractor is guaranteed to be called before the adder if the extracted grouping key of the old and new value is the same. The detection of this case depends on the correct implementation of the equals() method of the extracted key type. Otherwise, the order of execution for the subtractor and adder is not defined.
      • When a tombstone record - i.e. a record with a null value - is received for a key (e.g., DELETE), then only the subtractor is called. Note that, whenever the subtractor returns a null value itself, then the corresponding key is removed from the resulting KTable. If that happens, any next input record for that key will trigger the initializer again.

      See the example at the bottom of this section for a visualization of the aggregation semantics.
      Aggregate (windowed)

      • KGroupedStream -> KTable

      | Windowed aggregation. Aggregates the values of records, per window, by the grouped key. Aggregating is a generalization of reduce and allows, for example, the aggregate value to have a different type than the input values. (TimeWindowedKStream details, SessionWindowedKStream details) You must provide an initializer (e.g., aggValue = 0), “adder” aggregator (e.g., aggValue + curValue), and a window. When windowing based on sessions, you must additionally provide a “session merger” aggregator (e.g., mergedAggValue = leftAggValue + rightAggValue). The windowed aggregate turns a TimeWindowedKStream<K, V> or SessionWindowedKStream<K, V> into a windowed KTable<Windowed<K>, V>. Several variants of aggregate exist, see Javadocs for details.

      import java.time.Duration;
      KGroupedStream<String, Long> groupedStream = ...;
      
      // Aggregating with time-based windowing (here: with 5-minute tumbling windows)
      KTable<Windowed<String>, Long> timeWindowedAggregatedStream = groupedStream.windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
          .aggregate(
              () -> 0L, /* initializer */
              (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
              Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("time-windowed-aggregated-stream-store") /* state store name */
              .withValueSerde(Serdes.Long())); /* serde for aggregate value */
      
      // Aggregating with time-based windowing (here: with 5-minute sliding windows and 30-minute grace period)
      KTable<Windowed<String>, Long> timeWindowedAggregatedStream = groupedStream.windowedBy(SlidingWindows.ofTimeDifferenceAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(30)))
          .aggregate(
              () -> 0L, /* initializer */
              (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
              Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("time-windowed-aggregated-stream-store") /* state store name */
              .withValueSerde(Serdes.Long())); /* serde for aggregate value */
      
      // Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)
      KTable<Windowed<String>, Long> sessionizedAggregatedStream = groupedStream.windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
          .aggregate(
              () -> 0L, /* initializer */
              (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
              (aggKey, leftAggValue, rightAggValue) -> leftAggValue + rightAggValue, /* session merger */
              Materialized.<String, Long, SessionStore<Bytes, byte[]>>as("sessionized-aggregated-stream-store") /* state store name */
              .withValueSerde(Serdes.Long())); /* serde for aggregate value */
      

      Detailed behavior:

      • The windowed aggregate behaves similarly to the rolling aggregate described above. The additional twist is that the behavior applies per window.
      • Input records with null keys are ignored in general.
      • When a record key is received for the first time for a given window, the initializer is called (and called before the adder).
      • Whenever a record with a non-null value is received for a given window, the adder is called.
      • When using session windows: the session merger is called whenever two sessions are being merged.

      See the example at the bottom of this section for a visualization of the aggregation semantics.
      Count

      • KGroupedStream -> KTable
      • KGroupedTable -> KTable

      | Rolling aggregation. Counts the number of records by the grouped key. (KGroupedStream details, KGroupedTable details) Several variants of count exist, see Javadocs for details.

      KGroupedStream<String, Long> groupedStream = ...;
      KGroupedTable<String, Long> groupedTable = ...;
      
      // Counting a KGroupedStream
      KTable<String, Long> aggregatedStream = groupedStream.count();
      
      // Counting a KGroupedTable
      KTable<String, Long> aggregatedTable = groupedTable.count();
      

      Detailed behavior for KGroupedStream:

      • Input records with null keys or values are ignored.

      Detailed behavior for KGroupedTable:

      • Input records with null keys are ignored. Records with null values are not ignored but interpreted as “tombstones” for the corresponding key, which indicate the deletion of the key from the table.

      Count (windowed)

      • KGroupedStream -> KTable

      | Windowed aggregation. Counts the number of records, per window, by the grouped key. (TimeWindowedKStream details, SessionWindowedKStream details) The windowed count turns a TimeWindowedKStream<K, V> or SessionWindowedKStream<K, V> into a windowed KTable<Windowed<K>, V>. Several variants of count exist, see Javadocs for details.

      import java.time.Duration;
      KGroupedStream<String, Long> groupedStream = ...;
      
      // Counting a KGroupedStream with time-based windowing (here: with 5-minute tumbling windows)
      KTable<Windowed<String>, Long> aggregatedStream = groupedStream.windowedBy(
          TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5))) /* time-based window */
          .count();
      
      // Counting a KGroupedStream with time-based windowing (here: with 5-minute sliding windows and 30-minute grace period)
      KTable<Windowed<String>, Long> aggregatedStream = groupedStream.windowedBy(
          SlidingWindows.ofTimeDifferenceAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(30))) /* time-based window */
          .count();
      
      // Counting a KGroupedStream with session-based windowing (here: with 5-minute inactivity gaps)
      KTable<Windowed<String>, Long> aggregatedStream = groupedStream.windowedBy(
          SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5))) /* session window */
          .count();
      

      Detailed behavior:

      • Input records with null keys or values are ignored.

      Reduce

      • KGroupedStream -> KTable
      • KGroupedTable -> KTable

      | Rolling aggregation. Combines the values of (non-windowed) records by the grouped key. The current record value is combined with the last reduced value, and a new reduced value is returned. The result value type cannot be changed, unlike aggregate. (KGroupedStream details, KGroupedTable details) When reducing a grouped stream , you must provide an “adder” reducer (e.g., aggValue + curValue). When reducing a grouped table , you must additionally provide a “subtractor” reducer (e.g., aggValue - oldValue). Several variants of reduce exist, see Javadocs for details.

      KGroupedStream<String, Long> groupedStream = ...;
      KGroupedTable<String, Long> groupedTable = ...;
      
      // Reducing a KGroupedStream
      KTable<String, Long> aggregatedStream = groupedStream.reduce(
          (aggValue, newValue) -> aggValue + newValue /* adder */);
      
      // Reducing a KGroupedTable
      KTable<String, Long> aggregatedTable = groupedTable.reduce(
          (aggValue, newValue) -> aggValue + newValue, /* adder */
          (aggValue, oldValue) -> aggValue - oldValue /* subtractor */);
      

      Detailed behavior for KGroupedStream:

      • Input records with null keys are ignored in general.
      • When a record key is received for the first time, then the value of that record is used as the initial aggregate value.
      • Whenever a record with a non-null value is received, the adder is called.

      Detailed behavior for KGroupedTable:

      • Input records with null keys are ignored in general.
      • When a record key is received for the first time, then the value of that record is used as the initial aggregate value. Note that, in contrast to KGroupedStream, over time this initialization step may happen more than once for a key as a result of having received input tombstone records for that key (see below).
      • When the first non-null value is received for a key (e.g., INSERT), then only the adder is called.
      • When subsequent non-null values are received for a key (e.g., UPDATE), then (1) the subtractor is called with the old value as stored in the table and (2) the adder is called with the new value of the input record that was just received. The subtractor is guaranteed to be called before the adder if the extracted grouping key of the old and new value is the same. The detection of this case depends on the correct implementation of the equals() method of the extracted key type. Otherwise, the order of execution for the subtractor and adder is not defined.
      • When a tombstone record - i.e. a record with a null value - is received for a key (e.g., DELETE), then only the subtractor is called. Note that, whenever the subtractor returns a null value itself, then the corresponding key is removed from the resulting KTable. If that happens, any next input record for that key will re-initialize its aggregate value.

      See the example at the bottom of this section for a visualization of the aggregation semantics.
      Reduce (windowed)

      • KGroupedStream -> KTable

      | Windowed aggregation. Combines the values of records, per window, by the grouped key. The current record value is combined with the last reduced value, and a new reduced value is returned. Records with null key or value are ignored. The result value type cannot be changed, unlike aggregate. (TimeWindowedKStream details, SessionWindowedKStream details) The windowed reduce turns a TimeWindowedKStream<K, V> or a SessionWindowedKStream<K, V> into a windowed KTable<Windowed<K>, V>. Several variants of reduce exist, see Javadocs for details.

      import java.time.Duration;
      KGroupedStream<String, Long> groupedStream = ...;
      
      // Aggregating with time-based windowing (here: with 5-minute tumbling windows)
      KTable<Windowed<String>, Long> timeWindowedAggregatedStream = groupedStream.windowedBy(
        TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)) /* time-based window */)
        .reduce(
          (aggValue, newValue) -> aggValue + newValue /* adder */
        );
      
      // Aggregating with time-based windowing (here: with 5-minute sliding windows and 30-minute grace)
      KTable<Windowed<String>, Long> timeWindowedAggregatedStream = groupedStream.windowedBy(
        SlidingWindows.ofTimeDifferenceAndGrace(Duration.ofMinutes(5), Duration.ofMinutes(30))) /* time-based window */
        .reduce(
          (aggValue, newValue) -> aggValue + newValue /* adder */
        );
      
      // Aggregating with session-based windowing (here: with an inactivity gap of 5 minutes)
      KTable<Windowed<String>, Long> sessionizedAggregatedStream = groupedStream.windowedBy(
        SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5))) /* session window */
        .reduce(
          (aggValue, newValue) -> aggValue + newValue /* adder */
        );
      

      Detailed behavior:

      • The windowed reduce behaves similarly to the rolling reduce described above. The additional twist is that the behavior applies per window.
      • Input records with null keys are ignored in general.
      • When a record key is received for the first time for a given window, then the value of that record is used as the initial aggregate value.
      • Whenever a record with a non-null value is received for a given window, the adder is called.

      See the example at the bottom of this section for a visualization of the aggregation semantics.

      Example of semantics for stream aggregations: A KGroupedStream -> KTable example is shown below. The streams and the table are initially empty. Bold font is used in the column for “KTable aggregated” to highlight changed state. An entry such as (hello, 1) denotes a record with key hello and value 1. To improve the readability of the semantics table you can assume that all records are processed in timestamp order.

      // Key: word, value: count
      KStream<String, Integer> wordCounts = ...;
      
      KGroupedStream<String, Integer> groupedStream = wordCounts
          .groupByKey(Grouped.with(Serdes.String(), Serdes.Integer()));
      
      KTable<String, Integer> aggregated = groupedStream.aggregate(
          () -> 0, /* initializer */
          (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
          Materialized.<String, Integer, KeyValueStore<Bytes, byte[]>>as("aggregated-stream-store") /* state store name */
            .withKeySerde(Serdes.String()) /* key serde */
            .withValueSerde(Serdes.Integer())); /* serde for aggregate value */
      

      Note

      Impact of record caches : For illustration purposes, the column “KTable aggregated” below shows the table’s state changes over time in a very granular way. In practice, you would observe state changes in such a granular way only when record caches are disabled (default: enabled). When record caches are enabled, what might happen for example is that the output results of the rows with timestamps 4 and 5 would be compacted, and there would only be a single state update for the key kafka in the KTable (here: from (kafka, 1) directly to (kafka, 3)). Typically, you should only disable record caches for testing or debugging purposes - under normal circumstances it is better to leave record caches enabled.

      Timestamp | Input record (KStream wordCounts) | Grouping (KGroupedStream groupedStream) | Initializer (KTable aggregated)
      1 | (hello, 1) | (hello, 1) | 0 (for hello)
      2 | (kafka, 1) | (kafka, 1) | 0 (for kafka)
      3 | (streams, 1) | (streams, 1) | 0 (for streams)
      4 | (kafka, 1) | (kafka, 1) |
      5 | (kafka, 1) | (kafka, 1) |
      6 | (streams, 1) | (streams, 1) |
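
      If you want to observe the granular, per-record updates shown above (e.g. while testing), you can effectively disable the record cache. A hedged sketch, assuming the "statestore.cache.max.bytes" config (setting it to 0 disables caching; leave caching enabled in production):

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;
      
      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-debug");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
      // Assumption: "statestore.cache.max.bytes" is the cache-size config; 0 turns the cache off.
      props.put("statestore.cache.max.bytes", 0);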

      Example of semantics for table aggregations: A KGroupedTable -> KTable example is shown below. The tables are initially empty. Bold font is used in the column for “KTable aggregated” to highlight changed state. An entry such as (hello, 1) denotes a record with key hello and value 1. To improve the readability of the semantics table you can assume that all records are processed in timestamp order.

      // Key: username, value: user region (abbreviated to "E" for "Europe", "A" for "Asia")
      KTable<String, String> userProfiles = ...;
      
      // Re-group `userProfiles`.  Don't read too much into what the grouping does:
      // its prime purpose in this example is to show the *effects* of the grouping
      // in the subsequent aggregation.
      KGroupedTable<String, Integer> groupedTable = userProfiles
          .groupBy((user, region) -> KeyValue.pair(region, user.length()),
              Grouped.with(Serdes.String(), Serdes.Integer()));
      
      KTable<String, Integer> aggregated = groupedTable.aggregate(
          () -> 0, /* initializer */
          (aggKey, newValue, aggValue) -> aggValue + newValue, /* adder */
          (aggKey, oldValue, aggValue) -> aggValue - oldValue, /* subtractor */
          Materialized.<String, Integer, KeyValueStore<Bytes, byte[]>>as("aggregated-table-store") /* state store name */
            .withKeySerde(Serdes.String()) /* key serde */
            .withValueSerde(Serdes.Integer())); /* serde for aggregate value */
      

      Note

      Impact of record caches : For illustration purposes, the column “KTable aggregated” below shows the table’s state changes over time in a very granular way. In practice, you would observe state changes in such a granular way only when record caches are disabled (default: enabled). When record caches are enabled, what might happen for example is that the output results of the rows with timestamps 4 and 5 would be compacted, and there would only be a single state update for the affected key in the KTable. Typically, you should only disable record caches for testing or debugging purposes - under normal circumstances it is better to leave record caches enabled.

      Timestamp | Input record (KTable userProfiles) | Interpreted as | Grouping (KGroupedTable groupedTable)
      1 | (alice, E) | INSERT alice | (E, 5)
      2 | (bob, A) | INSERT bob | (A, 3)
      3 | (charlie, A) | INSERT charlie | (A, 7)
      4 | (alice, A) | UPDATE alice | (A, 5)
      5 | (charlie, null) | DELETE charlie | (null, 7)
      6 | (null, E) | ignored |
      7 | (bob, E) | UPDATE bob | (E, 3)

      Joining

      Streams and tables can also be joined. Many stream processing applications in practice are coded as streaming joins. For example, applications backing an online shop might need to access multiple, updating database tables (e.g. sales prices, inventory, customer information) in order to enrich a new data record (e.g. customer transaction) with context information. That is, scenarios where you need to perform table lookups at very large scale and with a low processing latency. Here, a popular pattern is to make the information in the databases available in Kafka through so-called change data capture in combination with Kafka’s Connect API, and then implementing applications that leverage the Streams API to perform very fast and efficient local joins of such tables and streams, rather than requiring the application to make a query to a remote database over the network for each record. In this example, the KTable concept in Kafka Streams would enable you to track the latest state (e.g., snapshot) of each table in a local state store, thus greatly reducing the processing latency as well as reducing the load of the remote databases when doing such streaming joins.
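
      A hedged sketch of this enrichment pattern, joining a stream of transactions against a customer table kept up to date via CDC. The topic names and the Transaction, Customer, and EnrichedTransaction types are illustrative only:

      // Latest state per customer ID, continuously updated from the CDC topic.
      KTable<String, Customer> customers = builder.table("customers-cdc");
      
      // Transactions keyed by customer ID (co-partitioned with the customers topic).
      KStream<String, Transaction> transactions = builder.stream("transactions");
      
      // Enrich each transaction with the customer's current state via a local join.
      KStream<String, EnrichedTransaction> enriched = transactions.join(
          customers,
          (txn, customer) -> new EnrichedTransaction(txn, customer)); /* ValueJoiner */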

      The following join operations are supported, see also the diagram in the overview section of Stateful Transformations. Depending on the operands, joins are either windowed joins or non-windowed joins.

      Join operands | Type | (INNER) JOIN | LEFT JOIN | OUTER JOIN
      KStream-to-KStream | Windowed | Supported | Supported | Supported
      KTable-to-KTable | Non-windowed | Supported | Supported | Supported
      KTable-to-KTable Foreign-Key Join | Non-windowed | Supported | Supported | Not Supported
      KStream-to-KTable | Non-windowed | Supported | Supported | Not Supported
      KStream-to-GlobalKTable | Non-windowed | Supported | Supported | Not Supported
      KTable-to-GlobalKTable | N/A | Not Supported | Not Supported | Not Supported

      Each case is explained in more detail in the subsequent sections.

      Join co-partitioning requirements

      For equi-joins, input data must be co-partitioned when joining. This ensures that input records with the same key from both sides of the join are delivered to the same stream task during processing. It is your responsibility to ensure data co-partitioning when joining. Co-partitioning is not required when performing KTable-KTable Foreign-Key joins and Global KTable joins.

      The requirements for data co-partitioning are:

      • The input topics of the join (left side and right side) must have the same number of partitions.
      • All applications that write to the input topics must have the same partitioning strategy so that records with the same key are delivered to the same partition number. In other words, the keyspace of the input data must be distributed across partitions in the same manner. This means that, for example, applications that use Kafka’s Java Producer API must use the same partitioner (cf. the producer setting "partitioner.class" aka ProducerConfig.PARTITIONER_CLASS_CONFIG), and applications that use the Kafka Streams API must use the same StreamPartitioner for operations such as KStream#to(). The good news is that, if you happen to use the default partitioner-related settings across all applications, you do not need to worry about the partitioning strategy.

      Why is data co-partitioning required? Because KStream-KStream, KTable-KTable, and KStream-KTable joins are performed based on the keys of records (e.g., leftRecord.key == rightRecord.key), it is required that the input streams/tables of a join are co-partitioned by key.

      There are two exceptions where co-partitioning is not required. For KStream-GlobalKTable joins, co-partitioning is not required because all partitions of the GlobalKTable’s underlying changelog stream are made available to each KafkaStreams instance. That is, each instance has a full copy of the changelog stream. Further, a KeyValueMapper allows for non-key based joins from the KStream to the GlobalKTable. KTable-KTable Foreign-Key joins also do not require co-partitioning. Kafka Streams internally ensures co-partitioning for Foreign-Key joins.

      Note

      Kafka Streams partly verifies the co-partitioning requirement: During the partition assignment step, i.e. at runtime, Kafka Streams verifies whether the number of partitions for both sides of a join are the same. If they are not, a TopologyBuilderException (runtime exception) is thrown. Note that Kafka Streams cannot verify whether the partitioning strategy matches between the input streams/tables of a join - it is up to the user to ensure that this is the case.

      Ensuring data co-partitioning: If the inputs of a join are not co-partitioned yet, you must ensure this manually. You may follow a procedure such as the one outlined below. It is recommended to repartition the topic with fewer partitions to match the larger partition count, to avoid bottlenecks. Technically it would also be possible to repartition the topic with more partitions down to the smaller partition count. For stream-table joins, it’s recommended to repartition the KStream, because repartitioning a KTable might result in a second state store. For table-table joins, you might also consider the size of the KTables and repartition the smaller one.

      1. Identify the input KStream/KTable in the join whose underlying Kafka topic has the smaller number of partitions. Let’s call this stream/table “SMALLER”, and the other side of the join “LARGER”. To learn about the number of partitions of a Kafka topic you can use, for example, the CLI tool bin/kafka-topics with the --describe option.

      2. Within your application, re-partition the data of “SMALLER”. You must ensure that, when repartitioning the data with repartition, the same partitioner is used as for “LARGER”.

       * If "SMALLER" is a KStream: `KStream#repartition(Repartitioned.numberOfPartitions(...))`.
       * If "SMALLER" is a KTable: `KTable#toStream#repartition(Repartitioned.numberOfPartitions(...).toTable())`.
      
      3. Within your application, perform the join between “LARGER” and the new stream/table (see the sketch below).
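
      A minimal sketch of this procedure, assuming “SMALLER” is a KStream backed by a topic with fewer partitions and “LARGER” has 12 partitions (all topic names and numbers are illustrative):

      KStream<String, Long> smaller = builder.stream("smaller-topic");   // e.g. 6 partitions
      KStream<String, Double> larger = builder.stream("larger-topic");   // e.g. 12 partitions
      
      // Step 2: repartition "SMALLER" to the larger partition count,
      // using the same partitioning strategy as "LARGER".
      KStream<String, Long> repartitioned =
          smaller.repartition(Repartitioned.numberOfPartitions(12));
      
      // Step 3: perform the (windowed) join against the repartitioned stream.
      KStream<String, String> joined = repartitioned.join(
          larger,
          (lv, rv) -> "left=" + lv + ", right=" + rv, /* ValueJoiner */
          JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)));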

      KStream-KStream Join

      KStream-KStream joins are always windowed joins, because otherwise the size of the internal state store used to perform the join - e.g., a sliding window or “buffer” - would grow indefinitely. For stream-stream joins it’s important to highlight that a new input record on one side will produce a join output for each matching record on the other side, and there can be multiple such matching records in a given join window (cf. the row with timestamp 15 in the join semantics table below, for example).

      Join output records are effectively created as follows, leveraging the user-supplied ValueJoiner:

      KeyValue<K, LV> leftRecord = ...;
      KeyValue<K, RV> rightRecord = ...;
      ValueJoiner<LV, RV, JV> joiner = ...;
      
      KeyValue<K, JV> joinOutputRecord = KeyValue.pair(
          leftRecord.key, /* by definition, leftRecord.key == rightRecord.key */
          joiner.apply(leftRecord.value, rightRecord.value)
        );
      
      Transformation | Description
      Inner Join (windowed)
      • (KStream, KStream) -> KStream

      | Performs an INNER JOIN of this stream with another stream. Even though this operation is windowed, the joined stream will be of type KStream<K, ...> rather than KStream<Windowed<K>, ...>. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned. Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned). Several variants of join exist, see the Javadocs for details.

      import java.time.Duration;
      KStream<String, Long> left = ...;
      KStream<String, Double> right = ...;
      
      KStream<String, String> joined = left.join(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
          JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
          Joined.with(
            Serdes.String(), /* key */
            Serdes.Long(),   /* left value */
            Serdes.Double())  /* right value */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key, and window-based , i.e. two input records are joined if and only if their timestamps are “close” to each other as defined by the user-supplied JoinWindows, i.e. the window defines an additional join predicate over the record timestamps.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Input records with a `null` key or a `null` value are ignored and do not trigger the join.
      

      See the semantics overview at the bottom of this section for a detailed description.
      Left Join (windowed)

      • (KStream, KStream) -> KStream

      | Performs a LEFT JOIN of this stream with another stream. Even though this operation is windowed, the joined stream will be of type KStream<K, ...> rather than KStream<Windowed<K>, ...>. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned. Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned). Several variants of leftJoin exist, see the Javadocs for details.

      import java.time.Duration;
      KStream<String, Long> left = ...;
      KStream<String, Double> right = ...;
      
      KStream<String, String> joined = left.leftJoin(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
          JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
          Joined.with(
            Serdes.String(), /* key */
            Serdes.Long(),   /* left value */
            Serdes.Double())  /* right value */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key, and window-based , i.e. two input records are joined if and only if their timestamps are “close” to each other as defined by the user-supplied JoinWindows, i.e. the window defines an additional join predicate over the record timestamps.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Input records with a `null` value are ignored and do not trigger the join.
      
      • For each input record on the left side that does not have any match on the right side, the ValueJoiner will be called with ValueJoiner#apply(leftRecord.value, null); this explains the rows with timestamp=60 and timestamp=80 in the table below, which list [E, null] and [F, null] in the LEFT JOIN column. Note that these left results are emitted only after the specified grace period has passed. Caution: using the deprecated JoinWindows.of(...).grace(...) API might result in eagerly emitted spurious left results.

      See the semantics overview at the bottom of this section for a detailed description.
      Outer Join (windowed)

      • (KStream, KStream) -> KStream

      | Performs an OUTER JOIN of this stream with another stream. Even though this operation is windowed, the joined stream will be of type KStream<K, ...> rather than KStream<Windowed<K>, ...>. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned. Causes data re-partitioning of a stream if and only if the stream was marked for re-partitioning (if both are marked, both are re-partitioned). Several variants of outerJoin exist; see the Javadocs for details.

      import java.time.Duration;
      KStream<String, Long> left = ...;
      KStream<String, Double> right = ...;
      
      KStream<String, String> joined = left.outerJoin(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
          JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
          Joined.with(
            Serdes.String(), /* key */
            Serdes.Long(),   /* left value */
            Serdes.Double())  /* right value */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key, and window-based , i.e. two input records are joined if and only if their timestamps are “close” to each other as defined by the user-supplied JoinWindows, i.e. the window defines an additional join predicate over the record timestamps.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Input records with a `null` value are ignored and do not trigger the join.
      
      • For each input record on one side that does not have any match on the other side, the ValueJoiner will be called with ValueJoiner#apply(leftRecord.value, null) or ValueJoiner#apply(null, rightRecord.value), respectively; this explains the rows with timestamp=60, timestamp=80, and timestamp=100 in the table below, which list [E, null], [F, null], and [null, f] in the OUTER JOIN column. Note that these left and right results are emitted only after the specified grace period has passed. Caution: using the deprecated JoinWindows.of(...).grace(...) API might result in eagerly emitted spurious left/right results.

      See the semantics overview at the bottom of this section for a detailed description.

      Semantics of stream-stream joins: The semantics of the various stream-stream join variants are explained below. To improve the readability of the table, assume that (1) all records have the same key (and thus the key in the table is omitted), and (2) all records are processed in timestamp order. We assume a join window size of 15 seconds with a grace period of 5 seconds.

      Note: If you use the old and now deprecated API to specify the grace period, i.e., JoinWindows.of(...).grace(...), left/outer join results are emitted eagerly, and the observed result might differ from the result shown below.
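      A minimal sketch of the non-deprecated way to declare the 15-second window with a 5-second grace period assumed in this table:

      import java.time.Duration;
      import org.apache.kafka.streams.kstream.JoinWindows;
      
      // Recommended replacement for the deprecated JoinWindows.of(...).grace(...):
      // declare the window size and grace period together; left/outer results are then
      // emitted only after the grace period has passed.
      JoinWindows.ofTimeDifferenceAndGrace(Duration.ofSeconds(15), Duration.ofSeconds(5));
      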

      The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied ValueJoiner for the join, leftJoin, and outerJoin methods, respectively, whenever a new input record is received on either side of the join. An empty table cell denotes that the ValueJoiner is not called at all.

      Timestamp | Left (KStream) | Right (KStream) | (INNER) JOIN | LEFT JOIN | OUTER JOIN
      1   | null |      |                        |                        |
      2   |      | null |                        |                        |
      3   | A    |      |                        |                        |
      4   |      | a    | [A, a]                 | [A, a]                 | [A, a]
      5   | B    |      | [B, a]                 | [B, a]                 | [B, a]
      6   |      | b    | [A, b], [B, b]         | [A, b], [B, b]         | [A, b], [B, b]
      7   | null |      |                        |                        |
      8   |      | null |                        |                        |
      9   | C    |      | [C, a], [C, b]         | [C, a], [C, b]         | [C, a], [C, b]
      10  |      | c    | [A, c], [B, c], [C, c] | [A, c], [B, c], [C, c] | [A, c], [B, c], [C, c]
      11  |      | null |                        |                        |
      12  | null |      |                        |                        |
      13  |      | null |                        |                        |
      14  |      | d    | [A, d], [B, d], [C, d] | [A, d], [B, d], [C, d] | [A, d], [B, d], [C, d]
      15  | D    |      | [D, a], [D, b], [D, c], [D, d] | [D, a], [D, b], [D, c], [D, d] | [D, a], [D, b], [D, c], [D, d]
      40  | E    |      |                        |                        |
      60  | F    |      |                        | [E, null]              | [E, null]
      80  |      | f    |                        | [F, null]              | [F, null]
      100 | G    |      |                        |                        | [null, f]

      KTable-KTable Equi-Join

      KTable-KTable equi-joins are always non-windowed joins. They are designed to be consistent with their counterparts in relational databases. The changelog streams of both KTables are materialized into local state stores to represent the latest snapshot of their table duals. The join result is a new KTable that represents the changelog stream of the join operation.

      Join output records are effectively created as follows, leveraging the user-supplied ValueJoiner:

      KeyValue<K, LV> leftRecord = ...;
      KeyValue<K, RV> rightRecord = ...;
      ValueJoiner<LV, RV, JV> joiner = ...;
      
      KeyValue<K, JV> joinOutputRecord = KeyValue.pair(
          leftRecord.key, /* by definition, leftRecord.key == rightRecord.key */
          joiner.apply(leftRecord.value, rightRecord.value)
        );
      
      Transformation | Description
      Inner Join
      • (KTable, KTable) -> KTable

      | Performs an INNER JOIN of this table with another table. The result is an ever-updating KTable that represents the “current” result of the join. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned.

      KTable<String, Long> left = ...;
      KTable<String, Double> right = ...;
      
      KTable<String, String> joined = left.join(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Input records with a `null` key are ignored and do not trigger the join.
      * Input records with a `null` value are interpreted as _tombstones_ for the corresponding key, which indicate the deletion of the key from the table. Tombstones do not trigger the join. When an input tombstone is received, then an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding key actually exists already in the join result KTable).
      * When joining versioned tables, out-of-order input records, i.e., those for which another record from the same table, with the same key and a larger timestamp, has already been processed, are ignored and do not trigger the join.
      

      See the semantics overview at the bottom of this section for a detailed description.
      Left Join

      • (KTable, KTable) -> KTable

      | Performs a LEFT JOIN of this table with another table. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned.

      KTable<String, Long> left = ...;
      KTable<String, Double> right = ...;
      
      KTable<String, String> joined = left.leftJoin(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Input records with a `null` key are ignored and do not trigger the join.
      * Input records with a `null` value are interpreted as _tombstones_ for the corresponding key, which indicate the deletion of the key from the table. Right-tombstones trigger the join, but left-tombstones don't: when an input tombstone is received, an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding key actually exists already in the join result KTable).
      * When joining versioned tables, out-of-order input records, i.e., those for which another record from the same table, with the same key and a larger timestamp, has already been processed, are ignored and do not trigger the join.
      
      • For each input record on the left side that does not have any match on the right side, the ValueJoiner will be called with ValueJoiner#apply(leftRecord.value, null); this explains the row with timestamp=3 in the table below, which lists [A, null] in the LEFT JOIN column.

      See the semantics overview at the bottom of this section for a detailed description.
      Outer Join

      • (KTable, KTable) -> KTable

      | Performs an OUTER JOIN of this table with another table. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned.

      KTable<String, Long> left = ...;
      KTable<String, Double> right = ...;
      
      KTable<String, String> joined = left.outerJoin(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Input records with a `null` key are ignored and do not trigger the join.
      * Input records with a `null` value are interpreted as _tombstones_ for the corresponding key, which indicate the deletion of the key from the table. Tombstones may trigger joins, depending on the content in the left and right tables. When an input tombstone is received, an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding key actually exists already in the join result KTable).
      * When joining versioned tables, out-of-order input records, i.e., those for which another record from the same table, with the same key and a larger timestamp, has already been processed, are ignored and do not trigger the join.
      
      • For each input record on one side that does not have any match on the other side, the ValueJoiner will be called with ValueJoiner#apply(leftRecord.value, null) or ValueJoiner#apply(null, rightRecord.value), respectively; this explains the rows with timestamp=3 and timestamp=7 in the table below, which list [A, null] and [null, b], respectively, in the OUTER JOIN column.

      See the semantics overview at the bottom of this section for a detailed description.

      Semantics of table-table equi-joins: The semantics of the various table-table equi-join variants are explained below. To improve the readability of the table, you can assume that (1) all records have the same key (and thus the key in the table is omitted) and that (2) all records are processed in timestamp order. The columns INNER JOIN, LEFT JOIN, and OUTER JOIN denote what is passed as arguments to the user-supplied ValueJoiner for the join, leftJoin, and outerJoin methods, respectively, whenever a new input record is received on either side of the join. An empty table cell denotes that the ValueJoiner is not called at all.

      Timestamp | Left (KTable) | Right (KTable) | (INNER) JOIN | LEFT JOIN | OUTER JOIN
      1  | null |      |        |           |
      2  |      | null |        |           |
      3  | A    |      |        | [A, null] | [A, null]
      4  |      | a    | [A, a] | [A, a]    | [A, a]
      5  | B    |      | [B, a] | [B, a]    | [B, a]
      6  |      | b    | [B, b] | [B, b]    | [B, b]
      7  | null |      | null   | null      | [null, b]
      8  |      | null |        |           | null
      9  | C    |      |        | [C, null] | [C, null]
      10 |      | c    | [C, c] | [C, c]    | [C, c]
      11 |      | null | null   | [C, null] | [C, null]
      12 | null |      |        | null      | null
      13 |      | null |        |           |
      14 |      | d    |        |           | [null, d]
      15 | D    |      | [D, d] | [D, d]    | [D, d]

      KTable-KTable Foreign-Key Join

      KTable-KTable foreign-key joins are always non-windowed joins. Foreign-key joins are analogous to joins in SQL. As a rough example:

      SELECT ... FROM {this KTable} JOIN {other KTable} ON {other.key} = {result of foreignKeyExtractor(this.value)} ...

      The output of the operation is a new KTable containing the join result.

      The changelog streams of both KTables are materialized into local state stores to represent the latest snapshot of their table duals. A foreign-key extractor function is applied to each left record to derive an intermediate record keyed by the extracted foreign key, which is then used to look up and join with the record that has the corresponding primary key in the right-hand-side table. The result is a new KTable that represents the changelog stream of the join operation.

      The left KTable can have multiple records which map to the same key on the right KTable. An update to a single left KTable entry may result in a single output event, provided the corresponding key exists in the right KTable. Conversely, a single update to a right KTable entry will result in an update for each record in the left KTable that has the same foreign key.

      Transformation | Description
      Inner Join
      • (KTable, KTable) -> KTable

      | Performs a foreign-key INNER JOIN of this table with another table. The result is an ever-updating KTable that represents the “current” result of the join. (details)

      KTable<String, Long> left = ...;
      KTable<Long, Double> right = ...;
      
      // This foreignKeyExtractor simply uses the left value to map to the right key.
      Function<Long, Long> foreignKeyExtractor = (v) -> v;
      // Alternative: with access to the left table's key
      BiFunction<String, Long, Long> foreignKeyExtractor = (k, v) -> v;
      
      KTable<String, String> joined = left.join(right, foreignKeyExtractor,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate:

        foreignKeyExtractor.apply(leftRecord.value) == rightRecord.key
        
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.

      * Records for which the `foreignKeyExtractor` produces `null` are ignored and do not trigger a join. If you want to join with `null` foreign keys, use a suitable sentinel value to do so (e.g. `"NULL"` for a String field, or `-1` for an auto-incrementing integer field); see the sketch after this list. 
      * Input records with a `null` value are interpreted as _tombstones_ for the corresponding key, which indicate the deletion of the key from the table. Tombstones do not trigger the join. When an input tombstone is received, then an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding key actually exists already in the join result KTable).
      * When joining versioned tables, out-of-order input records, i.e., those for which another record from the same table, with the same key and a larger timestamp, has already been processed, are ignored and do not trigger the join.
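
      A minimal sketch of such a sentinel, assuming a hypothetical left-value type `Order` whose `productId` field is the (nullable) foreign key:

      import java.util.function.Function;
      
      public class NullForeignKeySentinelSketch {
          // Hypothetical left-value type with a nullable foreign-key field, for illustration only.
          record Order(String customerId, Long productId) {}
      
          // Map a missing product reference to the sentinel -1L so that such records still take
          // part in the join instead of being dropped because of a null foreign key.
          static final Function<Order, Long> FOREIGN_KEY_EXTRACTOR =
              order -> order.productId() == null ? -1L : order.productId();
      }
      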
      

      See the semantics overview at the bottom of this section for a detailed description.
      Left Join

      • (KTable, KTable) -> KTable

      | Performs a foreign-key LEFT JOIN of this table with another table. (details)

      KTable<String, Long> left = ...;
      KTable<Long, Double> right = ...;
      
      // This foreignKeyExtractor simply uses the left value to map to the right key.
      Function<Long, Long> foreignKeyExtractor = (v) -> v;
      // Alternative: with access to the left table's key
      BiFunction<String, Long, Long> foreignKeyExtractor = (k, v) -> v;
      
      KTable<String, String> joined = left.leftJoin(right, foreignKeyExtractor,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate:

        foreignKeyExtractor.apply(leftRecord.value) == rightRecord.key
        
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.

      * Input records with a `null` value are interpreted as _tombstones_ for the corresponding key, which indicate the deletion of the key from the table. Right-tombstones trigger the join, but left-tombstones don't: when an input tombstone is received, then an output tombstone is forwarded directly to the join result KTable if required (i.e. only if the corresponding key actually exists already in the join result KTable).
      * When joining versioned tables, out-of-order input records, i.e., those for which another record from the same table, with the same key and a larger timestamp, has already been processed, are ignored and do not trigger the join.
      
      • For each input record on the left side that does not have any match on the right side, the ValueJoiner will be called with ValueJoiner#apply(leftRecord.value, null); this explains the rows with record offset=7 and offset=8 in the table below, which list (q,10,null) and (r,10,null) in the LEFT JOIN column.

      See the semantics overview at the bottom of this section for a detailed description.

      Semantics of table-table foreign-key joins: The semantics of the table-table foreign-key INNER and LEFT JOIN variants are demonstrated below. The key is shown alongside the value for each record. Records are processed in incrementing offset order. The columns INNER JOIN and LEFT JOIN denote what is passed as arguments to the user-supplied ValueJoiner for the join and leftJoin methods, respectively, whenever a new input record is received on either side of the join. An empty table cell denotes that the ValueJoiner is not called at all. For the purpose of this example, Function foreignKeyExtractor simply uses the left-value as the output.

      Record Offset | Left KTable (K, extracted-FK) | Right KTable (FK, VR) | (INNER) JOIN | LEFT JOIN
      1 | (k,1)    | (1,foo)  | (k,1,foo)              | (k,1,foo)
      2 | (k,2)    |          | (k,null)               | (k,2,null)
      3 | (k,3)    |          | (k,null)               | (k,3,null)
      4 |          | (3,bar)  | (k,3,bar)              | (k,3,bar)
      5 | (k,null) |          | (k,null)               | (k,null,null)
      6 | (k,1)    |          | (k,1,foo)              | (k,1,foo)
      7 | (q,10)   |          |                        | (q,10,null)
      8 | (r,10)   |          |                        | (r,10,null)
      9 |          | (10,baz) | (q,10,baz), (r,10,baz) | (q,10,baz), (r,10,baz)

      KStream-KTable Join

      KStream-KTable joins are always non-windowed joins. They allow you to perform table lookups against a KTable (changelog stream) upon receiving a new record from the KStream (record stream). An example use case would be to enrich a stream of user activities (KStream) with the latest user profile information (KTable).

      Join output records are effectively created as follows, leveraging the user-supplied ValueJoiner:

      KeyValue<K, LV> leftRecord = ...;
      KeyValue<K, RV> rightRecord = ...;
      ValueJoiner<LV, RV, JV> joiner = ...;
      
      KeyValue<K, JV> joinOutputRecord = KeyValue.pair(
          leftRecord.key, /* by definition, leftRecord.key == rightRecord.key */
          joiner.apply(leftRecord.value, rightRecord.value)
        );
      
      Transformation | Description
      Inner Join
      • (KStream, KTable) -> KStream

      | Performs an INNER JOIN of this stream with the table, effectively doing a table lookup. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned. Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning. Several variants of join exist; see the Javadocs for details.

      import java.time.Duration;
      KStream<String, Long> left = ...;
      KTable<String, Double> right = ...;
      
      KStream<String, String> joined = left.join(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
          Joined.keySerde(Serdes.String()) /* key */
            .withValueSerde(Serdes.Long()) /* left value */
            .withGracePeriod(Duration.ZERO) /* grace period */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Only input records for the left side (stream) trigger the join. Input records for the right side (table) update only the internal right-side join state.
      * Input records for the stream with a `null` key or a `null` value are ignored and do not trigger the join.
      * Input records for the table with a `null` value are interpreted as _tombstones_ for the corresponding key, which indicate the deletion of the key from the table. Tombstones do not trigger the join.
      
      • When the table is versioned, the table record to join with is determined by performing a timestamped lookup, i.e., the table record which is joined will be the latest-by-timestamp record with timestamp less than or equal to the stream record timestamp. If the stream record timestamp is older than the table’s history retention, then the record is dropped.
      • To use the grace period, the table needs to be versioned. This causes the stream to buffer records for the specified grace period before trying to find a matching record with the right timestamp in the table. The grace period is useful when a table record with a timestamp less than or equal to the stream record's timestamp arrives after the stream record. If that table record arrives within the grace period, the join still occurs; if it does not arrive before the grace period expires, the join proceeds as normal (see the sketch below for one way to combine a versioned table with the grace period).
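
      A minimal sketch, not part of the official example, of how a versioned table and the grace period might be wired together; the topic names, store name, retention, and grace period are illustrative assumptions, and the history retention of the versioned store must cover at least the grace period:

      import java.time.Duration;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.Joined;
      import org.apache.kafka.streams.kstream.KStream;
      import org.apache.kafka.streams.kstream.KTable;
      import org.apache.kafka.streams.kstream.Materialized;
      import org.apache.kafka.streams.state.Stores;
      
      public class VersionedStreamTableJoinSketch {
          public static KStream<String, String> build(final StreamsBuilder builder) {
              // Versioned table backed by a versioned state store with 10 minutes of history.
              final KTable<String, Double> right = builder.table(
                  "table-input-topic",
                  Consumed.with(Serdes.String(), Serdes.Double()),
                  Materialized.<String, Double>as(
                      Stores.persistentVersionedKeyValueStore("versioned-right-store",
                                                              Duration.ofMinutes(10))));
              final KStream<String, Long> left =
                  builder.stream("stream-input-topic", Consumed.with(Serdes.String(), Serdes.Long()));
      
              // With a versioned table, the stream side may buffer records for the grace period,
              // so that a late-arriving table record (by timestamp) can still be matched.
              return left.join(right,
                  (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue,
                  Joined.<String, Long, Double>with(Serdes.String(), Serdes.Long(), Serdes.Double())
                      .withGracePeriod(Duration.ofMinutes(1)));
          }
      }
      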

      See the semantics overview at the bottom of this section for a detailed description.
      Left Join

      • (KStream, KTable) -> KStream

      | Performs a LEFT JOIN of this stream with the table, effectively doing a table lookup. (details) Data must be co-partitioned : The input data for both sides must be co-partitioned. Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning. Several variants of leftJoin exist; see the Javadocs for details.

      import java.time.Duration;
      KStream<String, Long> left = ...;
      KTable<String, Double> right = ...;
      
      KStream<String, String> joined = left.leftJoin(right,
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue, /* ValueJoiner */
          Joined.keySerde(Serdes.String()) /* key */
            .withValueSerde(Serdes.Long()) /* left value */
            .withGracePeriod(Duration.ZERO) /* grace period */
        );
      

      Detailed behavior:

      • The join is key-based , i.e. with the join predicate leftRecord.key == rightRecord.key.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Only input records for the left side (stream) trigger the join. Input records for the right side (table) update only the internal right-side join state.
      * Input records for the stream with a `null` value are ignored and do not trigger the join.
      * Input records for the table with a `null` value are interpreted as _tombstones_ for the corresponding key, which indicate the deletion of the key from the table. Tombstones do not trigger the join.
      
      • For each input record on the left side that does not have any match on the right side, the ValueJoiner will be called with ValueJoiner#apply(leftRecord.value, null); this explains the row with timestamp=3 in the table below, which lists [A, null] in the LEFT JOIN column.
      • When the table is versioned, the table record to join with is determined by performing a timestamped lookup, i.e., the table record which is joined will be the latest-by-timestamp record with timestamp less than or equal to the stream record timestamp. If the stream record timestamp is older than the table’s history retention, then the record that is joined will be null.
      • To use the grace period, the table needs to be versioned. This causes the stream to buffer records for the specified grace period before trying to find a matching record with the right timestamp in the table. The grace period is useful when a table record with a timestamp less than or equal to the stream record's timestamp arrives after the stream record. If that table record arrives within the grace period, the join still occurs; if it does not arrive before the grace period expires, the join proceeds as normal.

      See the semantics overview at the bottom of this section for a detailed description.

      Semantics of stream-table joins: The semantics of the various stream-table join variants are explained below. To improve the readability of the table we assume that (1) all records have the same key (and thus we omit the key in the table) and that (2) all records are processed in timestamp order. The columns INNER JOIN and LEFT JOIN denote what is passed as arguments to the user-supplied ValueJoiner for the join and leftJoin methods, respectively, whenever a new input record is received on either side of the join. An empty table cell denotes that the ValueJoiner is not called at all.

      Timestamp | Left (KStream) | Right (KTable) | (INNER) JOIN | LEFT JOIN
      1  | null |      |        |
      2  |      | null |        |
      3  | A    |      |        | [A, null]
      4  |      | a    |        |
      5  | B    |      | [B, a] | [B, a]
      6  |      | b    |        |
      7  | null |      |        |
      8  |      | null |        |
      9  | C    |      |        | [C, null]
      10 |      | c    |        |
      11 |      | null |        |
      12 | null |      |        |
      13 |      | null |        |
      14 |      | d    |        |
      15 | D    |      | [D, d] | [D, d]

      KStream-GlobalKTable Join

      KStream-GlobalKTable joins are always non-windowed joins. They allow you to perform table lookups against a GlobalKTable (entire changelog stream) upon receiving a new record from the KStream (record stream). An example use case would be “star queries” or “star joins”, where you would enrich a stream of user activities (KStream) with the latest user profile information (GlobalKTable) and further context information (further GlobalKTables). However, because GlobalKTables have no notion of time, a KStream-GlobalKTable join is not a temporal join, and there is no event-time synchronization between updates to a GlobalKTable and processing of KStream records.

      At a high-level, KStream-GlobalKTable joins are very similar to KStream-KTable joins. However, global tables provide you with much more flexibility at some expense when compared to partitioned tables (see the sketch after the following list):

      • They do not require data co-partitioning.
      • They allow for efficient “star joins”; i.e., joining a large-scale “facts” stream against “dimension” tables
      • They allow for joining against foreign keys; i.e., you can lookup data in the table not just by the keys of records in the stream, but also by data in the record values.
      • They make many use cases feasible where you must work on heavily skewed data and thus suffer from hot partitions.
      • They are often more efficient than their partitioned KTable counterpart when you need to perform multiple joins in succession.
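
      A minimal sketch of such a "star join", assuming hypothetical topic names, String values, and a hypothetical extractRegionId helper that derives the second lookup key from the enriched value:

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.GlobalKTable;
      import org.apache.kafka.streams.kstream.KStream;
      
      public class StarJoinSketch {
          public static KStream<String, String> build(final StreamsBuilder builder) {
              // Hypothetical topics: a "facts" stream of user activities keyed by user id,
              // plus two "dimension" tables keyed by user id and region id, respectively.
              final KStream<String, String> activities =
                  builder.stream("user-activities", Consumed.with(Serdes.String(), Serdes.String()));
              final GlobalKTable<String, String> userProfiles =
                  builder.globalTable("user-profiles", Consumed.with(Serdes.String(), Serdes.String()));
              final GlobalKTable<String, String> regions =
                  builder.globalTable("regions", Consumed.with(Serdes.String(), Serdes.String()));
      
              // No co-partitioning or repartitioning is required for either lookup.
              return activities
                  .join(userProfiles,
                      (userId, activity) -> userId,                    // lookup by user id
                      (activity, profile) -> activity + ", profile=" + profile)
                  .join(regions,
                      (userId, enriched) -> extractRegionId(enriched), // lookup by a value-derived key
                      (enriched, region) -> enriched + ", region=" + region);
          }
      
          // Hypothetical helper: derives the region id from the enriched value.
          private static String extractRegionId(final String enrichedValue) {
              return "default-region";
          }
      }
      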

      Join output records are effectively created as follows, leveraging the user-supplied ValueJoiner:

      KeyValue<K, LV> leftRecord = ...;
      KeyValue<K, RV> rightRecord = ...;
      ValueJoiner<LV, RV, JV> joiner = ...;
      
      KeyValue<K, JV> joinOutputRecord = KeyValue.pair(
          leftRecord.key, /* by definition, leftRecord.key == rightRecord.key */
          joiner.apply(leftRecord.value, rightRecord.value)
        );
      
      Transformation | Description
      Inner Join
      • (KStream, GlobalKTable) -> KStream

      | Performs an INNER JOIN of this stream with the global table, effectively doing a table lookup. (details) The GlobalKTable is fully bootstrapped upon (re)start of a KafkaStreams instance, which means the table is fully populated with all the data in the underlying topic that is available at the time of the startup. The actual data processing begins only once the bootstrapping has completed. Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.

      KStream<String, Long> left = ...;
      GlobalKTable<Integer, Double> right = ...;
      
      KStream<String, String> joined = left.join(right,
          (leftKey, leftValue) -> leftKey.length(), /* derive a (potentially) new key by which to lookup against the table */
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
        );
      

      Detailed behavior:

      • The join is indirectly key-based , i.e. with the join predicate KeyValueMapper#apply(leftRecord.key, leftRecord.value) == rightRecord.key.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Only input records for the left side (stream) trigger the join. Input records for the right side (table) update only the internal right-side join state.
      * Input records for the stream with a `null` key or a `null` value are ignored and do not trigger the join.
      * Input records for the table with a `null` value are interpreted as _tombstones_ , which indicate the deletion of a record key from the table. Tombstones do not trigger the join.
      

      Left Join

      • (KStream, GlobalKTable) -> KStream

      | Performs a LEFT JOIN of this stream with the global table, effectively doing a table lookup. (details) The GlobalKTable is fully bootstrapped upon (re)start of a KafkaStreams instance, which means the table is fully populated with all the data in the underlying topic that is available at the time of the startup. The actual data processing begins only once the bootstrapping has completed. Causes data re-partitioning of the stream if and only if the stream was marked for re-partitioning.

      KStream<String, Long> left = ...;
      GlobalKTable<Integer, Double> right = ...;
      
      KStream<String, String> joined = left.leftJoin(right,
          (leftKey, leftValue) -> leftKey.length(), /* derive a (potentially) new key by which to lookup against the table */
          (leftValue, rightValue) -> "left=" + leftValue + ", right=" + rightValue /* ValueJoiner */
        );
      

      Detailed behavior:

      • The join is indirectly key-based , i.e. with the join predicate KeyValueMapper#apply(leftRecord.key, leftRecord.value) == rightRecord.key.
      • The join will be triggered under the conditions listed below whenever new input is received. When it is triggered, the user-supplied ValueJoiner will be called to produce join output records.
      * Only input records for the left side (stream) trigger the join. Input records for the right side (table) update only the internal right-side join state.
      * Input records for the stream with a `null` value are ignored and do not trigger the join.
      * Input records for the table with a `null` value are interpreted as _tombstones_ , which indicate the deletion of a record key from the table. Tombstones do not trigger the join.
      
      • For each input record on the left side that does not have any match on the right side, the ValueJoiner will be called with ValueJoiner#apply(leftRecord.value, null).

      Semantics of stream-global-table joins: The join semantics differ from those of KStream-KTable joins because a KStream-GlobalKTable join is not a temporal join. Another difference is that, for KStream-GlobalKTable joins, the left input record is first “mapped” with a user-supplied KeyValueMapper into the table’s keyspace prior to the table lookup.

      Windowing

      Windowing lets you control how to group records that have the same key for stateful operations such as aggregations or joins into so-called windows. Windows are tracked per record key.

      Note

      A related operation is grouping, which groups all records that have the same key to ensure that data is properly partitioned (“keyed”) for subsequent operations. Once grouped, windowing allows you to further sub-group the records of a key.

      For example, in join operations, a windowing state store is used to store all the records received so far within the defined window boundary. In aggregating operations, a windowing state store is used to store the latest aggregation results per window. Old records in the state store are purged after the specified window retention period. Kafka Streams guarantees to keep a window for at least this specified time; the default value is one day and can be changed via Materialized#withRetention().
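
      A minimal sketch of overriding the default retention for a windowed count, assuming hypothetical topic and store names; the retention must be at least the window size plus the grace period:

      import java.time.Duration;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.common.utils.Bytes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.KTable;
      import org.apache.kafka.streams.kstream.Materialized;
      import org.apache.kafka.streams.kstream.TimeWindows;
      import org.apache.kafka.streams.kstream.Windowed;
      import org.apache.kafka.streams.state.WindowStore;
      
      public class WindowRetentionSketch {
          public static KTable<Windowed<String>, Long> build(final StreamsBuilder builder) {
              return builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
                  .groupByKey()
                  .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofHours(1), Duration.ofMinutes(5)))
                  // Keep windows for two days instead of the one-day default; retention must be
                  // at least window size + grace period.
                  .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("hourly-counts")
                      .withRetention(Duration.ofDays(2)));
          }
      }
      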

      The DSL supports the following types of windows:

      Window name          | Behavior      | Short description
      Hopping time window  | Time-based    | Fixed-size, overlapping windows
      Tumbling time window | Time-based    | Fixed-size, non-overlapping, gap-less windows
      Sliding time window  | Time-based    | Fixed-size, overlapping windows that work on differences between record timestamps
      Session window       | Session-based | Dynamically-sized, non-overlapping, data-driven windows

      Hopping time windows

      Hopping time windows are windows based on time intervals. They model fixed-sized, (possibly) overlapping windows. A hopping window is defined by two properties: the window’s size and its advance interval (aka “hop”). The advance interval specifies by how much a window moves forward relative to the previous one. For example, you can configure a hopping window with a size of 5 minutes and an advance interval of 1 minute. Since hopping windows can overlap - and in general they do - a data record may belong to more than one such window.

      Note

      Hopping windows vs. sliding windows: Hopping windows are sometimes called “sliding windows” in other stream processing tools. Kafka Streams follows the terminology in academic literature, where the semantics of sliding windows are different to those of hopping windows.

      The following code defines a hopping window with a size of 5 minutes and an advance interval of 1 minute:

      import java.time.Duration;
      import org.apache.kafka.streams.kstream.TimeWindows;
      
      // A hopping time window with a size of 5 minutes and an advance interval of 1 minute.
      // The window's name -- the string parameter -- is used to e.g. name the backing state store.
      Duration windowSize = Duration.ofMinutes(5);
      Duration advance = Duration.ofMinutes(1);
      TimeWindows.ofSizeWithNoGrace(windowSize).advanceBy(advance);
      

      This diagram shows windowing a stream of data records with hopping windows. In this diagram the time numbers represent minutes; e.g. t=5 means “at the five-minute mark”. In reality, the unit of time in Kafka Streams is milliseconds, which means the time numbers would need to be multiplied with 60 * 1,000 to convert from minutes to milliseconds (e.g. t=5 would become t=300,000).

      Hopping time windows are aligned to the epoch , with the lower interval bound being inclusive and the upper bound being exclusive. “Aligned to the epoch” means that the first window starts at timestamp zero. For example, hopping windows with a size of 5000ms and an advance interval (“hop”) of 3000ms have predictable window boundaries [0;5000),[3000;8000),... – and not [1000;6000),[4000;9000),... or even something “random” like [1452;6452),[4452;9452),....
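
      The same 5000 ms / 3000 ms example expressed in code:

      import java.time.Duration;
      import org.apache.kafka.streams.kstream.TimeWindows;
      
      // Hopping windows of size 5000 ms advancing by 3000 ms; because windows are aligned to the
      // epoch, the resulting window boundaries are [0;5000),[3000;8000),[6000;11000),...
      TimeWindows.ofSizeWithNoGrace(Duration.ofMillis(5000)).advanceBy(Duration.ofMillis(3000));
      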

      Unlike the non-windowed aggregates that we have seen previously, windowed aggregates return a windowed KTable whose key type is Windowed<K>. This is to differentiate aggregate values with the same key from different windows. The corresponding window instance and the embedded key can be retrieved as Windowed#window() and Windowed#key(), respectively.
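
      A minimal sketch of consuming such a windowed KTable, assuming a hypothetical input topic; it prints the embedded key and the window boundaries for each windowed count:

      import java.time.Duration;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.TimeWindows;
      
      public class WindowedKeySketch {
          public static void build(final StreamsBuilder builder) {
              builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
                  .groupByKey()
                  .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)).advanceBy(Duration.ofMinutes(1)))
                  .count()
                  .toStream()
                  .foreach((windowedKey, count) ->
                      // Windowed#key() gives the embedded key, Windowed#window() the window instance.
                      System.out.println("key=" + windowedKey.key()
                          + " window=[" + windowedKey.window().startTime()
                          + ";" + windowedKey.window().endTime() + ")"
                          + " count=" + count));
          }
      }
      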

      Tumbling time windows

      Tumbling time windows are a special case of hopping time windows and, like the latter, are windows based on time intervals. They model fixed-size, non-overlapping, gap-less windows. A tumbling window is defined by a single property: the window’s size. A tumbling window is a hopping window whose window size is equal to its advance interval. Since tumbling windows never overlap, a data record will belong to one and only one window.

      This diagram shows windowing a stream of data records with tumbling windows. Windows do not overlap because, by definition, the advance interval is identical to the window size. In this diagram the time numbers represent minutes; e.g. t=5 means “at the five-minute mark”. In reality, the unit of time in Kafka Streams is milliseconds, which means the time numbers would need to be multiplied with 60 * 1,000 to convert from minutes to milliseconds (e.g. t=5 would become t=300,000).

      Tumbling time windows are aligned to the epoch , with the lower interval bound being inclusive and the upper bound being exclusive. “Aligned to the epoch” means that the first window starts at timestamp zero. For example, tumbling windows with a size of 5000ms have predictable window boundaries [0;5000),[5000;10000),... – and not [1000;6000),[6000;11000),... or even something “random” like [1452;6452),[6452;11452),....

      The following code defines a tumbling window with a size of 5 minutes:

      import java.time.Duration;
      import org.apache.kafka.streams.kstream.TimeWindows;
      
      // A tumbling time window with a size of 5 minutes (and, by definition, an implicit
      // advance interval of 5 minutes), and grace period of 1 minute.
      Duration windowSize = Duration.ofMinutes(5);
      Duration gracePeriod = Duration.ofMinutes(1);
      TimeWindows.ofSizeAndGrace(windowSize, gracePeriod);
      
      // The above is equivalent to the following code:
      TimeWindows.ofSizeAndGrace(windowSize, gracePeriod).advanceBy(windowSize);
      

      Sliding time windows

      Sliding windows are actually quite different from hopping and tumbling windows. In Kafka Streams, sliding windows are used for join operations, specified by using the JoinWindows class, and windowed aggregations, specified by using the SlidingWindows class.

      A sliding window models a fixed-size window that slides continuously over the time axis. In this model, two data records are said to be included in the same window if (in the case of symmetric windows) the difference of their timestamps is within the window size. As a sliding window moves along the time axis, records may fall into multiple snapshots of the sliding window, but each unique combination of records appears only in one sliding window snapshot.

      The following code defines a sliding window with a time difference of 10 minutes and a grace period of 30 minutes:

      import java.time.Duration;
      import org.apache.kafka.streams.kstream.SlidingWindows;
      
      // A sliding time window with a time difference of 10 minutes and grace period of 30 minutes
      Duration timeDifference = Duration.ofMinutes(10);
      Duration gracePeriod = Duration.ofMinutes(30);
      SlidingWindows.ofTimeDifferenceAndGrace(timeDifference, gracePeriod);
      

      This diagram shows windowing a stream of data records with sliding windows. The overlap of the sliding window snapshots varies depending on the record times. In this diagram, the time numbers represent milliseconds. For example, t=5 means “at the five millisecond mark”.

      Sliding windows are aligned to the data record timestamps, not to the epoch. In contrast to hopping and tumbling windows, the lower and upper window time interval bounds of sliding windows are both inclusive.
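
      A minimal sketch of a sliding-window aggregation that uses such a window definition, assuming a hypothetical input topic:

      import java.time.Duration;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.KTable;
      import org.apache.kafka.streams.kstream.SlidingWindows;
      import org.apache.kafka.streams.kstream.Windowed;
      
      public class SlidingWindowCountSketch {
          public static KTable<Windowed<String>, Long> build(final StreamsBuilder builder) {
              return builder.stream("events-topic", Consumed.with(Serdes.String(), Serdes.String()))
                  .groupByKey()
                  // Two records of the same key fall into the same window if their timestamps
                  // differ by at most 10 minutes; both interval bounds are inclusive.
                  .windowedBy(SlidingWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(10)))
                  .count();
          }
      }
      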

      Session Windows

      Session windows are used to aggregate key-based events into so-called sessions , the process of which is referred to as sessionization. Sessions represent a period of activity separated by a defined gap of inactivity (or “idleness”). Any events processed that fall within the inactivity gap of any existing sessions are merged into the existing sessions. If an event falls outside of the session gap, then a new session will be created.

      Session windows are different from the other window types in that:

      • all windows are tracked independently across keys - e.g. windows of different keys typically have different start and end times
      • their window sizes vary - even windows for the same key typically have different sizes

      The prime area of application for session windows is user behavior analysis. Session-based analyses can range from simple metrics (e.g. count of user visits on a news website or social platform) to more complex metrics (e.g. customer conversion funnel and event flows).

      The following code defines a session window with an inactivity gap of 5 minutes:

      import java.time.Duration;
      import org.apache.kafka.streams.kstream.SessionWindows;
      
      // A session window with an inactivity gap of 5 minutes.
      SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5));
      

      Given the previous session window example, here’s what would happen on an input stream of six records. When the first three records arrive (upper part of the diagram below), we’d have three sessions (see lower part) after having processed those records: two for the green record key, with one session starting and ending at the 0-minute mark (only due to the illustration it looks as if the session goes from 0 to 1), and another starting and ending at the 6-minute mark; and one session for the blue record key, starting and ending at the 2-minute mark.

      Detected sessions after having received three input records: two records for the green record key at t=0 and t=6, and one record for the blue record key at t=2. In this diagram the time numbers represent minutes; e.g. t=5 means “at the five-minute mark”. In reality, the unit of time in Kafka Streams is milliseconds, which means the time numbers would need to be multiplied with 60 * 1,000 to convert from minutes to milliseconds (e.g. t=5 would become t=300,000).

      If we then receive three additional records (including two out-of-order records), what would happen is that the two existing sessions for the green record key will be merged into a single session starting at time 0 and ending at time 6, consisting of a total of three records. The existing session for the blue record key will be extended to end at time 5, consisting of a total of two records. And, finally, there will be a new session for the blue key starting and ending at time 11.

      Detected sessions after having received six input records. Note the two out-of-order data records at t=4 (green) and t=5 (blue), which lead to a merge of sessions and an extension of a session, respectively.
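
      A minimal sketch of a sessionized count per key with the 5-minute inactivity gap from above, assuming a hypothetical input topic; session merging is handled by Kafka Streams itself:

      import java.time.Duration;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.KTable;
      import org.apache.kafka.streams.kstream.SessionWindows;
      import org.apache.kafka.streams.kstream.Windowed;
      
      public class SessionizedCountSketch {
          public static KTable<Windowed<String>, Long> build(final StreamsBuilder builder) {
              return builder.stream("user-events", Consumed.with(Serdes.String(), Serdes.String()))
                  .groupByKey()
                  // Events of the same key that are at most 5 minutes apart end up in one session;
                  // overlapping sessions (e.g. caused by out-of-order records) are merged automatically.
                  .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
                  .count();
          }
      }
      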

      Window Final Results

      In Kafka Streams, windowed computations update their results continuously. As new data arrives for a window, freshly computed results are emitted downstream. For many applications, this is ideal, since fresh results are always available, and Kafka Streams is designed to make programming continuous computations seamless. However, some applications need to take action only on the final result of a windowed computation. Common examples of this are sending alerts or delivering results to a system that doesn’t support updates.

      Suppose that you have an hourly windowed count of events per user. If you want to send an alert when a user has less than three events in an hour, you have a real challenge. All users would match this condition at first, until they accrue enough events, so you cannot simply send an alert when someone matches the condition; you have to wait until you know you won’t see any more events for a particular window and then send the alert.

      Kafka Streams offers a clean way to define this logic: after defining your windowed computation, you can suppress the intermediate results, emitting the final count for each user when the window is closed.

      For example:

      KGroupedStream<UserId, Event> grouped = ...;
      grouped
          .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofHours(1), Duration.ofMinutes(10)))
          .count()
          .suppress(Suppressed.untilWindowCloses(unbounded()))
          .filter((windowedUserId, count) -> count < 3)
          .toStream()
          .foreach((windowedUserId, count) -> sendAlert(windowedUserId.window(), windowedUserId.key(), count));
      

      The key parts of this program are:

      • ofSizeAndGrace(Duration.ofHours(1), Duration.ofMinutes(10)): The specified grace period of 10 minutes (i.e., the Duration.ofMinutes(10) argument) allows us to bound the lateness of events the window will accept. For example, the 09:00 to 10:00 window will accept out-of-order records until 10:10, at which point the window is closed.
      • .suppress(Suppressed.untilWindowCloses(...)): This configures the suppression operator to emit nothing for a window until it closes, and then emit the final result. For example, if user U gets 10 events between 09:00 and 10:10, the filter downstream of the suppression will get no events for the windowed key U@09:00-10:00 until 10:10, and then it will get exactly one with the value 10. This is the final result of the windowed count.
      • unbounded(): This configures the buffer used for storing events until their windows close. Production code is able to put a cap on the amount of memory to use for the buffer, but this simple example creates a buffer with no upper bound.
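
      A minimal sketch of the same kind of topology with a capped suppression buffer, assuming hypothetical topic names and an arbitrary 10 MB limit:

      import java.time.Duration;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Consumed;
      import org.apache.kafka.streams.kstream.Suppressed;
      import org.apache.kafka.streams.kstream.TimeWindows;
      import static org.apache.kafka.streams.kstream.Suppressed.BufferConfig.maxBytes;
      
      public class BoundedSuppressionSketch {
          public static void build(final StreamsBuilder builder) {
              builder.stream("events-by-user", Consumed.with(Serdes.String(), Serdes.String()))
                  .groupByKey()
                  .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofHours(1), Duration.ofMinutes(10)))
                  .count()
                  // Cap the suppression buffer at roughly 10 MB. Because untilWindowCloses() must
                  // never emit early, the buffer shuts the application down instead of emitting
                  // when it is full.
                  .suppress(Suppressed.untilWindowCloses(maxBytes(10_000_000L).shutDownWhenFull()))
                  .toStream()
                  .foreach((windowedUserId, count) ->
                      System.out.println(windowedUserId + " -> " + count));
          }
      }
      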

      One thing to note is that suppression is just like any other Kafka Streams operator, so you can build a topology with two branches emerging from the count, one suppressed, and one not, or even multiple differently configured suppressions. This allows you to apply suppressions where they are needed and otherwise rely on the default continuous update behavior.

      For more detailed information, see the JavaDoc on the Suppressed config object and KIP-328.

      Applying processors (Processor API integration)

      Beyond the aforementioned stateless and stateful transformations, you may also leverage the Processor API from the DSL. There are a number of scenarios where this may be helpful:

      • Customization: You need to implement special, customized logic that is not or not yet available in the DSL.
      • Combining ease-of-use with full flexibility where it’s needed: Even though you generally prefer to use the expressiveness of the DSL, there are certain steps in your processing that require more flexibility and tinkering than the DSL provides. For example, only the Processor API provides access to a record’s metadata such as its topic, partition, and offset information. However, you don’t want to switch completely to the Processor API just because of that; and
      • Migrating from other tools: You are migrating from other stream processing technologies that provide an imperative API, and migrating some of your legacy code to the Processor API was faster and/or easier than to migrate completely to the DSL right away.

      Operations and concepts

      • KStream#process: Process all records in a stream, one record at a time, by applying a Processor (provided by a given ProcessorSupplier);
      • KStream#processValues: Process all records in a stream, one record at a time, by applying a FixedKeyProcessor (provided by a given FixedKeyProcessorSupplier);
      • Processor: A processor of key-value pair records;
      • ContextualProcessor: An abstract implementation of Processor that manages the ProcessorContext instance.
      • FixedKeyProcessor: A processor of key-value pair records where keys are immutable;
      • ContextualFixedKeyProcessor: An abstract implementation of FixedKeyProcessor that manages the FixedKeyProcessorContext instance.
      • ProcessorSupplier: A processor supplier that can create one or more Processor instances; and
      • FixedKeyProcessorSupplier: A processor supplier that can create one or more FixedKeyProcessor instances.

      Examples

      Follow the examples below to learn how to apply process and processValues to your KStream.

      Example                                    | Operation     | State Type
      Categorizing Logs by Severity              | process       | Stateless
      Replacing Slang in Text Messages           | processValues | Stateless
      Cumulative Discounts for a Loyalty Program | process       | Stateful
      Traffic Radar Monitoring Car Count         | processValues | Stateful

      Categorizing Logs by Severity

      • Idea: You have a stream of log messages. Each message contains a severity level (e.g., INFO, WARN, ERROR) in the value. The processor filters messages, routing ERROR messages to a dedicated topic and discarding INFO messages. The rest (WARN) are forwarded to a dedicated topic too.

      • Real-World Context: In a production monitoring system, categorizing logs by severity ensures ERROR logs are sent to a critical incident management system, WARN logs are analyzed for potential risks, and INFO logs are stored for basic reporting purposes.

        public class CategorizingLogsBySeverityExample {
            private static final String ERROR_LOGS_TOPIC = "error-logs-topic";
            private static final String INPUT_LOGS_TOPIC = "input-logs-topic";
            private static final String UNKNOWN_LOGS_TOPIC = "unknown-logs-topic";
            private static final String WARN_LOGS_TOPIC = "warn-logs-topic";

        public static void categorizeWithProcess(final StreamsBuilder builder) {
            final KStream<String, String> logStream = builder.stream(INPUT_LOGS_TOPIC);
            logStream.process(LogSeverityProcessor::new)
                    .to((key, value, recordContext) -> {
                        // Determine the target topic dynamically; the processor sets the
                        // record key to the name of the target topic
                        if (ERROR_LOGS_TOPIC.equals(key)) return ERROR_LOGS_TOPIC;
                        if (WARN_LOGS_TOPIC.equals(key)) return WARN_LOGS_TOPIC;
                        return UNKNOWN_LOGS_TOPIC;
                    });
        }
        
        private static class LogSeverityProcessor extends ContextualProcessor<String, String, String, String> {
            @Override
            public void process(final Record<String, String> record) {
                if (record.value() == null) {
                    return; // Skip null values
                }
        
                // Assume the severity is the first word in the log message
                // For example: "ERROR: Disk not found" -> "ERROR"
                final int colonIndex = record.value().indexOf(':');
                final String severity = colonIndex > 0 ? record.value().substring(0, colonIndex).trim() : "UNKNOWN";
        
                // Route logs based on severity
                switch (severity) {
                    case "ERROR":
                        context().forward(record.withKey(ERROR_LOGS_TOPIC));
                        break;
                    case "WARN":
                        context().forward(record.withKey(WARN_LOGS_TOPIC));
                        break;
                    case "INFO":
                        // INFO logs are ignored
                        break;
                    default:
                        // Forward to an "unknown" topic for logs with unrecognized severities
                        context().forward(record.withKey(UNKNOWN_LOGS_TOPIC));
                }
            }
        }
        

        }

      Replacing Slang in Text Messages

      • Idea: A messaging stream contains user-generated content, and you want to replace slang words with their formal equivalents (e.g., “u” becomes “you”, “brb” becomes “be right back”). The operation only modifies the message value and keeps the key intact.

      • Real-World Context: In customer support chat systems, normalizing text by replacing slang with formal equivalents ensures that automated sentiment analysis tools work accurately and provide reliable insights.

        public class ReplacingSlangTextInMessagesExample {
            private static final Map<String, String> SLANG_DICTIONARY = Map.of(
                    "u", "you",
                    "brb", "be right back",
                    "omg", "oh my god",
                    "btw", "by the way"
            );
            private static final String INPUT_MESSAGES_TOPIC = "input-messages-topic";
            private static final String OUTPUT_MESSAGES_TOPIC = "output-messages-topic";

        public static void replaceWithProcessValues(final StreamsBuilder builder) {
            KStream<String, String> messageStream = builder.stream(INPUT_MESSAGES_TOPIC);
            messageStream.processValues(SlangReplacementProcessor::new).to(OUTPUT_MESSAGES_TOPIC);
        }
        
        private static class SlangReplacementProcessor extends ContextualFixedKeyProcessor<String, String, String> {
            @Override
            public void process(final FixedKeyRecord<String, String> record) {
                if (record.value() == null) {
                    return; // Skip null values
                }
        
                // Replace slang words in the message and forward the rewritten message once,
                // keeping the key intact
                final String[] words = record.value().split("\\s+");
                final StringBuilder replaced = new StringBuilder();
                for (final String word : words) {
                    if (replaced.length() > 0) {
                        replaced.append(' ');
                    }
                    replaced.append(SLANG_DICTIONARY.getOrDefault(word, word));
                }
                context().forward(record.withValue(replaced.toString()));
            }
        }
        

        }

      Cumulative Discounts for a Loyalty Program

      • Idea: A stream of purchase events contains user IDs and transaction amounts. Use a state store to accumulate the total spending of each user. When their total crosses a threshold, apply a discount on their next transaction and update their accumulated total.

      • Real-World Context: In a retail loyalty program, tracking cumulative customer spending enables dynamic rewards, such as issuing a discount when a customer’s total purchases exceed a predefined limit.

        public class CumulativeDiscountsForALoyaltyProgramExample {
            private static final double DISCOUNT_THRESHOLD = 100.0;
            private static final String CUSTOMER_SPENDING_STORE = "customer-spending-store";
            private static final String DISCOUNT_NOTIFICATION_MESSAGE =
                    "Discount applied! You have received a reward for your purchases.";
            private static final String DISCOUNT_NOTIFICATIONS_TOPIC = "discount-notifications-topic";
            private static final String PURCHASE_EVENTS_TOPIC = "purchase-events-topic";

        public static void applyDiscountWithProcess(final StreamsBuilder builder) {
            // Define the state store for tracking cumulative spending
            builder.addStateStore(
                    Stores.keyValueStoreBuilder(
                            Stores.inMemoryKeyValueStore(CUSTOMER_SPENDING_STORE),
                            Serdes.String(),
                            Serdes.Double()
                    )
            );
            final KStream<String, Double> purchaseStream = builder.stream(PURCHASE_EVENTS_TOPIC);
            // Apply the Processor with the state store
            final KStream<String, String> notificationStream =
                    purchaseStream.process(CumulativeDiscountProcessor::new, CUSTOMER_SPENDING_STORE);
            // Send the notifications to the output topic
            notificationStream.to(DISCOUNT_NOTIFICATIONS_TOPIC);
        }
        
        private static class CumulativeDiscountProcessor implements Processor<String, Double, String, String> {
            private KeyValueStore<String, Double> spendingStore;
            private ProcessorContext<String, String> context;
        
            @Override
            public void init(final ProcessorContext<String, String> context) {
                this.context = context;
                // Retrieve the state store for cumulative spending
                spendingStore = context.getStateStore(CUSTOMER_SPENDING_STORE);
            }
        
            @Override
            public void process(final Record<String, Double> record) {
                if (record.value() == null) {
                    return; // Skip null purchase amounts
                }
        
                // Get the current spending total for the customer
                Double currentSpending = spendingStore.get(record.key());
                if (currentSpending == null) {
                    currentSpending = 0.0;
                }
                // Update the cumulative spending
                currentSpending += record.value();
                spendingStore.put(record.key(), currentSpending);
        
                // Check if the customer qualifies for a discount
                if (currentSpending >= DISCOUNT_THRESHOLD) {
                    // Reset the spending after applying the discount
                    spendingStore.put(record.key(), currentSpending - DISCOUNT_THRESHOLD);
                    // Send a discount notification
                    context.forward(record.withValue(DISCOUNT_NOTIFICATION_MESSAGE));
                }
            }
        }
        

        }

      Traffic Radar Monitoring Car Count

      • Idea: A radar monitors cars passing along a road stretch. A system counts the cars for each day, maintaining a cumulative total for the current day in a state store. At the end of the day, the count is emitted and the state is cleared for the next day.

      • Real-World Context: A car counting system can be useful for determining measures for widening or controlling traffic depending on the number of cars passing through the monitored stretch.

        public class TrafficRadarMonitoringCarCountExample {
        private static final String DAILY_COUNT_STORE = "daily-count-store";
        private static final String DAILY_COUNT_TOPIC = "daily-count-topic";
        private static final String RADAR_COUNT_TOPIC = "car-radar-topic";

        public static void countWithProcessValues(final StreamsBuilder builder) {
            // Define a state store for tracking daily car counts
            builder.addStateStore(
                    Stores.keyValueStoreBuilder(
                            Stores.inMemoryKeyValueStore(DAILY_COUNT_STORE),
                            Serdes.String(),
                            Serdes.Long()
                    )
            );
            final KStream<Void, String> radarStream = builder.stream(RADAR_COUNT_TOPIC);
            // Apply the FixedKeyProcessor with the state store
            radarStream.processValues(DailyCarCountProcessor::new, DAILY_COUNT_STORE)
                    .to(DAILY_COUNT_TOPIC);
        }
        
        private static class DailyCarCountProcessor implements FixedKeyProcessor<Void, String, String> {
            private FixedKeyProcessorContext<Void, String> context;
            private KeyValueStore<String, Long> stateStore;
            private static final DateTimeFormatter DATE_FORMATTER =
                    DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneId.systemDefault());
        
            @Override
            public void init(final FixedKeyProcessorContext<Void, String> context) {
                this.context = context;
                stateStore = context.getStateStore(DAILY_COUNT_STORE);
            }
        
            @Override
            public void process(final FixedKeyRecord<Void, String> record) {
                if (record.value() == null) {
                    return; // Skip null events
                }
        
                // Derive the current day from the event timestamp
                final long timestamp = System.currentTimeMillis(); // Use system time for simplicity
                final String currentDay = DATE_FORMATTER.format(Instant.ofEpochMilli(timestamp));
                // Retrieve the current count for the day
                Long dailyCount = stateStore.get(currentDay);
                if (dailyCount == null) {
                    dailyCount = 0L;
                }
                // Increment the count
                dailyCount++;
                stateStore.put(currentDay, dailyCount);
        
                // Emit the current day's count
                context.forward(record.withValue(String.format("Day: %s, Car Count: %s", currentDay, dailyCount)));
            }
        }
        

        }

      Keynotes

      • Type Safety and Flexibility: The process and processValues APIs utilize ProcessorContext and Record or FixedKeyRecord objects for better type safety and flexibility of custom processing logic.
      • Clear State and Logic Management: Implementations for Processor or FixedKeyProcessor should manage state and logic clearly. Use context().forward() for emitting records downstream.
      • Unified API: Consolidates multiple methods into a single, versatile API.
      • Future-Proof: Ensures compatibility with the latest Kafka Streams releases.

      Transformers removal and migration to processors

      As of Kafka 4.0, several deprecated methods in the Kafka Streams API, such as transform, flatTransform, transformValues, flatTransformValues, and process, have been removed. These methods have been replaced by the more versatile Processor API. This guide provides detailed steps for migrating existing code to the new Processor API and explains the benefits of the changes.

      The following deprecated methods are no longer available in Kafka Streams:

      • KStream#transform
      • KStream#flatTransform
      • KStream#transformValues
      • KStream#flatTransformValues
      • KStream#process

      The Processor API now serves as a unified replacement for all these methods. It simplifies the API surface while maintaining support for both stateless and stateful operations.

      Migration Examples

      To migrate from the deprecated transform, transformValues, flatTransform, and flatTransformValues methods to the Processor API (PAPI) in Kafka Streams, let's revisit the previous examples. The new process and processValues methods enable a more flexible and reusable approach by requiring implementations of the Processor or FixedKeyProcessor interfaces.

      Example                                      Migrating from        Migrating to     State Type
      Categorizing Logs by Severity                flatTransform         process          Stateless
      Replacing Slang in Text Messages             flatTransformValues   processValues    Stateless
      Cumulative Discounts for a Loyalty Program   transform             process          Stateful
      Traffic Radar Monitoring Car Count           transformValues       processValues    Stateful

      Categorizing Logs by Severity

      Below, methods categorizeWithFlatTransform and categorizeWithProcess show how you can migrate from flatTransform to process.

      public class CategorizingLogsBySeverityExample {
          private static final String ERROR_LOGS_TOPIC = "error-logs-topic";
          private static final String INPUT_LOGS_TOPIC = "input-logs-topic";
          private static final String UNKNOWN_LOGS_TOPIC = "unknown-logs-topic";
          private static final String WARN_LOGS_TOPIC = "warn-logs-topic";
      
          public static void categorizeWithFlatTransform(final StreamsBuilder builder) {
              final KStream<String, String> logStream = builder.stream(INPUT_LOGS_TOPIC);
              logStream.flatTransform(LogSeverityTransformer::new)
                      .to((key, value, recordContext) -> {
                          // Determine the target topic dynamically
                          if ("ERROR".equals(key)) return ERROR_LOGS_TOPIC;
                          if ("WARN".equals(key)) return WARN_LOGS_TOPIC;
                          return UNKNOWN_LOGS_TOPIC;
                      });
          }
      
          public static void categorizeWithProcess(final StreamsBuilder builder) {
              final KStream<String, String> logStream = builder.stream(INPUT_LOGS_TOPIC);
              logStream.process(LogSeverityProcessor::new)
                      .to((key, value, recordContext) -> {
                          // Determine the target topic dynamically
                          if ("ERROR".equals(key)) return ERROR_LOGS_TOPIC;
                          if ("WARN".equals(key)) return WARN_LOGS_TOPIC;
                          return UNKNOWN_LOGS_TOPIC;
                      });
          }
      
          private static class LogSeverityTransformer implements Transformer<String, String, Iterable<KeyValue<String, String>>> {
              @Override
              public void init(org.apache.kafka.streams.processor.ProcessorContext context) {
              }
      
              @Override
              public Iterable<KeyValue<String, String>> transform(String key, String value) {
                  if (value == null) {
                      return Collections.emptyList(); // Skip null values
                  }
      
                  // Assume the severity is the first word in the log message
                  // For example: "ERROR: Disk not found" -> "ERROR"
                  int colonIndex = value.indexOf(':');
                  String severity = colonIndex > 0 ? value.substring(0, colonIndex).trim() : "UNKNOWN";
      
                  // Create appropriate KeyValue pair based on severity
                  return switch (severity) {
                      case "ERROR" -> List.of(new KeyValue<>("ERROR", value));
                      case "WARN" -> List.of(new KeyValue<>("WARN", value));
                      case "INFO" -> Collections.emptyList(); // INFO logs are ignored
                      default -> List.of(new KeyValue<>("UNKNOWN", value));
                  };
              }
      
              @Override
              public void close() {
              }
          }
      
          private static class LogSeverityProcessor extends ContextualProcessor<String, String, String, String> {
              @Override
              public void process(final Record<String, String> record) {
                  if (record.value() == null) {
                      return; // Skip null values
                  }
      
                  // Assume the severity is the first word in the log message
                  // For example: "ERROR: Disk not found" -> "ERROR"
                  final int colonIndex = record.value().indexOf(':');
                  final String severity = colonIndex > 0 ? record.value().substring(0, colonIndex).trim() : "UNKNOWN";
      
                  // Route logs based on severity, forwarding the severity as the key
                  // so that the TopicNameExtractor passed to to() can pick the target topic
                  switch (severity) {
                      case "ERROR":
                          context().forward(record.withKey("ERROR"));
                          break;
                      case "WARN":
                          context().forward(record.withKey("WARN"));
                          break;
                      case "INFO":
                          // INFO logs are ignored
                          break;
                      default:
                          // Forward with an "UNKNOWN" key for logs with unrecognized severities
                          context().forward(record.withKey("UNKNOWN"));
                  }
              }
          }
      }
      

      Replacing Slang in Text Messages

      Below, methods replaceWithFlatTransformValues and replaceWithProcessValues show how you can migrate from flatTransformValues to processValues.

      public class ReplacingSlangTextInMessagesExample {
          private static final Map<String, String> SLANG_DICTIONARY = Map.of(
                  "u", "you",
                  "brb", "be right back",
                  "omg", "oh my god",
                  "btw", "by the way"
          );
          private static final String INPUT_MESSAGES_TOPIC = "input-messages-topic";
          private static final String OUTPUT_MESSAGES_TOPIC = "output-messages-topic";
      
          public static void replaceWithFlatTransformValues(final StreamsBuilder builder) {
              KStream<String, String> messageStream = builder.stream(INPUT_MESSAGES_TOPIC);
              messageStream.flatTransformValues(SlangReplacementTransformer::new).to(OUTPUT_MESSAGES_TOPIC);
          }
      
          public static void replaceWithProcessValues(final StreamsBuilder builder) {
              KStream<String, String> messageStream = builder.stream(INPUT_MESSAGES_TOPIC);
              messageStream.processValues(SlangReplacementProcessor::new).to(OUTPUT_MESSAGES_TOPIC);
          }
      
          private static class SlangReplacementTransformer implements ValueTransformer<String, Iterable<String>> {
      
              @Override
              public void init(final org.apache.kafka.streams.processor.ProcessorContext context) {
              }
      
              @Override
              public Iterable<String> transform(final String value) {
                  if (value == null) {
                      return Collections.emptyList(); // Skip null values
                  }
      
                  // Replace slang words in the message
                final String[] words = value.split("\\s+");
                  return Arrays.asList(
                          Arrays.stream(words)
                                  .map(word -> SLANG_DICTIONARY.getOrDefault(word, word))
                                  .toArray(String[]::new)
                  );
              }
      
              @Override
              public void close() {
              }
          }
      
          private static class SlangReplacementProcessor extends ContextualFixedKeyProcessor<String, String, String> {
              @Override
              public void process(final FixedKeyRecord<String, String> record) {
                  if (record.value() == null) {
                      return; // Skip null values
                  }
      
                  // Replace slang words in the message
                  final String[] words = record.value().split("\\s+");
                  for (final String word : words) {
                      String replacedWord = SLANG_DICTIONARY.getOrDefault(word, word);
                      context().forward(record.withValue(replacedWord));
                  }
              }
          }
      }
      

      Cumulative Discounts for a Loyalty Program

      Below, methods applyDiscountWithTransform and applyDiscountWithProcess show how you can migrate from transform to process.

      public class CumulativeDiscountsForALoyaltyProgramExample {
          private static final double DISCOUNT_THRESHOLD = 100.0;
          private static final String CUSTOMER_SPENDING_STORE = "customer-spending-store";
          private static final String DISCOUNT_NOTIFICATION_MESSAGE =
                  "Discount applied! You have received a reward for your purchases.";
          private static final String DISCOUNT_NOTIFICATIONS_TOPIC = "discount-notifications-topic";
          private static final String PURCHASE_EVENTS_TOPIC = "purchase-events-topic";
      
          public static void applyDiscountWithTransform(final StreamsBuilder builder) {
              // Define the state store for tracking cumulative spending
              builder.addStateStore(
                      Stores.keyValueStoreBuilder(
                              Stores.inMemoryKeyValueStore(CUSTOMER_SPENDING_STORE),
                              Serdes.String(),
                              Serdes.Double()
                      )
              );
              final KStream<String, Double> purchaseStream = builder.stream(PURCHASE_EVENTS_TOPIC);
              // Apply the Transformer with the state store
              final KStream<String, String> notificationStream =
                      purchaseStream.transform(CumulativeDiscountTransformer::new, CUSTOMER_SPENDING_STORE);
              // Send the notifications to the output topic
              notificationStream.to(DISCOUNT_NOTIFICATIONS_TOPIC);
          }
      
          public static void applyDiscountWithProcess(final StreamsBuilder builder) {
              // Define the state store for tracking cumulative spending
              builder.addStateStore(
                      Stores.keyValueStoreBuilder(
                              Stores.inMemoryKeyValueStore(CUSTOMER_SPENDING_STORE),
                              org.apache.kafka.common.serialization.Serdes.String(),
                              org.apache.kafka.common.serialization.Serdes.Double()
                      )
              );
              final KStream<String, Double> purchaseStream = builder.stream(PURCHASE_EVENTS_TOPIC);
              // Apply the Processor with the state store
              final KStream<String, String> notificationStream =
                      purchaseStream.process(CumulativeDiscountProcessor::new, CUSTOMER_SPENDING_STORE);
              // Send the notifications to the output topic
              notificationStream.to(DISCOUNT_NOTIFICATIONS_TOPIC);
          }
      
          private static class CumulativeDiscountTransformer implements Transformer<String, Double, KeyValue<String, String>> {
              private KeyValueStore<String, Double> spendingStore;
      
              @Override
              public void init(final org.apache.kafka.streams.processor.ProcessorContext context) {
                  // Retrieve the state store for cumulative spending
                  spendingStore = context.getStateStore(CUSTOMER_SPENDING_STORE);
              }
      
              @Override
              public KeyValue<String, String> transform(final String key, final Double value) {
                  if (value == null) {
                      return null; // Skip null purchase amounts
                  }
      
                  // Get the current spending total for the customer
                  Double currentSpending = spendingStore.get(key);
                  if (currentSpending == null) {
                      currentSpending = 0.0;
                  }
                  // Update the cumulative spending
                  currentSpending += value;
                  spendingStore.put(key, currentSpending);
      
                  // Check if the customer qualifies for a discount
                  if (currentSpending >= DISCOUNT_THRESHOLD) {
                      // Reset the spending after applying the discount
                      spendingStore.put(key, currentSpending - DISCOUNT_THRESHOLD);
                      // Return a notification message
                      return new KeyValue<>(key, DISCOUNT_NOTIFICATION_MESSAGE);
                  }
                  return null; // No discount, so no output for this record
              }
      
              @Override
              public void close() {
              }
          }
      
          private static class CumulativeDiscountProcessor implements Processor<String, Double, String, String> {
              private KeyValueStore<String, Double> spendingStore;
              private ProcessorContext<String, String> context;
      
              @Override
              public void init(final ProcessorContext<String, String> context) {
                  this.context = context;
                  // Retrieve the state store for cumulative spending
                  spendingStore = context.getStateStore(CUSTOMER_SPENDING_STORE);
              }
      
              @Override
              public void process(final Record<String, Double> record) {
                  if (record.value() == null) {
                      return; // Skip null purchase amounts
                  }
      
                  // Get the current spending total for the customer
                  Double currentSpending = spendingStore.get(record.key());
                  if (currentSpending == null) {
                      currentSpending = 0.0;
                  }
                  // Update the cumulative spending
                  currentSpending += record.value();
                  spendingStore.put(record.key(), currentSpending);
      
                  // Check if the customer qualifies for a discount
                  if (currentSpending >= DISCOUNT_THRESHOLD) {
                      // Reset the spending after applying the discount
                      spendingStore.put(record.key(), currentSpending - DISCOUNT_THRESHOLD);
                      // Send a discount notification
                      context.forward(record.withValue(DISCOUNT_NOTIFICATION_MESSAGE));
                  }
              }
          }
      }
      

      Traffic Radar Monitoring Car Count

      Below, methods countWithTransformValues and countWithProcessValues show how you can migrate from transformValues to processValues.

      public class TrafficRadarMonitoringCarCountExample {
          private static final String DAILY_COUNT_STORE = "daily-count-store";
          private static final String DAILY_COUNT_TOPIC = "daily-count-topic";
          private static final String RADAR_COUNT_TOPIC = "car-radar-topic";
      
          public static void countWithTransformValues(final StreamsBuilder builder) {
              // Define a state store for tracking daily car counts
              builder.addStateStore(
                      Stores.keyValueStoreBuilder(
                              Stores.inMemoryKeyValueStore(DAILY_COUNT_STORE),
                              org.apache.kafka.common.serialization.Serdes.String(),
                              org.apache.kafka.common.serialization.Serdes.Long()
                      )
              );
              final KStream<Void, String> radarStream = builder.stream(RADAR_COUNT_TOPIC);
              // Apply the ValueTransformer with the state store
              radarStream.transformValues(DailyCarCountTransformer::new, DAILY_COUNT_STORE)
                      .to(DAILY_COUNT_TOPIC);
          }
      
          public static void countWithProcessValues(final StreamsBuilder builder) {
              // Define a state store for tracking daily car counts
              builder.addStateStore(
                      Stores.keyValueStoreBuilder(
                              Stores.inMemoryKeyValueStore(DAILY_COUNT_STORE),
                              org.apache.kafka.common.serialization.Serdes.String(),
                              org.apache.kafka.common.serialization.Serdes.Long()
                      )
              );
              final KStream<Void, String> radarStream = builder.stream(RADAR_COUNT_TOPIC);
              // Apply the FixedKeyProcessor with the state store
              radarStream.processValues(DailyCarCountProcessor::new, DAILY_COUNT_STORE)
                      .to(DAILY_COUNT_TOPIC);
          }
      
          private static class DailyCarCountTransformer implements ValueTransformerWithKey<Void, String, String> {
              private KeyValueStore<String, Long> stateStore;
              private static final DateTimeFormatter DATE_FORMATTER =
                      DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneId.systemDefault());
      
              @Override
              public void init(final org.apache.kafka.streams.processor.ProcessorContext context) {
                  // Access the state store
                  stateStore = context.getStateStore(DAILY_COUNT_STORE);
              }
      
              @Override
              public String transform(Void readOnlyKey, String value) {
                  if (value == null) {
                      return null; // Skip null events
                  }
      
                  // Derive the current day from the event timestamp
                  final long timestamp = System.currentTimeMillis(); // Use system time for simplicity
                  final String currentDay = DATE_FORMATTER.format(Instant.ofEpochMilli(timestamp));
                  // Retrieve the current count for the day
                  Long dailyCount = stateStore.get(currentDay);
                  if (dailyCount == null) {
                      dailyCount = 0L;
                  }
                  // Increment the count
                  dailyCount++;
                  stateStore.put(currentDay, dailyCount);
      
                  // Return the current day's count
                  return String.format("Day: %s, Car Count: %s", currentDay, dailyCount);
              }
      
              @Override
              public void close() {
              }
          }
      
          private static class DailyCarCountProcessor implements FixedKeyProcessor<Void, String, String> {
              private FixedKeyProcessorContext<Void, String> context;
              private KeyValueStore<String, Long> stateStore;
              private static final DateTimeFormatter DATE_FORMATTER =
                      DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneId.systemDefault());
      
              @Override
              public void init(final FixedKeyProcessorContext<Void, String> context) {
                  this.context = context;
                  stateStore = context.getStateStore(DAILY_COUNT_STORE);
              }
      
              @Override
              public void process(final FixedKeyRecord<Void, String> record) {
                  if (record.value() == null) {
                      return; // Skip null events
                  }
      
                  // Derive the current day from the event timestamp
                  final long timestamp = System.currentTimeMillis(); // Use system time for simplicity
                  final String currentDay = DATE_FORMATTER.format(Instant.ofEpochMilli(timestamp));
                  // Retrieve the current count for the day
                  Long dailyCount = stateStore.get(currentDay);
                  if (dailyCount == null) {
                      dailyCount = 0L;
                  }
                  // Increment the count
                  dailyCount++;
                  stateStore.put(currentDay, dailyCount);
      
                  // Emit the current day's count
                  context.forward(record.withValue(String.format("Day: %s, Car Count: %s", currentDay, dailyCount)));
              }
          }
      }
      

      Keynotes

      • Type Safety and Flexibility: The process and processValues APIs utilize ProcessorContext and Record or FixedKeyRecord objects for better type safety and flexibility of custom processing logic.
      • Clear State and Logic Management: Implementations for Processor or FixedKeyProcessor should manage state and logic clearly. Use context().forward() for emitting records downstream.
      • Unified API: Consolidates multiple methods into a single, versatile API.
      • Future-Proof: Ensures compatibility with the latest Kafka Streams releases.

      Removal of Old process Method

      It is worth noting that, in addition to the methods listed above, the process method that integrated the ‘old’ Processor API (i.e., Processor as opposed to the new api.Processor) into the DSL has also been removed. The following example shows how to migrate to the new process.

      Example

      • Idea: The system monitors page views for a website in real-time. When a page reaches a predefined popularity threshold (e.g., 1000 views), the system automatically sends an email alert to the site administrator or marketing team to notify them of the page’s success. This helps teams quickly identify high-performing content and act on it, such as promoting the page further or analyzing the traffic source.

      • Real-World Context: In a content management system (CMS) for a news or blogging platform, it’s crucial to track the popularity of articles or posts. For example:

        • Marketing Teams: Use the notification to highlight trending content on social media or email newsletters.
        • Operations Teams: Use the alert to ensure the site can handle increased traffic for popular pages.
        • Ad Managers: Identify pages where additional ad placements might maximize revenue.

      By automating the detection of popular pages, the system eliminates the need for manual monitoring and ensures timely actions to capitalize on the content’s performance.

      public class PopularPageEmailAlertExample {
          private static final String ALERTS_EMAIL = "alerts@yourcompany.com";
          private static final String PAGE_VIEWS_TOPIC = "page-views-topic";
      
          public static void alertWithOldProcess(StreamsBuilder builder) {
              KStream<String, Long> pageViews = builder.stream(PAGE_VIEWS_TOPIC);
              // Filter pages with exactly 1000 views and process them using the old API
              pageViews.filter((pageId, viewCount) -> viewCount == 1000)
                      .process(PopularPageEmailAlertOld::new);
          }
      
          public static void alertWithNewProcess(StreamsBuilder builder) {
              KStream<String, Long> pageViews = builder.stream(PAGE_VIEWS_TOPIC);
              // Filter pages with exactly 1000 views and process them using the new API
              pageViews.filter((pageId, viewCount) -> viewCount == 1000)
                      .process(PopularPageEmailAlertNew::new);
          }
      
          private static class PopularPageEmailAlertOld extends AbstractProcessor<String, Long> {
              @Override
              public void init(org.apache.kafka.streams.processor.ProcessorContext context) {
                  super.init(context);
                  System.out.println("Initialized email client for: " + ALERTS_EMAIL);
              }
      
              @Override
              public void process(String key, Long value) {
                  if (value == null) return;
      
                  if (value == 1000) {
                      // Send an email alert
                      System.out.printf("ALERT (Old API): Page %s has reached 1000 views. Sending email to %s%n", key, ALERTS_EMAIL);
                  }
              }
      
              @Override
              public void close() {
                  System.out.println("Tearing down email client for: " + ALERTS_EMAIL);
              }
          }
      
          private static class PopularPageEmailAlertNew implements Processor<String, Long, Void, Void> {
              @Override
              public void init(ProcessorContext<Void, Void> context) {
                  System.out.println("Initialized email client for: " + ALERTS_EMAIL);
              }
      
              @Override
              public void process(Record<String, Long> record) {
                  if (record.value() == null) return;
      
                  if (record.value() == 1000) {
                      // Send an email alert
                      System.out.printf("ALERT (New API): Page %s has reached 1000 views. Sending email to %s%n", record.key(), ALERTS_EMAIL);
                  }
              }
      
              @Override
              public void close() {
                  System.out.println("Tearing down email client for: " + ALERTS_EMAIL);
              }
          }
      }
      

      Naming Operators in a Streams DSL application

      Kafka Streams allows you to name processors created via the Streams DSL.
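
      As a rough sketch (the operator, store, and topic names below are illustrative assumptions, not taken from this documentation), names can be supplied through the Named, Grouped, and Materialized classes so that internal processor, repartition-topic, and state-store names stay stable as the topology evolves:

      KStream<String, String> stream = ...;

      stream
          .filter((key, value) -> value != null, Named.as("filter-null-values"))  // names the filter processor
          .groupByKey(Grouped.as("group-by-user"))                                // names the repartition topic
          .count(Materialized.as("user-counts-store"))                            // names the state store/changelog
          .toStream()
          .to("user-counts-topic", Produced.with(Serdes.String(), Serdes.Long()));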

      Controlling KTable emit rate

      A KTable is logically a continuously updated table. These updates make their way to downstream operators whenever new data is available, ensuring that the whole computation is as fresh as possible. Logically speaking, most programs describe a series of transformations, and the update rate is not a factor in the program behavior. In these cases, the rate of update is more of a performance concern. Operators are able to optimize both the network traffic (to the Kafka brokers) and the disk traffic (to the local state stores) by adjusting commit interval and batch size configurations.

      However, some applications need to take other actions, such as calling out to external systems, and therefore need to exercise some control over the rate of invocations, for example of KStream#foreach.

      Rather than achieving this as a side-effect of the KTable record cache, you can directly impose a rate limit via the KTable#suppress operator.

      For example:

      KGroupedTable<String, String> groupedTable = ...;
      groupedTable
          .count()
          .suppress(untilTimeLimit(ofMinutes(5), maxBytes(1_000_000L).emitEarlyWhenFull()))
          .toStream()
          .foreach((key, count) -> updateCountsDatabase(key, count));
      

      This configuration ensures that updateCountsDatabase gets events for each key no more than once every 5 minutes. Note that the latest state for each key has to be buffered in memory for that 5-minute period. You have the option to control the maximum amount of memory to use for this buffer (in this case, 1MB). There is also an option to impose a limit in terms of number of records (or to leave both limits unspecified).

      Additionally, it is possible to choose what happens if the buffer fills up. This example takes a relaxed approach and just emits the oldest records before their 5-minute time limit to bring the buffer back down to size. Alternatively, you can choose to stop processing and shut the application down. This may seem extreme, but it gives you a guarantee that the 5-minute time limit will be absolutely enforced. After the application shuts down, you could allocate more memory for the buffer and resume processing. Emitting early is preferable for most applications.
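
      For comparison, here is a sketch of the stricter alternative described above, reusing the groupedTable and updateCountsDatabase placeholders from the previous example (the record limit of 1,000 is an arbitrary assumption): capping the buffer by record count and shutting down when it fills guarantees the 5-minute limit is never violated.

      groupedTable
          .count()
          .suppress(untilTimeLimit(ofMinutes(5), maxRecords(1_000L).shutDownWhenFull()))
          .toStream()
          .foreach((key, count) -> updateCountsDatabase(key, count));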

      For more detailed information, see the JavaDoc on the Suppressed config object and KIP-328.

      Using timestamp-based semantics for table processors

      By default, tables in Kafka Streams use offset-based semantics. When multiple records arrive for the same key, the one with the largest record offset is considered the latest record for the key, and is the record that appears in aggregation and join results computed on the table. This is true even in the event of out-of-order data. The record with the largest offset is considered to be the latest record for the key, even if this record does not have the largest timestamp.

      An alternative to offset-based semantics is timestamp-based semantics. With timestamp-based semantics, the record with the largest timestamp is considered the latest record, even if there is another record with a larger offset (and smaller timestamp). If there is no out-of-order data (per key), then offset-based semantics and timestamp-based semantics are equivalent; the difference only appears when there is out-of-order data.

      Starting with Kafka Streams 3.5, Kafka Streams supports timestamp-based semantics through the use of versioned state stores. When a table is materialized with a versioned state store, it is a versioned table and will result in different processor semantics in the presence of out-of-order data.

      • When performing a stream-table join, stream-side records will join with the latest-by-timestamp table record which has a timestamp less than or equal to the stream record’s timestamp. This is in contrast to joining a stream to an unversioned table, in which case the latest-by-offset table record will be joined, even if the stream-side record is out-of-order and has a lower timestamp.
      • Aggregations computed on the table will include the latest-by-timestamp record for each key, instead of the latest-by-offset record. Out-of-order updates (per key) will not trigger a new aggregation result. This is true for count and reduce operations as well, in addition to aggregate operations.
      • Table joins will use the latest-by-timestamp record for each key, instead of the latest-by-offset record. Out-of-order updates (per key) will not trigger a new join result. This is true for both primary-key table-table joins and also foreign-key table-table joins. If a versioned table is joined with an unversioned table, the result will be the join of the latest-by-timestamp record from the versioned table with the latest-by-offset record from the unversioned table.
      • Table filter operations will no longer suppress consecutive tombstones, so users may observe more null records downstream of the filter than compared to when filtering an unversioned table. This is done in order to preserve a complete version history downstream, in the event of out-of-order data.
      • suppress operations are not allowed on versioned tables, as this would collapse the version history and lead to undefined behavior.
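
      As a minimal sketch of how a table becomes versioned (the topic, store name, and history retention below are hypothetical), materialize it with a versioned key-value store supplier; this is what switches the table to the timestamp-based semantics described above:

      KTable<String, Double> latestPrices = builder.table(
          "prices-topic",                                          // hypothetical input topic
          Materialized.<String, Double>as(
                  Stores.persistentVersionedKeyValueStore(
                          "versioned-prices-store",                // hypothetical store name
                          Duration.ofMinutes(10)))                 // how long old record versions are retained
              .withKeySerde(Serdes.String())
              .withValueSerde(Serdes.Double()));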

      Once a table is materialized with a versioned store, downstream tables are also considered versioned until any of the following occurs:

      • A downstream table is explicitly materialized, either with an unversioned store supplier or with no store supplier (all stores are unversioned by default, including the default store supplier)
      • Any stateful transformation occurs, including aggregations and joins
      • A table is converted to a stream and back.

      The results of certain processors should not be materialized with versioned stores, as these processors do not produce a complete older version history, and therefore materialization as a versioned table would lead to unpredictable results:

      • Aggregate processors, for both table and stream aggregations. This includes aggregate, count and reduce operations.
      • Table-table join processors, including both primary-key and foreign-key joins.

      For more on versioned stores and how to start using them in your application, see here.

      Writing streams back to Kafka

      Any streams and tables may be (continuously) written back to a Kafka topic. As we will describe in more detail below, the output data might be re-partitioned on its way to Kafka, depending on the situation.

      Writing to Kafka | Description
      to
      • KStream -> void

      Terminal operation. Write the records to Kafka topic(s). (KStream details)

      When to provide serdes explicitly:

      • If you do not specify Serdes explicitly, the default Serdes from the configuration are used.
      • You must specify Serdes explicitly via the Produced class if the key and/or value types of the KStream do not match the configured default Serdes.
      • See Data Types and Serialization for information about configuring default Serdes, available Serdes, and implementing your own custom Serdes.

      A variant of to exists that enables you to specify how the data is produced, by using a Produced instance with, for example, a StreamPartitioner that gives you control over how output records are distributed across the partitions of the output topic. Another variant of to enables you to dynamically choose which topic to send to for each record via a TopicNameExtractor instance.

      KStream<String, Long> stream = ...;
      
      // Write the stream to the output topic, using the configured default key
      // and value serdes.
      stream.to("my-stream-output-topic");
      
      // Write the stream to the output topic, using explicit key and value serdes,
      // (thus overriding the defaults in the config properties).
      stream.to("my-stream-output-topic", Produced.with(Serdes.String(), Serdes.Long());
      

      Causes data re-partitioning if any of the following conditions is true:

      1. If the output topic has a different number of partitions than the stream/table.
      2. If the KStream was marked for re-partitioning.
      3. If you provide a custom StreamPartitioner to explicitly control how to distribute the output records across the partitions of the output topic.
      4. If the key of an output record is null.

      Note

      When you want to write to systems other than Kafka: Besides writing the data back to Kafka, you can also apply a custom processor as a stream sink at the end of the processing to, for example, write to external databases. Note that doing so is not a recommended pattern - we strongly suggest using the Kafka Connect API instead. However, if you do use such a sink processor, be aware that it is now your responsibility to guarantee message delivery semantics when talking to such external systems (e.g., to retry on delivery failure or to prevent message duplication).

      Testing a Streams application

      Kafka Streams comes with a test-utils module to help you test your application; see the testing documentation for details.
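
      As a minimal sketch, assuming a trivial pass-through topology and hypothetical topic names, a topology can be exercised without a running broker using TopologyTestDriver from the test-utils artifact:

      StreamsBuilder builder = new StreamsBuilder();
      builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
             .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

      Properties props = new Properties();
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test");
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // not contacted by the test driver

      try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
          TestInputTopic<String, String> input =
              driver.createInputTopic("input-topic", new StringSerializer(), new StringSerializer());
          TestOutputTopic<String, String> output =
              driver.createOutputTopic("output-topic", new StringDeserializer(), new StringDeserializer());

          input.pipeInput("key", "value");
          System.out.println(output.readKeyValue()); // prints KeyValue(key, value)
      }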

      Kafka Streams DSL for Scala

      The Kafka Streams DSL Java APIs are based on the Builder design pattern, which allows users to incrementally build the target functionality using lower level compositional fluent APIs. These APIs can be called from Scala, but there are several issues:

      1. Additional type annotations - The Java APIs use Java generics in a way that is not fully compatible with the type inference of the Scala compiler. Hence the user has to add type annotations to the Scala code, which seems rather non-idiomatic in Scala.
      2. Verbosity - In some cases the Java APIs appear too verbose compared to idiomatic Scala.
      3. Type Unsafety - The Java APIs offer some options where the compile time type safety is sometimes subverted and can result in runtime errors. This stems from the fact that the Serdes defined as part of config are not type checked during compile time. Hence any missing Serdes can result in runtime errors.

      The Kafka Streams DSL for Scala library is a wrapper over the existing Java APIs for the Kafka Streams DSL that addresses the concerns raised above. It does not attempt to provide idiomatic Scala APIs that one would implement in a Scala library developed from scratch. The intention is to make the Java APIs more usable in Scala through better type inference, enhanced expressiveness, and less boilerplate.

      The library wraps Java Stream DSL APIs in Scala thereby providing:

      1. Better type inference in Scala.
      2. Less boilerplate in application code.
      3. The usual builder-style composition that developers get with the original Java API.
      4. Implicit serializers and de-serializers leading to better abstraction and less verbosity.
      5. Better type safety during compile time.

      All functionality provided by the Kafka Streams DSL for Scala is under the root package name org.apache.kafka.streams.scala.

      Many of the public facing types from the Java API are wrapped. The following Scala abstractions are available to the user:

      • org.apache.kafka.streams.scala.StreamsBuilder
      • org.apache.kafka.streams.scala.kstream.KStream
      • org.apache.kafka.streams.scala.kstream.KTable
      • org.apache.kafka.streams.scala.kstream.KGroupedStream
      • org.apache.kafka.streams.scala.kstream.KGroupedTable
      • org.apache.kafka.streams.scala.kstream.SessionWindowedKStream
      • org.apache.kafka.streams.scala.kstream.TimeWindowedKStream

      The library also has several utility abstractions and modules that the user needs to use for proper semantics.

      • org.apache.kafka.streams.scala.ImplicitConversions: Module that brings into scope the implicit conversions between the Scala and Java classes.
      • org.apache.kafka.streams.scala.serialization.Serdes: Module that contains all primitive Serdes that can be imported as implicits and a helper to create custom Serdes.

      The library is cross-built with Scala 2.12 and 2.13. To reference the library compiled against Scala 2.13, add the following to your Maven pom.xml:

      <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-streams-scala_2.13</artifactId>
        <version>4.0.0</version>
      </dependency>
      

      To use the library compiled against Scala 2.12 replace the artifactId with kafka-streams-scala_2.12.

      When using sbt, you can reference the correct library as follows:

      libraryDependencies += "org.apache.kafka" %% "kafka-streams-scala" % "4.0.0"
      

      Sample Usage

      The library works by wrapping the original Java abstractions of Kafka Streams within a Scala wrapper object and then using implicit conversions between them. All the Scala abstractions are named identically to the corresponding Java abstractions, but they reside in a different package of the library, e.g. the Scala class org.apache.kafka.streams.scala.StreamsBuilder is a wrapper around org.apache.kafka.streams.StreamsBuilder, org.apache.kafka.streams.scala.kstream.KStream is a wrapper around org.apache.kafka.streams.kstream.KStream, and so on.

      Here’s an example of the classic WordCount program that uses the Scala StreamsBuilder to build an instance of KStream, which is a wrapper around the Java KStream. Then we reify to a table and get a KTable, which, again, is a wrapper around the Java KTable.

      The net result is that the following code is structured just like using the Java API, but with Scala and with far fewer type annotations compared to using the Java API directly from Scala. The difference in type annotation usage is more obvious when given an example. Below is an example WordCount implementation that will be used to demonstrate the differences between the Scala and Java API.

      import java.time.Duration
      import java.util.Properties
      
      import org.apache.kafka.streams.kstream.Materialized
      import org.apache.kafka.streams.scala.ImplicitConversions._
      import org.apache.kafka.streams.scala._
      import org.apache.kafka.streams.scala.kstream._
      import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
      
      object WordCountApplication extends App {
        import Serdes._
      
        val props: Properties = {
          val p = new Properties()
          p.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application")
          p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker1:9092")
          p
        }
      
        val builder: StreamsBuilder = new StreamsBuilder
        val textLines: KStream[String, String] = builder.stream[String, String]("TextLinesTopic")
        val wordCounts: KTable[String, Long] = textLines
          .flatMapValues(textLine => textLine.toLowerCase.split("\\W+"))
          .groupBy((_, word) => word)
          .count(Materialized.as("counts-store"))
        wordCounts.toStream.to("WordsWithCountsTopic")
      
        val streams: KafkaStreams = new KafkaStreams(builder.build(), props)
        streams.start()
      
        sys.ShutdownHookThread {
           streams.close(Duration.ofSeconds(10))
        }
      }
      

      In the above code snippet, we don’t have to provide any Serdes, Grouped, Produced, Consumed or Joined explicitly. They will also not be dependent on any Serdes specified in the config. In fact all Serdes specified in the config will be ignored by the Scala APIs. All Serdes and Grouped, Produced, Consumed or Joined will be handled through implicit Serdes as discussed later in the Implicit Serdes section. The complete independence from configuration-based Serdes is what makes this library completely type-safe. Any missing instances of Serdes, Grouped, Produced, Consumed or Joined will be flagged as a compile-time error.

      Implicit Serdes

      One of the common complaints of Scala users with the Java API has been the repetitive usage of the Serdes in API invocations. Many of the APIs need to take the Serdes through abstractions like Grouped, Produced, Repartitioned, Consumed or Joined. And the user has to supply them every time through the with function of these classes.

      The library uses the power of Scala implicit parameters to alleviate this concern. As a user you can provide implicit Serdes or implicit values of Grouped, Produced, Repartitioned, Consumed or Joined once and make your code less verbose. In fact you can just have the implicit Serdes in scope and the library will make the instances of Grouped, Produced, Consumed or Joined available in scope.

      The library also bundles all implicit Serdes of the commonly used primitive types in a Scala module - so just import the module vals and have all Serdes in scope. A similar strategy of modular implicits can be adopted for any user-defined Serdes as well (User-defined Serdes are discussed in the next section).

      Here’s an example:

      // Serdes._ brings into scope implicit Serdes (mostly for primitives)
      // that will set up all Grouped, Produced, Consumed and Joined instances.
      // So all APIs below that accept Grouped, Produced, Consumed or Joined will
      // get these instances automatically
      import Serdes._
      
      val builder = new StreamsBuilder()
      
      val userClicksStream: KStream[String, Long] = builder.stream(userClicksTopic)
      
      val userRegionsTable: KTable[String, String] = builder.table(userRegionsTopic)
      
      // The following code fragment does not have a single instance of Grouped,
      // Produced, Consumed or Joined supplied explicitly.
      // All of them are taken care of by the implicit Serdes brought in by import Serdes._
      val clicksPerRegion: KTable[String, Long] =
        userClicksStream
          .leftJoin(userRegionsTable)((clicks, region) => (if (region == null) "UNKNOWN" else region, clicks))
          .map((_, regionWithClicks) => regionWithClicks)
          .groupByKey
          .reduce(_ + _)
      
      clicksPerRegion.toStream.to(outputTopic)
      

      Quite a few things are going on in the above code snippet that may warrant a few lines of elaboration:

      1. The code snippet does not depend on any config defined Serdes. In fact any Serdes defined as part of the config will be ignored.
      2. All Serdes are picked up from the implicits in scope. And import Serdes._ brings all necessary Serdes in scope.
      3. This is an example of compile time type safety that we don’t have in the Java APIs.
      4. The code looks less verbose and more focused towards the actual transformation that it does on the data stream.

      User-Defined Serdes

      When the default primitive Serdes are not enough and we need to define custom Serdes, the usage is exactly the same as above. Just define the implicit Serdes and start building the stream transformation. Here’s an example with AvroSerde:

      // domain object as a case class
      case class UserClicks(clicks: Long)
      
      // An implicit Serde implementation for the values we want to
      // serialize as avro
      implicit val userClicksSerde: Serde[UserClicks] = new AvroSerde
      
      // Primitive Serdes
      import Serdes._
      
      // And then business as usual ..
      
      val userClicksStream: KStream[String, UserClicks] = builder.stream(userClicksTopic)
      
      val userRegionsTable: KTable[String, String] = builder.table(userRegionsTopic)
      
      // Compute the total per region by summing the individual click counts per region.
      val clicksPerRegion: KTable[String, Long] =
       userClicksStream
      
         // Join the stream against the table.
         .leftJoin(userRegionsTable)((clicks, region) => (if (region == null) "UNKNOWN" else region, clicks.clicks))
      
         // Change the stream from <user> -> <region, clicks> to <region> -> <clicks>
         .map((_, regionWithClicks) => regionWithClicks)
      
         // Compute the total per region by summing the individual click counts per region.
         .groupByKey
         .reduce(_ + _)
      
      // Write the (continuously updating) results to the output topic.
      clicksPerRegion.toStream.to(outputTopic)
      

      A complete example of user-defined Serdes can be found in a test class within the library.


      9.7.4 - Processor API

      Processor API

      The Processor API allows developers to define and connect custom processors and to interact with state stores. With the Processor API, you can define arbitrary stream processors that process one received record at a time, and connect these processors with their associated state stores to compose the processor topology that represents a customized processing logic.

      Table of Contents

      • Overview
      • Defining a Stream Processor
      • Unit Testing Processors
      • State Stores
        • Defining and creating a State Store
        • Fault-tolerant State Stores
        • Enable or Disable Fault Tolerance of State Stores (Store Changelogs)
        • Timestamped State Stores
        • Versioned Key-Value State Stores
        • Readonly State Stores
        • Implementing Custom State Stores
      • Connecting Processors and State Stores
      • Accessing Processor Context

      Overview

      The Processor API can be used to implement both stateless as well as stateful operations, where the latter is achieved through the use of state stores.

      Tip

      Combining the DSL and the Processor API: You can combine the convenience of the DSL with the power and flexibility of the Processor API as described in the section Applying processors (Processor API integration).

      For a complete list of available API functionality, see the Streams API docs.

      Defining a Stream Processor

      A stream processor is a node in the processor topology that represents a single processing step. With the Processor API, you can define arbitrary stream processors that process one received record at a time, and connect these processors with their associated state stores to compose the processor topology.

      You can define a customized stream processor by implementing the Processor interface, which provides the process() API method. The process() method is called on each of the received records.

      The Processor interface also has an init() method, which is called by the Kafka Streams library during the task construction phase. Processor instances should perform any required initialization in this method. The init() method passes in a ProcessorContext instance, which provides access to the metadata of the currently processed record, including its source Kafka topic and partition, its corresponding message offset, and further such information. You can also use this context instance to schedule a punctuation function (via ProcessorContext#schedule()), to forward a new record to the downstream processors (via ProcessorContext#forward()), and to request a commit of the current processing progress (via ProcessorContext#commit()). Any resources you set up in init() can be cleaned up in the close() method. Note that Kafka Streams may re-use a single Processor object by calling init() on it again after close().

      The Processor interface takes four generic parameters: KIn, VIn, KOut, VOut. These define the input and output types that the processor implementation can handle. KIn and VIn define the key and value types of the Record that will be passed to process(). Likewise, KOut and VOut define the forwarded key and value types for the result Record that ProcessorContext#forward() will accept. If your processor does not forward any records at all (or if it only forwards null keys or values), a best practice is to set the output generic type argument to Void. If it needs to forward multiple types that don’t share a common superclass, you will have to set the output generic type argument to Object.
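
      For illustration, here is a sketch of a purely side-effecting processor (the class name and the logging it performs are assumptions, not library code) that forwards nothing and therefore declares both output types as Void:

      public class AuditLogProcessor implements Processor<String, String, Void, Void> {
          @Override
          public void process(final Record<String, String> record) {
              // Side effect only: nothing is forwarded downstream, which is why
              // the output key and value generic types are declared as Void.
              System.out.println("Observed record with key " + record.key());
          }
      }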

      Both the Processor#process() and the ProcessorContext#forward() methods handle records in the form of the Record<K, V> data class. This class gives you access to the main components of a Kafka record: the key, value, timestamp and headers. When forwarding records, you can use the constructor to create a new Record from scratch, or you can use the convenience builder methods to replace one of the Record’s properties and copy over the rest. For example, inputRecord.withValue(newValue) would copy the key, timestamp, and headers from inputRecord while setting the output record’s value to newValue. Note that this does not mutate inputRecord, but instead creates a shallow copy. Beware that this is only a shallow copy, so if you plan to mutate the key, value, or headers elsewhere in the program, you will want to create a deep copy of those fields yourself.
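
      A small sketch of these Record operations, using illustrative keys, values, and timestamps:

      Record<String, String> input = new Record<>("user-1", "hello", 1690000000000L);

      // Copy key, timestamp, and headers; replace only the value (input itself is not mutated)
      Record<String, String> upperCased = input.withValue(input.value().toUpperCase());

      // Build a brand-new record from scratch; headers can be passed as an optional fourth argument
      Record<String, String> fresh = new Record<>("user-2", "hi there", input.timestamp());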

      In addition to handling incoming records via Processor#process(), you have the option to schedule periodic invocation (called “punctuation”) in your processor’s init() method by calling ProcessorContext#schedule() and passing it a Punctuator. The PunctuationType determines what notion of time is used for the punctuation scheduling: either stream-time or wall-clock-time (by default, stream-time is configured to represent event-time via TimestampExtractor). When stream-time is used, punctuate() is triggered purely by data because stream-time is determined (and advanced forward) by the timestamps derived from the input data. When there is no new input data arriving, stream-time is not advanced and thus punctuate() is not called.

      For example, if you schedule a Punctuator function every 10 seconds based on PunctuationType.STREAM_TIME and you process a stream of 60 records with consecutive timestamps from 1 (first record) to 60 seconds (last record), then punctuate() would be called 6 times, regardless of whether processing these 60 records takes a second, a minute, or an hour.

      When wall-clock-time (i.e. PunctuationType.WALL_CLOCK_TIME) is used, punctuate() is triggered purely by the wall-clock time. Reusing the example above, if the Punctuator function is scheduled based on PunctuationType.WALL_CLOCK_TIME, and if these 60 records were processed within 20 seconds, punctuate() is called 2 times (one time every 10 seconds). If these 60 records were processed within 5 seconds, then punctuate() is not called at all. Note that you can schedule multiple Punctuator callbacks with different PunctuationType types within the same processor by calling ProcessorContext#schedule() multiple times inside the init() method.
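
      For instance, here is a minimal sketch, with an illustrative class name and simple pass-through logic, of a processor that registers one punctuator per PunctuationType and cancels both in close():

      import java.time.Duration;
      import org.apache.kafka.streams.processor.Cancellable;
      import org.apache.kafka.streams.processor.PunctuationType;
      import org.apache.kafka.streams.processor.api.Processor;
      import org.apache.kafka.streams.processor.api.ProcessorContext;
      import org.apache.kafka.streams.processor.api.Record;
      
      public class PeriodicCommitProcessor implements Processor<String, String, String, String> {
      
          private ProcessorContext<String, String> context;
          private Cancellable streamTimePunctuator;
          private Cancellable wallClockPunctuator;
      
          @Override
          public void init(final ProcessorContext<String, String> context) {
              this.context = context;
              // data-driven: fires as stream-time advances past each 10-second boundary
              streamTimePunctuator = context.schedule(Duration.ofSeconds(10), PunctuationType.STREAM_TIME,
                  timestamp -> context.forward(new Record<>("stream-time-tick", "", timestamp)));
              // wall-clock-driven: requests a commit roughly every 30 seconds of real time
              wallClockPunctuator = context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME,
                  timestamp -> context.commit());
          }
      
          @Override
          public void process(final Record<String, String> record) {
              context.forward(record); // pass records through unchanged
          }
      
          @Override
          public void close() {
              streamTimePunctuator.cancel();
              wallClockPunctuator.cancel();
          }
      }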

      Attention

      Stream-time is only advanced when Streams processes records. If there are no records to process, or if Streams is waiting for new records due to the Task Idling configuration, then the stream time will not advance and punctuate() will not be triggered if PunctuationType.STREAM_TIME was specified. This behavior is independent of the configured timestamp extractor, i.e., using WallclockTimestampExtractor does not enable wall-clock triggering of punctuate().

      Example

      The following example Processor implements a simple word-count algorithm, in which the following actions are performed:

      • In the init() method, schedule the punctuation every 1000 time units (the time unit is normally milliseconds, which in this example would translate to punctuation every 1 second) and retrieve the local state store by its name “Counts”.

      • In the process() method, upon each received record, split the value string into words, and update their counts into the state store (we will talk about this later in this section).

      • In the punctuate() method, iterate the local state store and send the aggregated counts to the downstream processor (we will talk about downstream processors later in this section), and commit the current stream state.

        public class WordCountProcessor implements Processor<String, String, String, String> {
            private KeyValueStore<String, Integer> kvStore;

            @Override
            public void init(final ProcessorContext<String, String> context) {
                context.schedule(Duration.ofSeconds(1), PunctuationType.STREAM_TIME, timestamp -> {
                    try (final KeyValueIterator<String, Integer> iter = kvStore.all()) {
                        while (iter.hasNext()) {
                            final KeyValue<String, Integer> entry = iter.next();
                            context.forward(new Record<>(entry.key, entry.value.toString(), timestamp));
                        }
                    }
                });
                kvStore = context.getStateStore("Counts");
            }

            @Override
            public void process(final Record<String, String> record) {
                final String[] words = record.value().toLowerCase(Locale.getDefault()).split("\\W+");

                for (final String word : words) {
                    final Integer oldValue = kvStore.get(word);

                    if (oldValue == null) {
                        kvStore.put(word, 1);
                    } else {
                        kvStore.put(word, oldValue + 1);
                    }
                }
            }

            @Override
            public void close() {
                // close any resources managed by this processor
                // Note: Do not close any StateStores as these are managed by the library
            }
        }

      Note

      Stateful processing with state stores: The WordCountProcessor defined above can access the currently received record in its process() method, and it can leverage state stores to maintain processing states to, for example, remember recently arrived records for stateful processing needs like aggregations and joins. For more information, see the state stores documentation.

      Unit Testing Processors

      Kafka Streams comes with a test-utils module to help you write unit tests for your processors; see Unit Testing Processors.

      State Stores

      To implement a stateful Processor, you must provide one or more state stores to the processor (stateless processors do not need state stores). State stores can be used to remember recently received input records, to track rolling aggregates, to de-duplicate input records, and more. Another feature of state stores is that they can be interactively queried from other applications, such as a NodeJS-based dashboard or a microservice implemented in Scala or Go.

      The available state store types in Kafka Streams have fault tolerance enabled by default.

      Defining and creating a State Store

      You can either use one of the available store types or implement your own custom store type. It’s common practice to leverage an existing store type via the Stores factory.

      Note that, when using Kafka Streams, you normally don’t create or instantiate state stores directly in your code. Rather, you define state stores indirectly by creating a so-called StoreBuilder. This builder is used by Kafka Streams as a factory to instantiate the actual state stores locally in application instances when and where needed.

      The following store types are available out of the box.

      Store Type | Storage Engine | Fault-tolerant? | Description
      Persistent KeyValueStore<K, V> | RocksDB | Yes (enabled by default) |
      • The recommended store type for most use cases.

      • Stores its data on local disk.

      • Storage capacity: managed local state can be larger than the memory (heap space) of an application instance, but must fit into the available local disk space.

      • RocksDB settings can be fine-tuned, see RocksDB configuration.

      • Available store variants: timestamped key-value store, versioned key-value store, time window key-value store, session window key-value store.

      • Use persistentTimestampedKeyValueStore when you need a persistent key-(value/timestamp) store that supports put/get/delete and range queries.

      • Use persistentVersionedKeyValueStore when you need a persistent, versioned key-(value/timestamp) store that supports put/get/delete and timestamped get operations.

      • Use persistentWindowStore or persistentTimestampedWindowStore when you need a persistent timeWindowedKey-value or timeWindowedKey-(value/timestamp) store, respectively.

      • Use persistentSessionStore when you need a persistent sessionWindowedKey-value store.

        // Creating a persistent key-value store:
        // here, we create a KeyValueStore<String, Long> named "persistent-counts".
        import org.apache.kafka.streams.state.StoreBuilder;
        import org.apache.kafka.streams.state.Stores;

        // Using a KeyValueStoreBuilder to build a KeyValueStore.
        StoreBuilder<KeyValueStore<String, Long>> countStoreSupplier =
          Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("persistent-counts"),
            Serdes.String(),
            Serdes.Long());
        KeyValueStore<String, Long> countStore = countStoreSupplier.build();

      In-memory KeyValueStore<K, V> | - | Yes (enabled by default) |

      • Stores its data in memory.

      • Storage capacity: managed local state must fit into memory (heap space) of an application instance.

      • Useful when application instances run in an environment where local disk space is either not available or local disk space is wiped in-between app instance restarts.

      • Available store variants: time window key-value store, session window key-value store.

      • Use TimestampedKeyValueStore when you need a key-(value/timestamp) store that supports put/get/delete and range queries.

      • Use TimestampedWindowStore when you need to store windowedKey-(value/timestamp) pairs.

      • There is no built-in in-memory, versioned key-value store at this time.

        // Creating an in-memory key-value store:
        // here, we create a KeyValueStore<String, Long> named "inmemory-counts".
        import org.apache.kafka.streams.state.StoreBuilder;
        import org.apache.kafka.streams.state.Stores;

        // Using a KeyValueStoreBuilder to build a KeyValueStore.
        StoreBuilder<KeyValueStore<String, Long>> countStoreSupplier =
          Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore("inmemory-counts"),
            Serdes.String(),
            Serdes.Long());
        KeyValueStore<String, Long> countStore = countStoreSupplier.build();

      Fault-tolerant State Stores

      To make state stores fault-tolerant and to allow for state store migration without data loss (for example, to migrate a stateful stream task from one machine to another when elastically adding or removing capacity from your application), a state store can be continuously backed up to a Kafka topic behind the scenes. This topic is sometimes referred to as the state store’s associated changelog topic , or its changelog. If you experience machine failure, for example, the state store and the application’s state can be fully restored from its changelog. You can enable or disable this backup feature for a state store.

      Fault-tolerant state stores are backed by a compacted changelog topic. The purpose of compacting this topic is to prevent the topic from growing indefinitely, to reduce the storage consumed in the associated Kafka cluster, and to minimize recovery time if a state store needs to be restored from its changelog topic.

      Fault-tolerant windowed state stores are backed by a topic that uses both compaction and deletion. Because of the structure of the message keys that are being sent to the changelog topics, this combination of deletion and compaction is required for the changelog topics of window stores. For window stores, the message keys are composite keys that include the “normal” key and window timestamps. For these types of composite keys it would not be sufficient to only enable compaction to prevent a changelog topic from growing out of bounds. With deletion enabled, old windows that have expired will be cleaned up by Kafka’s log cleaner as the log segments expire. The default retention setting is Windows#maintainMs() + 1 day. You can override this setting by specifying StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG in the StreamsConfig.
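
      For example, a minimal sketch of overriding this setting (the three-day value is an illustrative assumption):

      import java.time.Duration;
      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;
      
      final Properties props = new Properties();
      // retain window-store changelog records for the window retention period plus an additional 3 days
      props.put(StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG,
                Duration.ofDays(3).toMillis());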

      When you open an Iterator from a state store you must call close() on the iterator when you are done working with it to reclaim resources; or you can use the iterator from within a try-with-resources statement. If you do not close an iterator, you may encounter an OOM error.

      Enable or Disable Fault Tolerance of State Stores (Store Changelogs)

      You can enable or disable fault tolerance for a state store by enabling or disabling the change logging of the store through enableLogging() and disableLogging(). You can also fine-tune the associated topic’s configuration if needed.

      Example for disabling fault-tolerance:

      import org.apache.kafka.streams.state.StoreBuilder;
      import org.apache.kafka.streams.state.Stores;
      
      StoreBuilder<KeyValueStore<String, Long>> countStoreSupplier = Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("Counts"),
          Serdes.String(),
          Serdes.Long())
        .withLoggingDisabled(); // disable backing up the store to a changelog topic
      

      Attention

      If the changelog is disabled then the attached state store is no longer fault tolerant and it can’t have any standby replicas.

      Here is an example of enabling fault tolerance, with additional changelog-topic configuration. You can add any log config from kafka.log.LogConfig; unrecognized configs will be ignored.

      import org.apache.kafka.streams.state.StoreBuilder;
      import org.apache.kafka.streams.state.Stores;
      
      Map<String, String> changelogConfig = new HashMap<>();
      // override min.insync.replicas
      changelogConfig.put(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "1");
      
      StoreBuilder<KeyValueStore<String, Long>> countStoreSupplier = Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("Counts"),
          Serdes.String(),
          Serdes.Long())
        .withLoggingEnabled(changelogConfig); // enable changelogging, with custom changelog settings
      

      Timestamped State Stores

      KTables always store timestamps. A timestamped state store improves stream processing semantics and enables handling out-of-order data in source KTables, detecting out-of-order joins and aggregations, and getting the timestamp of the latest update in an Interactive Query.

      You can query timestamped state stores both with and without a timestamp.
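
      For example, a minimal sketch (store name and types are illustrative assumptions) of defining a timestamped key-value store and reading a value together with the timestamp of its latest update:

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.state.StoreBuilder;
      import org.apache.kafka.streams.state.Stores;
      import org.apache.kafka.streams.state.TimestampedKeyValueStore;
      import org.apache.kafka.streams.state.ValueAndTimestamp;
      
      StoreBuilder<TimestampedKeyValueStore<String, Long>> timestampedStoreBuilder =
          Stores.timestampedKeyValueStoreBuilder(
              Stores.persistentTimestampedKeyValueStore("ts-counts"),
              Serdes.String(),
              Serdes.Long());
      
      // later, inside a processor that has the "ts-counts" store attached:
      // final TimestampedKeyValueStore<String, Long> store = context.getStateStore("ts-counts");
      // final ValueAndTimestamp<Long> valueAndTimestamp = store.get("some-key");
      // if (valueAndTimestamp != null) {
      //     final Long value = valueAndTimestamp.value();
      //     final long latestUpdate = valueAndTimestamp.timestamp();
      // }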

      Upgrade note: All users upgrade with a single rolling bounce per instance.

      • For Processor API users, nothing changes in existing applications, and you have the option of using the timestamped stores.
      • For DSL operators, store data is upgraded lazily in the background.
      • No upgrade happens if you provide a custom XxxBytesStoreSupplier, but you can opt-in by implementing the TimestampedBytesStore interface. In this case, the old format is retained, and Streams uses a proxy store that removes/adds timestamps on read/write.

      Versioned Key-Value State Stores

      Versioned key-value state stores are available since Kafka Streams 3.5. Rather than storing a single record version (value and timestamp) per key, versioned state stores may store multiple record versions per key. This allows versioned state stores to support timestamped retrieval operations to return the latest record (per key) as of a specified timestamp.

      You can create a persistent, versioned state store by passing a VersionedBytesStoreSupplier to the versionedKeyValueStoreBuilder, or by implementing your own VersionedKeyValueStore.
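
      A minimal sketch (store name, retention, and timestamps are illustrative assumptions) of creating such a store and performing a timestamped lookup:

      import java.time.Duration;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.state.StoreBuilder;
      import org.apache.kafka.streams.state.Stores;
      import org.apache.kafka.streams.state.VersionedKeyValueStore;
      import org.apache.kafka.streams.state.VersionedRecord;
      
      StoreBuilder<VersionedKeyValueStore<String, String>> versionedStoreBuilder =
          Stores.versionedKeyValueStoreBuilder(
              Stores.persistentVersionedKeyValueStore("versioned-store", Duration.ofHours(1)), // history retention
              Serdes.String(),
              Serdes.String());
      
      // later, inside a processor that has the "versioned-store" store attached:
      // final VersionedKeyValueStore<String, String> store = context.getStateStore("versioned-store");
      // store.put("some-key", "v1", 10L);
      // store.put("some-key", "v2", 20L);
      // final VersionedRecord<String> asOf15 = store.get("some-key", 15L); // returns "v1" with timestamp 10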

      Each versioned store has an associated, fixed-duration history retention parameter which specifies how long old record versions should be kept for. In particular, a versioned store guarantees to return accurate results for timestamped retrieval operations where the timestamp being queried is within history retention of the current observed stream time.

      History retention also doubles as its grace period , which determines how far back in time out-of-order writes to the store will be accepted. A versioned store will not accept writes (inserts, updates, or deletions) if the timestamp associated with the write is older than the current observed stream time by more than the grace period. Stream time in this context is tracked per-partition, rather than per-key, which means it’s important that grace period (i.e., history retention) be set high enough to accommodate a record with one key arriving out-of-order relative to a record for another key.

      Because the memory footprint of versioned key-value stores is higher than that of non-versioned key-value stores, you may want to adjust your RocksDB memory settings accordingly. Benchmarking your application with versioned stores is also advised as performance is expected to be worse than when using non-versioned stores.

      Versioned stores do not support caching or interactive queries at this time. Also, window stores and global tables may not be versioned.

      Upgrade note: Versioned state stores are opt-in only; no automatic upgrades from non-versioned to versioned stores will take place.

      Upgrades are supported from persistent, non-versioned key-value stores to persistent, versioned key-value stores as long as the original store has the same changelog topic format as the versioned store being upgraded to. Both persistent key-value stores and timestamped key-value stores share the same changelog topic format as persistent versioned key-value stores, and therefore both are eligible for upgrades.

      If you wish to upgrade an application using persistent, non-versioned key-value stores to use persistent, versioned key-value stores instead, you can perform the following procedure:

      • Stop all application instances, and clear any local state directories for the store(s) being upgraded.
      • Update your application code to use versioned stores where desired.
      • Update your changelog topic configs, for the relevant state stores, to set the value of min.compaction.lag.ms to be at least your desired history retention. History retention plus one day is recommended as buffer for the use of broker wall clock time during compaction.
      • Restart your application instances and allow time for the versioned stores to rebuild state from changelog.

      ReadOnly State Stores

      A read-only state store materializes the data from its input topic. It also uses the input topic for fault tolerance, and thus does not have an additional changelog topic (the input topic is re-used as the changelog). Thus, the input topic should be configured with log compaction. Note that no other processor should modify the content of the state store; the only writer should be the associated “state update processor”, while other processors may read the content of the read-only store.

      Note: beware of the partitioning requirements when using read-only state stores for lookups during processing. Make sure the input (changelog) topic is co-partitioned with the processors that read the read-only state store.
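
      As an illustration only, here is a minimal sketch, assuming a compacted input topic "lookup-topic" and illustrative node and store names, of wiring a read-only store with Topology#addReadOnlyStateStore; the state update processor simply writes every record into the store:

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.Topology;
      import org.apache.kafka.streams.processor.api.Processor;
      import org.apache.kafka.streams.processor.api.ProcessorContext;
      import org.apache.kafka.streams.processor.api.Record;
      import org.apache.kafka.streams.state.KeyValueStore;
      import org.apache.kafka.streams.state.Stores;
      
      final Topology topology = new Topology();
      topology.addReadOnlyStateStore(
          Stores.keyValueStoreBuilder(
              Stores.persistentKeyValueStore("read-only-lookup"),
              Serdes.String(),
              Serdes.String()),
          "lookup-source",                 // source node reading the (compacted) input topic
          Serdes.String().deserializer(),
          Serdes.String().deserializer(),
          "lookup-topic",                  // input topic, re-used as the store's changelog
          "lookup-updater",                // the "state update processor"
          () -> new Processor<String, String, Void, Void>() {
              private KeyValueStore<String, String> store;
      
              @Override
              public void init(final ProcessorContext<Void, Void> context) {
                  store = context.getStateStore("read-only-lookup");
              }
      
              @Override
              public void process(final Record<String, String> record) {
                  store.put(record.key(), record.value());
              }
          });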

      Implementing Custom State Stores

      You can use the built-in state store types or implement your own. The primary interface to implement for the store is org.apache.kafka.streams.processor.StateStore. Kafka Streams also has a few extended interfaces such as KeyValueStore and VersionedKeyValueStore.

      Note that your customized org.apache.kafka.streams.processor.StateStore implementation also needs to provide the logic on how to restore the state via the org.apache.kafka.streams.processor.StateRestoreCallback or org.apache.kafka.streams.processor.BatchingStateRestoreCallback interface. Details on how to instantiate these interfaces can be found in the javadocs.

      You also need to provide a “builder” for the store by implementing the org.apache.kafka.streams.state.StoreBuilder interface, which Kafka Streams uses to create instances of your store.

      Accessing Processor Context

      As mentioned in the Defining a Stream Processor section, a ProcessorContext controls the processing workflow, such as scheduling a punctuation function and requesting a commit of the current processing progress.

      This object can also be used to access metadata related to the application, like applicationId, taskId, and stateDir, as well as the RecordMetadata of the currently processed record, such as its topic, partition, and offset.
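
      A minimal sketch (the logging is an illustrative assumption) of reading this metadata from within a processor:

      import org.apache.kafka.streams.processor.api.Processor;
      import org.apache.kafka.streams.processor.api.ProcessorContext;
      import org.apache.kafka.streams.processor.api.Record;
      import org.apache.kafka.streams.processor.api.RecordMetadata;
      
      public class MetadataLoggingProcessor implements Processor<String, String, String, String> {
          private ProcessorContext<String, String> context;
      
          @Override
          public void init(final ProcessorContext<String, String> context) {
              this.context = context;
              // application-level metadata is available as soon as the processor is initialized
              System.out.println("applicationId=" + context.applicationId()
                  + " taskId=" + context.taskId()
                  + " stateDir=" + context.stateDir());
          }
      
          @Override
          public void process(final Record<String, String> record) {
              // record metadata is only present for records that originate from an input topic
              context.recordMetadata().ifPresent((RecordMetadata metadata) ->
                  System.out.println("topic=" + metadata.topic()
                      + " partition=" + metadata.partition()
                      + " offset=" + metadata.offset()));
              context.forward(record);
          }
      }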

      Connecting Processors and State Stores

      Now that a processor (WordCountProcessor) and the state stores have been defined, you can construct the processor topology by connecting these processors and state stores together by using the Topology instance. In addition, you can add source processors with the specified Kafka topics to generate input data streams into the topology, and sink processors with the specified Kafka topics to generate output data streams out of the topology.

      Here is an example implementation:

      Topology builder = new Topology();
      // add the source processor node that takes Kafka topic "source-topic" as input
      builder.addSource("Source", "source-topic")
          // add the WordCountProcessor node which takes the source processor as its upstream processor
          .addProcessor("Process", () -> new WordCountProcessor(), "Source")
          // add the count store associated with the WordCountProcessor processor
          .addStateStore(countStoreBuilder, "Process")
          // add the sink processor node that takes Kafka topic "sink-topic" as output
          // and the WordCountProcessor node as its upstream processor
          .addSink("Sink", "sink-topic", "Process");
      

      Here is a quick explanation of this example:

      • A source processor node named "Source" is added to the topology using the addSource method, with one Kafka topic "source-topic" fed to it.
      • A processor node named "Process" with the pre-defined WordCountProcessor logic is then added as the downstream processor of the "Source" node using the addProcessor method.
      • A predefined persistent key-value state store is created and associated with the "Process" node, using countStoreBuilder.
      • A sink processor node is then added to complete the topology using the addSink method, taking the "Process" node as its upstream processor and writing to a separate "sink-topic" Kafka topic (note that users can also use another overloaded variant of addSink to dynamically determine the Kafka topic to write to for each received record from the upstream processor).

      In some cases, it may be more convenient to add and connect a state store at the same time as you add the processor to the topology. This can be done by implementing ConnectedStoreProvider#stores() on the ProcessorSupplier instead of calling Topology#addStateStore(), like this:

      Topology builder = new Topology();
      // add the source processor node that takes Kafka "source-topic" as input
      builder.addSource("Source", "source-topic")
          // add the WordCountProcessor node which takes the source processor as its upstream processor.
          // the ProcessorSupplier provides the count store associated with the WordCountProcessor
          .addProcessor("Process", new ProcessorSupplier<String, String, String, String>() {
              public Processor<String, String, String, String> get() {
                  return new WordCountProcessor();
              }
      
              public Set<StoreBuilder<?>> stores() {
                  final StoreBuilder<KeyValueStore<String, Long>> countsStoreBuilder =
                      Stores
                          .keyValueStoreBuilder(
                              Stores.persistentKeyValueStore("Counts"),
                              Serdes.String(),
                              Serdes.Long()
                          );
                  return Collections.singleton(countsStoreBuilder);
              }
          }, "Source")
          // add the sink processor node that takes Kafka topic "sink-topic" as output
          // and the WordCountProcessor node as its upstream processor
          .addSink("Sink", "sink-topic", "Process");
      

      This allows for a processor to “own” state stores, effectively encapsulating their usage from the user wiring the topology. Multiple processors that share a state store may provide the same store with this technique, as long as the StoreBuilder is the same instance.

      In these topologies, the "Process" stream processor node is considered a downstream processor of the "Source" node, and an upstream processor of the "Sink" node. As a result, whenever the "Source" node forwards a newly fetched record from Kafka to its downstream "Process" node, the WordCountProcessor#process() method is triggered to process the record and update the associated state store. Whenever context#forward() is called in the WordCountProcessor#punctuate() method, the aggregate records will be sent via the "Sink" processor node to the Kafka topic "sink-topic". Note that in the WordCountProcessor implementation, you must refer to the same store name "Counts" when accessing the key-value store, otherwise an exception will be thrown at runtime, indicating that the state store cannot be found. If the state store is not associated with the processor in the Topology code, accessing it in the processor’s init() method will also throw an exception at runtime, indicating the state store is not accessible from this processor.

      Note that the Topology#addProcessor function takes a ProcessorSupplier as argument, and that the supplier pattern requires that a new Processor instance is returned each time ProcessorSupplier#get() is called. Creating a single Processor object and returning the same object reference from ProcessorSupplier#get() violates the supplier pattern and leads to runtime exceptions, so never provide a singleton Processor instance to the Topology; the ProcessorSupplier should always generate a new instance on each call.

      Now that you have fully defined your processor topology in your application, you can proceed to running the Kafka Streams application.


      9.7.5 - Naming Operators in a Streams DSL application

      Developer Guide for Kafka Streams

      Naming Operators in a Kafka Streams DSL Application

      You can now give names to processors when using the Kafka Streams DSL. In the PAPI, there are Processors and State Stores, and you are required to explicitly name each one.

      At the DSL layer, there are operators. A single DSL operator may compile down to multiple Processors and State Stores, and, if required, repartition topics. But with the Kafka Streams DSL, all these names are generated for you. There is a relationship between the generated processor names, the state store names (and hence changelog topic names), and the repartition topic names. Note that the names of state stores and changelog/repartition topics are “stateful” while processor names are “stateless”.

      This distinction of stateful vs. stateless names has important implications when updating your topology. While the internal naming makes creating a topology with the DSL much more straightforward, there are a couple of trade-offs. The first trade-off is what we could consider a readability issue. The other, more severe trade-off is the shifting of names due to the relationship between the DSL operator and the generated Processors, State Stores, changelog topics, and repartition topics.

      Readability Issues

      By saying there is a readability trade-off, we are referring to viewing a description of the topology. When you render the string description of your topology via the Topology#describe() method, you can see what the processor is, but you don’t have any context for its business purpose. For example, consider the following simple topology:

      KStream<String,String> stream = builder.stream("input");
      stream.filter((k,v) -> !v.equals("invalid_txn"))
      	  .mapValues((v) -> v.substring(0,5))
      	  .to("output");
      

      Running Topology#describe() yields this string:

      Topologies:
         Sub-topology: 0
      	Source: KSTREAM-SOURCE-0000000000 (topics: [input])
      	  --> KSTREAM-FILTER-0000000001
      	Processor: KSTREAM-FILTER-0000000001 (stores: [])
      	  --> KSTREAM-MAPVALUES-0000000002
      	  <-- KSTREAM-SOURCE-0000000000
      	Processor: KSTREAM-MAPVALUES-0000000002 (stores: [])
      	  --> KSTREAM-SINK-0000000003
      	  <-- KSTREAM-FILTER-0000000001
      	Sink: KSTREAM-SINK-0000000003 (topic: output)
      	  <-- KSTREAM-MAPVALUES-0000000002
      

      From this report, you can see what the different operators are, but what is the broader context here? For example, consider KSTREAM-FILTER-0000000001, we can see that it’s a filter operation, which means that records are dropped that don’t match the given predicate. But what is the meaning of the predicate? Additionally, you can see the topic names of the source and sink nodes, but what if the topics aren’t named in a meaningful way? Then you’re left to guess the business purpose behind these topics.

      Also notice the numbering here: the source node is suffixed with 0000000000 indicating it’s the first processor in the topology. The filter is suffixed with 0000000001, indicating it’s the second processor in the topology. In Kafka Streams, there are now overloaded methods for both KStream and KTable that accept a new parameter Named. By using the Named class DSL users can provide meaningful names to the processors in their topology.

      Now let’s take a look at your topology with all the processors named:

      KStream<String,String> stream =
      builder.stream("input", Consumed.as("Customer_transactions_input_topic"));
      stream.filter((k,v) -> !v.equals("invalid_txn"), Named.as("filter_out_invalid_txns"))
      	  .mapValues((v) -> v.substring(0,5), Named.as("Map_values_to_first_6_characters"))
      	  .to("output", Produced.as("Mapped_transactions_output_topic"));
      
      
      Topologies:
         Sub-topology: 0
      	Source: Customer_transactions_input_topic (topics: [input])
      	  --> filter_out_invalid_txns
      	Processor: filter_out_invalid_txns (stores: [])
      	  --> Map_values_to_first_6_characters
      	  <-- Customer_transactions_input_topic
      	Processor: Map_values_to_first_6_characters (stores: [])
      	  --> Mapped_transactions_output_topic
      	  <-- filter_out_invalid_txns
      	Sink: Mapped_transactions_output_topic (topic: output)
      	  <-- Map_values_to_first_6_characters
      

      Now you can look at the topology description and easily understand what role each processor plays in the topology. But there’s another reason for naming your processor nodes: when you have stateful operators, their state stores, changelog topics, and repartition topics remain between restarts of your Kafka Streams application.

      Changing Names

      Generated names are numbered according to where the operator is built in the topology. The name generation strategy is KSTREAM|KTABLE-<operator name>-<number suffix>. The number is a globally incrementing number that represents the operator’s order in the topology. The generated number is prefixed with a varying number of “0”s to create a string that is consistently 10 characters long. This means that if you add/remove or shift the order of operations, the position of the processor shifts, which shifts the name of the processor. Since most processors exist in memory only, this name shifting presents no issue for many topologies. But the name shifting does have implications for topologies with stateful operators or repartition topics. Here’s a different topology with some state:

      KStream<String,String> stream = builder.stream("input");
       stream.groupByKey()
      	   .count()
      	   .toStream()
      	   .to("output");
      

      This topology description yields the following:

      Topologies:
         Sub-topology: 0
      	Source: KSTREAM-SOURCE-0000000000 (topics: [input])
      	 --> KSTREAM-AGGREGATE-0000000002
      	Processor: KSTREAM-AGGREGATE-0000000002 (stores: [KSTREAM-AGGREGATE-STATE-STORE-0000000001])
      	 --> KTABLE-TOSTREAM-0000000003
      	 <-- KSTREAM-SOURCE-0000000000
      	Processor: KTABLE-TOSTREAM-0000000003 (stores: [])
      	 --> KSTREAM-SINK-0000000004
      	 <-- KSTREAM-AGGREGATE-0000000002
      	Sink: KSTREAM-SINK-0000000004 (topic: output)
      	 <-- KTABLE-TOSTREAM-0000000003
      

      You can see from the topology description above that the state store is named KSTREAM-AGGREGATE-STATE-STORE-0000000001. Here’s what happens when you add a filter to keep some of the records out of the aggregation:

      KStream<String,String> stream = builder.stream("input");
      stream.filter((k,v)-> v !=null && v.length() >= 6 )
            .groupByKey()
            .count()
            .toStream()
            .to("output");
      

      And the corresponding topology:

      Topologies:
      	Sub-topology: 0
      	 Source: KSTREAM-SOURCE-0000000000 (topics: [input])
      	  --> KSTREAM-FILTER-0000000001
      	 Processor: KSTREAM-FILTER-0000000001 (stores: [])
      	   --> KSTREAM-AGGREGATE-0000000003
      	   <-- KSTREAM-SOURCE-0000000000
      	 Processor: KSTREAM-AGGREGATE-0000000003 (stores: [KSTREAM-AGGREGATE-STATE-STORE-0000000002])
      	   --> KTABLE-TOSTREAM-0000000004
      	   <-- KSTREAM-FILTER-0000000001
      	 Processor: KTABLE-TOSTREAM-0000000004 (stores: [])
      	   --> KSTREAM-SINK-0000000005
      	   <-- KSTREAM-AGGREGATE-0000000003
      	  Sink: KSTREAM-SINK-0000000005 (topic: output)
      	   <-- KTABLE-TOSTREAM-0000000004
      

      Notice that since you’ve added an operation before the count operation, the state store (and the changelog topic) names have changed. This name change means you can’t do a rolling re-deployment of your updated topology. Also, you must use the Streams Reset Tool to re-calculate the aggregations, because the changelog topic has changed on start-up and the new changelog topic contains no data. Fortunately, there’s an easy solution to remedy this situation. Give the state store a user-defined name instead of relying on the generated one, so you don’t have to worry about topology changes shifting the name of the state store. You’ve had the ability to name repartition topics with the Joined, StreamJoined, and Grouped classes, and to name state stores and changelog topics with Materialized. But it’s worth reiterating the importance of naming these DSL topology operations again. Here’s how your DSL code looks when you give a specific name to your state store:

      KStream<String,String> stream = builder.stream("input");
      stream.filter((k, v) -> v != null && v.length() >= 6)
      	  .groupByKey()
      	  .count(Materialized.as("Purchase_count_store"))
      	  .toStream()
      	  .to("output");
      

      And here’s the corresponding topology:

      Topologies:
         Sub-topology: 0
      	Source: KSTREAM-SOURCE-0000000000 (topics: [input])
      	  --> KSTREAM-FILTER-0000000001
      	Processor: KSTREAM-FILTER-0000000001 (stores: [])
      	  --> KSTREAM-AGGREGATE-0000000002
      	  <-- KSTREAM-SOURCE-0000000000
      	Processor: KSTREAM-AGGREGATE-0000000002 (stores: [Purchase_count_store])
      	  --> KTABLE-TOSTREAM-0000000003
      	  <-- KSTREAM-FILTER-0000000001
      	Processor: KTABLE-TOSTREAM-0000000003 (stores: [])
      	  --> KSTREAM-SINK-0000000004
      	  <-- KSTREAM-AGGREGATE-0000000002
      	Sink: KSTREAM-SINK-0000000004 (topic: output)
      	  <-- KTABLE-TOSTREAM-0000000003
      

      Now, even though you’ve added processors before your state store, the store name and its changelog topic names don’t change. This makes your topology more robust and resilient to changes made by adding or removing processors.

      Conclusion

      It’s a good practice to name your processing nodes when using the DSL, and it’s even more important to do this when your application has “stateful” parts such as state stores (and the accompanying changelog topics) and repartition topics.

      Here are a couple of points to remember when naming your DSL topology:

      1. If you have an existing topology and you haven’t named your state stores (and changelog topics) and repartition topics, we recommend that you do so. But this will be a topology-breaking change, so you’ll need to shut down all application instances, make the changes, and run the Streams Reset Tool. Although this may be inconvenient at first, it’s worth the effort to protect your application from unexpected errors due to topology changes.
      2. If you have a new topology , make sure you name the persistent parts of your topology: state stores (changelog topics) and repartition topics. This way, when you deploy your application, you’re protected from topology changes that otherwise would break your Kafka Streams application. If you don’t want to add names to stateless processors at first, that’s fine as you can always go back and add the names later.
      Here’s a quick reference on naming the critical parts of your Kafka Streams application to prevent topology name changes from breaking your application:

      Operation | Naming Class
      Aggregation repartition topics | Grouped
      KStream-KStream Join repartition topics | StreamJoined
      KStream-KTable Join repartition topic | Joined
      KStream-KStream Join state stores | StreamJoined
      State Stores (for aggregations and KTable-KTable joins) | Materialized
      Stream/Table non-stateful operations | Named
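
      For example, a short sketch (topic and names are illustrative; Purchase_count_store mirrors the example above) that names the repartition topic via Grouped and the state store, and hence its changelog topic, via Materialized:

      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.kstream.Grouped;
      import org.apache.kafka.streams.kstream.KStream;
      import org.apache.kafka.streams.kstream.KTable;
      import org.apache.kafka.streams.kstream.Materialized;
      
      StreamsBuilder builder = new StreamsBuilder();
      KStream<String, String> purchases = builder.stream("purchases-topic");
      KTable<String, Long> purchaseCounts = purchases
          .groupBy((key, value) -> value, Grouped.as("Purchases_grouped_by_item")) // names the repartition topic
          .count(Materialized.as("Purchase_count_store"));                         // names the store and its changelog topic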

      9.7.6 - Data Types and Serialization

      Data Types and Serialization

      Every Kafka Streams application must provide Serdes (Serializer/Deserializer) for the data types of record keys and record values (e.g. java.lang.String) to materialize the data when necessary. Operations that require such Serdes information include: stream(), table(), to(), repartition(), groupByKey(), groupBy().

      You can provide Serdes by using either of these methods, but you must use at least one:

      • By setting default Serdes in the java.util.Properties config instance.
      • By specifying explicit Serdes when calling the appropriate API methods, thus overriding the defaults.

      Table of Contents

      • Configuring Serdes
      • Overriding default Serdes
      • Available Serdes
        • Primitive and basic types
        • JSON
        • Implementing custom serdes
      • Kafka Streams DSL for Scala Implicit Serdes

      Configuring Serdes

      Serdes specified in the Streams configuration are used as the default in your Kafka Streams application. Because this config’s default is null, you must either set a default Serde by using this configuration or pass in Serdes explicitly, as described below.

      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.streams.StreamsConfig;
      
      Properties settings = new Properties();
      // Default serde for keys of data records (here: built-in serde for String type)
      settings.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
      // Default serde for values of data records (here: built-in serde for Long type)
      settings.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass().getName());
      

      Overriding default Serdes

      You can also specify Serdes explicitly by passing them to the appropriate API methods, which overrides the default serde settings:

      import org.apache.kafka.common.serialization.Serde;
      import org.apache.kafka.common.serialization.Serdes;
      
      final Serde<String> stringSerde = Serdes.String();
      final Serde<Long> longSerde = Serdes.Long();
      
      // The stream userCountByRegion has type `String` for record keys (for region)
      // and type `Long` for record values (for user counts).
      KStream<String, Long> userCountByRegion = ...;
      userCountByRegion.to("RegionCountsTopic", Produced.with(stringSerde, longSerde));
      

      If you want to override serdes selectively, i.e., keep the defaults for some fields, then don’t specify the serde whenever you want to leverage the default settings:

      import org.apache.kafka.common.serialization.Serde;
      import org.apache.kafka.common.serialization.Serdes;
      
      // Use the default serializer for record keys (here: region as String) by not specifying the key serde,
      // but override the default serializer for record values (here: userCount as Long).
      final Serde<Long> longSerde = Serdes.Long();
      KStream<String, Long> userCountByRegion = ...;
      userCountByRegion.to("RegionCountsTopic", Produced.valueSerde(Serdes.Long()));
      

      If some of your incoming records are corrupted or ill-formatted, they will cause the deserializer class to report an error. Since 1.0.x we have introduced a DeserializationExceptionHandler interface which allows you to customize how to handle such records. The customized implementation of the interface can be specified via the StreamsConfig. For more details, see the Configuring a Streams Application section.
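
      For illustration, a minimal sketch of configuring the built-in LogAndContinueExceptionHandler, which logs and skips records that cannot be deserialized (the config constant shown is the long-standing “default.”-prefixed variant):

      import java.util.Properties;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;
      
      final Properties props = new Properties();
      // log a warning and skip the corrupted record instead of failing the application
      props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                LogAndContinueExceptionHandler.class);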

      Available Serdes

      Primitive and basic types

      Apache Kafka includes several built-in serde implementations for Java primitives and basic types such as byte[] in its kafka-clients Maven artifact:

      <dependency>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka-clients</artifactId>
          <version>4.0.0</version>
      </dependency>
      

      This artifact provides the following serde implementations under the package org.apache.kafka.common.serialization, which you can leverage when e.g., defining default serializers in your Streams configuration.

      Data type | Serde
      byte[] | Serdes.ByteArray(), Serdes.Bytes() (see tip below)
      ByteBuffer | Serdes.ByteBuffer()
      Double | Serdes.Double()
      Integer | Serdes.Integer()
      Long | Serdes.Long()
      String | Serdes.String()
      UUID | Serdes.UUID()
      Void | Serdes.Void()
      List | Serdes.ListSerde()
      Boolean | Serdes.Boolean()

      Tip

      Bytes is a wrapper for Java’s byte[] (byte array) that supports proper equality and ordering semantics. You may want to consider using Bytes instead of byte[] in your applications.

      JSON

      You can use JsonSerializer and JsonDeserializer from Kafka Connect to construct JSON-compatible serializers and deserializers using Serdes.serdeFrom(<serializerInstance>, <deserializerInstance>). Note that Kafka Connect’s JSON (de)serializer requires Java 17.

      Implementing custom Serdes

      If you need to implement custom Serdes, your best starting point is to take a look at the source code references of existing Serdes (see previous section). Typically, your workflow will be similar to:

      1. Write a serializer for your data type T by implementing org.apache.kafka.common.serialization.Serializer.
      2. Write a deserializer for T by implementing org.apache.kafka.common.serialization.Deserializer.
      3. Write a serde for T by implementing org.apache.kafka.common.serialization.Serde, which you either do manually (see existing Serdes in the previous section) or by leveraging helper functions in Serdes such as Serdes.serdeFrom(Serializer<T>, Deserializer<T>). Note that you will need to implement your own class (that has no generic types) if you want to use your custom serde in the configuration provided to KafkaStreams. If your serde class has generic types or you use Serdes.serdeFrom(Serializer<T>, Deserializer<T>), you can pass your serde only via methods calls (for example builder.stream("topicName", Consumed.with(...))).
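
      As an illustration of these three steps, here is a minimal sketch for a hypothetical Person type, using a naive "name,age" text encoding purely for demonstration:

      import java.nio.charset.StandardCharsets;
      import org.apache.kafka.common.serialization.Deserializer;
      import org.apache.kafka.common.serialization.Serde;
      import org.apache.kafka.common.serialization.Serdes;
      import org.apache.kafka.common.serialization.Serializer;
      
      public class PersonSerde {
      
          public static final class Person {
              public final String name;
              public final int age;
              public Person(final String name, final int age) {
                  this.name = name;
                  this.age = age;
              }
          }
      
          // Step 1: the Serializer
          public static class PersonSerializer implements Serializer<Person> {
              @Override
              public byte[] serialize(final String topic, final Person person) {
                  return person == null ? null
                      : (person.name + "," + person.age).getBytes(StandardCharsets.UTF_8);
              }
          }
      
          // Step 2: the Deserializer
          public static class PersonDeserializer implements Deserializer<Person> {
              @Override
              public Person deserialize(final String topic, final byte[] bytes) {
                  if (bytes == null) {
                      return null;
                  }
                  final String[] parts = new String(bytes, StandardCharsets.UTF_8).split(",", 2);
                  return new Person(parts[0], Integer.parseInt(parts[1]));
              }
          }
      
          // Step 3: the Serde, composed via Serdes.serdeFrom(); pass it explicitly, e.g.
          // builder.stream("person-topic", Consumed.with(Serdes.String(), PersonSerde.serde()))
          public static Serde<Person> serde() {
              return Serdes.serdeFrom(new PersonSerializer(), new PersonDeserializer());
          }
      }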

      Kafka Streams DSL for Scala Implicit Serdes

      When using the Kafka Streams DSL for Scala, you’re not required to configure a default Serde. In fact, it’s not supported. Serdes are instead provided implicitly by default implementations for common primitive data types. See the Implicit Serdes and User-Defined Serdes sections in the DSL API documentation for details.


      9.7.7 - Testing a Streams Application

      Testing Kafka Streams

      Table of Contents

      • Importing the test utilities
      • Testing Streams applications
      • Unit testing Processors

      Importing the test utilities

      To test a Kafka Streams application, Kafka provides a test-utils artifact that can be added as a regular dependency to your test code base. Example pom.xml snippet when using Maven:

      <dependency>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka-streams-test-utils</artifactId>
          <version>4.0.0</version>
          <scope>test</scope>
      </dependency>
      

      Testing a Streams application

      The test-utils package provides a TopologyTestDriver that can be used to pipe data through a Topology that is either assembled manually using the Processor API or via the DSL using StreamsBuilder. The test driver simulates the library runtime that continuously fetches records from input topics and processes them by traversing the topology. You can use the test driver to verify that your specified processor topology computes the correct result with the manually piped-in data records. The test driver captures the resulting records and allows you to query its embedded state stores.

      // Processor API
      Topology topology = new Topology();
      topology.addSource("sourceProcessor", "input-topic");
      topology.addProcessor("processor", ..., "sourceProcessor");
      topology.addSink("sinkProcessor", "output-topic", "processor");
      // or
      // using DSL
      StreamsBuilder builder = new StreamsBuilder();
      builder.stream("input-topic").filter(...).to("output-topic");
      Topology topology = builder.build();
      
      // create test driver
      TopologyTestDriver testDriver = new TopologyTestDriver(topology);
      

      With the test driver you can create a TestInputTopic by giving the topic name and the corresponding serializers. TestInputTopic provides various methods to pipe new message values, key/value pairs, or lists of KeyValue objects.

      TestInputTopic<String, Long> inputTopic = testDriver.createInputTopic("input-topic", stringSerde.serializer(), longSerde.serializer());
      inputTopic.pipeInput("key", 42L);
      

      To verify the output, you can use TestOutputTopic where you configure the topic and the corresponding deserializers during initialization. It offers helper methods to read only certain parts of the result records or the collection of records. For example, you can validate returned KeyValue with standard assertions if you only care about the key and value, but not the timestamp of the result record.

      TestOutputTopic<String, Long> outputTopic = testDriver.createOutputTopic("output-topic", stringSerde.deserializer(), longSerde.deserializer());
      assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("key", 42L)));
      

      TopologyTestDriver supports punctuations, too. Event-time punctuations are triggered automatically based on the processed records’ timestamps. Wall-clock-time punctuations can also be triggered by advancing the test driver’s wall-clock-time (the driver mocks wall-clock-time internally to give users control over it).

      testDriver.advanceWallClockTime(Duration.ofSeconds(20));
      

      Additionally, you can access state stores via the test driver before or after a test. Accessing stores before a test is useful to pre-populate a store with some initial values. After data was processed, expected updates to the store can be verified.

      KeyValueStore store = testDriver.getKeyValueStore("store-name");
      

      Note that you should always close the test driver at the end to make sure all resources are released properly.

      testDriver.close();
      

      Example

      The following example demonstrates how to use the test driver and helper classes. The example creates a topology that computes the maximum value per key using a key-value store. During processing, no output is generated; only the store is updated. Output is only sent downstream based on event-time and wall-clock punctuations.

      private TopologyTestDriver testDriver;
      private TestInputTopic<String, Long> inputTopic;
      private TestOutputTopic<String, Long> outputTopic;
      private KeyValueStore<String, Long> store;
      
      private Serde<String> stringSerde = new Serdes.StringSerde();
      private Serde<Long> longSerde = new Serdes.LongSerde();
      
      @Before
      public void setup() {
          Topology topology = new Topology();
          topology.addSource("sourceProcessor", "input-topic");
          topology.addProcessor("aggregator", new CustomMaxAggregatorSupplier(), "sourceProcessor");
          topology.addStateStore(
              Stores.keyValueStoreBuilder(
                  Stores.inMemoryKeyValueStore("aggStore"),
                  Serdes.String(),
                  Serdes.Long()).withLoggingDisabled(), // need to disable logging to allow store pre-populating
              "aggregator");
          topology.addSink("sinkProcessor", "result-topic", "aggregator");
      
          // setup test driver
          Properties props = new Properties();
          props.setProperty(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
          props.setProperty(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass().getName());
          testDriver = new TopologyTestDriver(topology, props);
      
          // setup test topics
          inputTopic = testDriver.createInputTopic("input-topic", stringSerde.serializer(), longSerde.serializer());
          outputTopic = testDriver.createOutputTopic("result-topic", stringSerde.deserializer(), longSerde.deserializer());
      
          // pre-populate store
          store = testDriver.getKeyValueStore("aggStore");
          store.put("a", 21L);
      }
      
      @After
      public void tearDown() {
          testDriver.close();
      }
      
      @Test
      public void shouldFlushStoreForFirstInput() {
          inputTopic.pipeInput("a", 1L);
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("a", 21L)));
          assertThat(outputTopic.isEmpty(), is(true));
      }
      
      @Test
      public void shouldNotUpdateStoreForSmallerValue() {
          inputTopic.pipeInput("a", 1L);
          assertThat(store.get("a"), equalTo(21L));
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("a", 21L)));
          assertThat(outputTopic.isEmpty(), is(true));
      }
      
      @Test
      public void shouldUpdateStoreForLargerValue() {
          inputTopic.pipeInput("a", 42L);
          assertThat(store.get("a"), equalTo(42L));
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("a", 42L)));
          assertThat(outputTopic.isEmpty(), is(true));
      }
      
      @Test
      public void shouldUpdateStoreForNewKey() {
          inputTopic.pipeInput("b", 21L);
          assertThat(store.get("b"), equalTo(21L));
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("a", 21L)));
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("b", 21L)));
          assertThat(outputTopic.isEmpty(), is(true));
      }
      
      @Test
      public void shouldPunctuateIfEventTimeAdvances() {
          final Instant recordTime = Instant.now();
          inputTopic.pipeInput("a", 1L,  recordTime);
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("a", 21L)));
      
          inputTopic.pipeInput("a", 1L,  recordTime);
          assertThat(outputTopic.isEmpty(), is(true));
      
          inputTopic.pipeInput("a", 1L, recordTime.plusSeconds(10L));
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("a", 21L)));
          assertThat(outputTopic.isEmpty(), is(true));
      }
      
      @Test
      public void shouldPunctuateIfWallClockTimeAdvances() {
          testDriver.advanceWallClockTime(Duration.ofSeconds(60));
          assertThat(outputTopic.readKeyValue(), equalTo(new KeyValue<>("a", 21L)));
          assertThat(outputTopic.isEmpty(), is(true));
      }
      
      public class CustomMaxAggregatorSupplier implements ProcessorSupplier<String, Long, String, Long> {
          @Override
          public Processor<String, Long, String, Long> get() {
              return new CustomMaxAggregator();
          }
      }
      
      public class CustomMaxAggregator implements Processor<String, Long, String, Long> {
          ProcessorContext<String, Long> context;
          private KeyValueStore<String, Long> store;
      
          @Override
          public void init(ProcessorContext<String, Long> context) {
              this.context = context;
              context.schedule(Duration.ofSeconds(60), PunctuationType.WALL_CLOCK_TIME, this::flushStore);
              context.schedule(Duration.ofSeconds(10), PunctuationType.STREAM_TIME, this::flushStore);
              store = context.getStateStore("aggStore");
          }
      
          @Override
          public void process(Record<String, Long> record) {
              Long oldValue = store.get(record.key());
              if (oldValue == null || record.value() > oldValue) {
                  store.put(record.key(), record.value());
              }
          }
      
          private void flushStore(long timestamp) {
              // close the iterator via try-with-resources to reclaim resources
              try (KeyValueIterator<String, Long> it = store.all()) {
                  while (it.hasNext()) {
                      KeyValue<String, Long> next = it.next();
                      context.forward(new Record<>(next.key, next.value, timestamp));
                  }
              }
          }
      
          @Override
          public void close() {}
      }
      

      Unit Testing Processors

      If you write a Processor, you will want to test it.

      Because the Processor forwards its results to the context rather than returning them, unit testing requires a mocked context capable of capturing forwarded data for inspection. For this reason, we provide a MockProcessorContext in test-utils.

      Construction

      To begin with, instantiate your processor and initialize it with the mock context:

      final Processor<String, String, String, Long> processorUnderTest = ...;
      final MockProcessorContext<String, Long> context = new MockProcessorContext<>();
      processorUnderTest.init(context);
      

      If you need to pass configuration to your processor or set the default serdes, you can create the mock with config:

      final Properties props = new Properties();
      props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
      props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
      props.put("some.other.config", "some config value");
      final MockProcessorContext<String, Long> context = new MockProcessorContext<>(props);
      

      Captured data

      The mock will capture any values that your processor forwards. You can make assertions on them:

      processorUnderTest.process(new Record<>("key", "value", 0L));
      
      final Iterator<CapturedForward<? extends String, ? extends Long>> forwarded = context.forwarded().iterator();
      assertEquals(forwarded.next().record(), new Record<>(..., ...));
      assertFalse(forwarded.hasNext());
      
      // you can reset forwards to clear the captured data. This may be helpful in constructing longer scenarios.
      context.resetForwards();
      
      assertEquals(context.forwarded().size(), 0);
      

      If your processor forwards to specific child processors, you can query the context for captured data by child name:

      final List<CapturedForward<? extends String, ? extends Long>> captures = context.forwarded("childProcessorName");
      

      The mock also captures whether your processor has called commit() on the context:

      assertTrue(context.committed());
      
      // commit captures can also be reset.
      context.resetCommit();
      
      assertFalse(context.committed());
      

      Setting record metadata

      In case your processor logic depends on the record metadata (topic, partition, offset), you can set them on the context:

      context.setRecordMetadata("topicName", /*partition*/ 0, /*offset*/ 0L);
      

      Once these are set, the context will continue returning the same values, until you set new ones.

      State stores

      In case your punctuator is stateful, the mock context allows you to register state stores. You’re encouraged to use a simple in-memory store of the appropriate type (KeyValue, Windowed, or Session), since the mock context does not manage changelogs, state directories, etc.

      final KeyValueStore<String, Integer> store =
          Stores.keyValueStoreBuilder(
                  Stores.inMemoryKeyValueStore("myStore"),
                  Serdes.String(),
                  Serdes.Integer()
              )
              .withLoggingDisabled() // Changelog is not supported by MockProcessorContext.
              .build();
      store.init(context.getStateStoreContext(), store);
      context.getStateStoreContext().register(store, /*parameter unused in mock*/ null);
      

      Verifying punctuators

      Processors can schedule punctuators to handle periodic tasks. The mock context does not automatically execute punctuators, but it does capture them to allow you to unit test them as well:

      final MockProcessorContext.CapturedPunctuator capturedPunctuator = context.scheduledPunctuators().get(0);
      final long interval = capturedPunctuator.getIntervalMs();
      final PunctuationType type = capturedPunctuator.getType();
      final boolean cancelled = capturedPunctuator.cancelled();
      final Punctuator punctuator = capturedPunctuator.getPunctuator();
      punctuator.punctuate(/*timestamp*/ 0L);
      

      If you need to write tests involving automatic firing of scheduled punctuators, we recommend creating a simple topology with your processor and using the TopologyTestDriver.


      9.7.8 - Interactive Queries

      Interactive Queries

Interactive queries allow you to leverage the state of your application from outside your application. Kafka Streams enables your applications to be queryable.

      Table of Contents

      • Querying local state stores for an app instance
        • Querying local key-value stores
        • Querying local window stores
        • Querying local custom state stores
      • Querying remote state stores for the entire app
        • Adding an RPC layer to your application
        • Exposing the RPC endpoints of your application
        • Discovering and accessing application instances and their local state stores
      • Demo applications

      The full state of your application is typically split across many distributed instances of your application, and across many state stores that are managed locally by these application instances.

      There are local and remote components to interactively querying the state of your application.

Local state : An application instance can query the locally managed portion of the state and directly query its own local state stores. You can use the corresponding local data in other parts of your application code, as long as it doesn’t require calling the Kafka Streams API. Querying state stores is always read-only to guarantee that the underlying state stores will never be mutated out-of-band (e.g., you cannot add new entries). State stores should only be mutated by the corresponding processor topology and the input data it operates on. For more information, see Querying local state stores for an app instance.

Remote state : To query the full state of your application, you must connect the various fragments of the state, including:

      • query local state stores
      • discover all running instances of your application in the network and their state stores
      • communicate with these instances over the network (e.g., an RPC layer)

      Connecting these fragments enables communication between instances of the same app and communication from other applications for interactive queries. For more information, see Querying remote state stores for the entire app.

Kafka Streams natively provides all of the required functionality for interactively querying the state of your application, except for the network layer needed to expose the full state of your application via interactive queries: to allow application instances to communicate over the network, you must add a Remote Procedure Call (RPC) layer to your application (e.g., a REST API).

      This table shows the Kafka Streams native communication support for various procedures.

Procedure                                                  | Application instance | Entire application
Query local state stores of an app instance                | Supported            | Supported
Make an app instance discoverable to others                | Supported            | Supported
Discover all running app instances and their state stores  | Supported            | Supported
Communicate with app instances over the network (RPC)      | Supported            | Not supported (you must configure)

      Querying local state stores for an app instance

      A Kafka Streams application typically runs on multiple instances. The state that is locally available on any given instance is only a subset of the application’s entire state. Querying the local stores on an instance will only return data locally available on that particular instance.

      The method KafkaStreams#store(...) finds an application instance’s local state stores by name and type. Note that interactive queries are not supported for versioned state stores at this time.

      Every application instance can directly query any of its local state stores.

      The name of a state store is defined when you create the store. You can create the store explicitly by using the Processor API or implicitly by using stateful operations in the DSL.

      The type of a state store is defined by QueryableStoreType. You can access the built-in types via the class QueryableStoreTypes. Kafka Streams currently has two built-in types:

      • A key-value store QueryableStoreTypes#keyValueStore(), see Querying local key-value stores.
      • A window store QueryableStoreTypes#windowStore(), see Querying local window stores.

      You can also implement your own QueryableStoreType as described in section Querying local custom state stores.

      Note

      Kafka Streams materializes one state store per stream partition. This means your application will potentially manage many underlying state stores. The API enables you to query all of the underlying stores without having to know which partition the data is in.

      Querying local key-value stores

      To query a local key-value store, you must first create a topology with a key-value store. This example creates a key-value store named “CountsKeyValueStore”. This store will hold the latest count for any word that is found on the topic “word-count-input”.

      Properties  props = ...;
      StreamsBuilder builder = ...;
      KStream<String, String> textLines = ...;
      
      // Define the processing topology (here: WordCount)
      KGroupedStream<String, String> groupedByWord = textLines
  .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
        .groupBy((key, word) -> word, Grouped.with(stringSerde, stringSerde));
      
      // Create a key-value store named "CountsKeyValueStore" for the all-time word counts
groupedByWord.count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("CountsKeyValueStore"));
      
      // Start an instance of the topology
KafkaStreams streams = new KafkaStreams(builder.build(), props);
      streams.start();
      

      After the application has started, you can get access to “CountsKeyValueStore” and then query it via the ReadOnlyKeyValueStore API:

      // Get the key-value store CountsKeyValueStore
      ReadOnlyKeyValueStore<String, Long> keyValueStore =
          streams.store("CountsKeyValueStore", QueryableStoreTypes.keyValueStore());
      
      // Get value by key
      System.out.println("count for hello:" + keyValueStore.get("hello"));
      
      // Get the values for a range of keys available in this application instance
      KeyValueIterator<String, Long> range = keyValueStore.range("all", "streams");
      while (range.hasNext()) {
        KeyValue<String, Long> next = range.next();
        System.out.println("count for " + next.key + ": " + next.value);
      }
      
// Get the values for all of the keys available in this application instance
KeyValueIterator<String, Long> all = keyValueStore.all();
while (all.hasNext()) {
  KeyValue<String, Long> next = all.next();
  System.out.println("count for " + next.key + ": " + next.value);
}
      

      You can also materialize the results of stateless operators by using the overloaded methods that take a queryableStoreName as shown in the example below:

      StreamsBuilder builder = ...;
      KTable<String, Integer> regionCounts = ...;
      
      // materialize the result of filtering corresponding to odd numbers
      // the "queryableStoreName" can be subsequently queried.
KTable<String, Integer> oddCounts = regionCounts.filter((region, count) -> (count % 2 != 0),
  Materialized.<String, Integer, KeyValueStore<Bytes, byte[]>>as("queryableStoreName"));

// do not materialize the result of filtering corresponding to even numbers
// this means that these results will not be materialized and cannot be queried.
KTable<String, Integer> evenCounts = regionCounts.filter((region, count) -> (count % 2 == 0));
      

      Querying local window stores

      A window store will potentially have many results for any given key because the key can be present in multiple windows. However, there is only one result per window for a given key.

      To query a local window store, you must first create a topology with a window store. This example creates a window store named “CountsWindowStore” that contains the counts for words in 1-minute windows.

      StreamsBuilder builder = ...;
      KStream<String, String> textLines = ...;
      
      // Define the processing topology (here: WordCount)
      KGroupedStream<String, String> groupedByWord = textLines
  .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
        .groupBy((key, word) -> word, Grouped.with(stringSerde, stringSerde));
      
      // Create a window state store named "CountsWindowStore" that contains the word counts for every minute
      groupedByWord.windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofSeconds(60)))
  .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("CountsWindowStore"));
      

      After the application has started, you can get access to “CountsWindowStore” and then query it via the ReadOnlyWindowStore API:

      // Get the window store named "CountsWindowStore"
      ReadOnlyWindowStore<String, Long> windowStore =
          streams.store("CountsWindowStore", QueryableStoreTypes.windowStore());
      
      // Fetch values for the key "world" for all of the windows available in this application instance.
      // To get *all* available windows we fetch windows from the beginning of time until now.
      Instant timeFrom = Instant.ofEpochMilli(0); // beginning of time = oldest available
      Instant timeTo = Instant.now(); // now (in processing-time)
      WindowStoreIterator<Long> iterator = windowStore.fetch("world", timeFrom, timeTo);
      while (iterator.hasNext()) {
        KeyValue<Long, Long> next = iterator.next();
        long windowTimestamp = next.key;
        System.out.println("Count of 'world' @ time " + windowTimestamp + " is " + next.value);
      }
      

      Querying local custom state stores

      Note

      Only the Processor API supports custom state stores.

      Before querying the custom state stores you must implement these interfaces:

      • Your custom state store must implement StateStore.
      • You must have an interface to represent the operations available on the store.
      • You must provide an implementation of StoreBuilder for creating instances of your store.
      • It is recommended that you provide an interface that restricts access to read-only operations. This prevents users of this API from mutating the state of your running Kafka Streams application out-of-band.

      The class/interface hierarchy for your custom store might look something like:

      public class MyCustomStore<K,V> implements StateStore, MyWriteableCustomStore<K,V> {
        // implementation of the actual store
      }
      
      // Read-write interface for MyCustomStore
      public interface MyWriteableCustomStore<K,V> extends MyReadableCustomStore<K,V> {
  void write(K key, V value);
      }
      
      // Read-only interface for MyCustomStore
      public interface MyReadableCustomStore<K,V> {
        V read(K key);
      }
      
      public class MyCustomStoreBuilder implements StoreBuilder {
  // implementation of the builder for MyCustomStore
      }
      

      To make this store queryable you must:

      • Provide an implementation of QueryableStoreType.
      • Provide a wrapper class that has access to all of the underlying instances of the store and is used for querying.

      Here is how to implement QueryableStoreType:

      public class MyCustomStoreType<K,V> implements QueryableStoreType<MyReadableCustomStore<K,V>> {
      
        // Only accept StateStores that are of type MyCustomStore
        public boolean accepts(final StateStore stateStore) {
    return stateStore instanceof MyCustomStore;
        }
      
        public MyReadableCustomStore<K,V> create(final StateStoreProvider storeProvider, final String storeName) {
      return new MyCustomStoreTypeWrapper<>(storeProvider, storeName, this);
        }
      
      }
      

      A wrapper class is required because each instance of a Kafka Streams application may run multiple stream tasks and manage multiple local instances of a particular state store. The wrapper class hides this complexity and lets you query a “logical” state store by name without having to know about all of the underlying local instances of that state store.

      When implementing your wrapper class you must use the StateStoreProvider interface to get access to the underlying instances of your store. StateStoreProvider#stores(String storeName, QueryableStoreType<T> queryableStoreType) returns a List of state stores with the given storeName and of the type as defined by queryableStoreType.

      Here is an example implementation of the wrapper:

      // We strongly recommended implementing a read-only interface
      // to restrict usage of the store to safe read operations!
      public class MyCustomStoreTypeWrapper<K,V> implements MyReadableCustomStore<K,V> {
      
        private final QueryableStoreType<MyReadableCustomStore<K, V>> customStoreType;
        private final String storeName;
        private final StateStoreProvider provider;
      
  public MyCustomStoreTypeWrapper(final StateStoreProvider provider,
                                    final String storeName,
                                    final QueryableStoreType<MyReadableCustomStore<K, V>> customStoreType) {
      
          // ... assign fields ...
        }
      
        // Implement a safe read method
        @Override
        public V read(final K key) {
    // Get all the stores with storeName and of customStoreType
    final List<MyReadableCustomStore<K, V>> stores = provider.stores(storeName, customStoreType);
    // Try and find the value for the given key
    final Optional<V> value = stores.stream()
        .map(store -> store.read(key))
        .filter(v -> v != null)
        .findFirst();
    // Return the value if it exists
    return value.orElse(null);
        }
      
      }
      

      You can now find and query your custom store:

      Topology topology = ...;
ProcessorSupplier processorSupplier = ...;

// Create a MyCustomStoreBuilder for the store named "the-custom-store"
MyCustomStoreBuilder customStoreBuilder = new MyCustomStoreBuilder("the-custom-store") //...;
      // Add the source topic
      topology.addSource("input", "inputTopic");
      // Add a custom processor that reads from the source topic
      topology.addProcessor("the-processor", processorSupplier, "input");
      // Connect your custom state store to the custom processor above
      topology.addStateStore(customStoreBuilder, "the-processor");
      
      KafkaStreams streams = new KafkaStreams(topology, config);
      streams.start();
      
      // Get access to the custom store
MyReadableCustomStore<String,String> store = streams.store(StoreQueryParameters.fromNameAndType("the-custom-store", new MyCustomStoreType<String,String>()));
      // Query the store
      String value = store.read("key");
      

      Querying remote state stores for the entire app

      To query remote states for the entire app, you must expose the application’s full state to other applications, including applications that are running on different machines.

      For example, you have a Kafka Streams application that processes user events in a multi-player video game, and you want to retrieve the latest status of each user directly and display it in a mobile app. Here are the required steps to make the full state of your application queryable:

      1. Add an RPC layer to your application so that the instances of your application can be interacted with via the network (e.g., a REST API, Thrift, a custom protocol, and so on). The instances must respond to interactive queries. You can follow the reference examples provided to get started.
      2. Expose the RPC endpoints of your application’s instances via the application.server configuration setting of Kafka Streams. Because RPC endpoints must be unique within a network, each instance has its own value for this configuration setting. This makes an application instance discoverable by other instances.
      3. In the RPC layer, discover remote application instances and their state stores and query locally available state stores to make the full state of your application queryable. The remote application instances can forward queries to other app instances if a particular instance lacks the local data to respond to a query. The locally available state stores can directly respond to queries.

Figure: Discover any running instances of the same application as well as the respective RPC endpoints they expose for interactive queries.

      Adding an RPC layer to your application

      There are many ways to add an RPC layer. The only requirements are that the RPC layer is embedded within the Kafka Streams application and that it exposes an endpoint that other application instances and applications can connect to.
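As an illustration only, the sketch below embeds a tiny HTTP endpoint next to the running KafkaStreams instance using the JDK's built-in com.sun.net.httpserver.HttpServer; the store name "word-count", the port, and the URL layout are assumptions, and any web framework could be used instead:

// `streams` is the already running KafkaStreams instance of this application.
final HttpServer server = HttpServer.create(new InetSocketAddress(4460), 0);
server.createContext("/word-count/", exchange -> {
    final String word = exchange.getRequestURI().getPath().substring("/word-count/".length());
    final ReadOnlyKeyValueStore<String, Long> store =
        streams.store(StoreQueryParameters.fromNameAndType("word-count", QueryableStoreTypes.keyValueStore()));
    final Long count = store.get(word);
    final byte[] body = String.valueOf(count).getBytes(StandardCharsets.UTF_8);
    exchange.sendResponseHeaders(count == null ? 404 : 200, body.length);
    exchange.getResponseBody().write(body);
    exchange.close();
});
server.start();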

      Exposing the RPC endpoints of your application

To enable remote state store discovery in a distributed Kafka Streams application, you must set the application.server configuration property in the application's config properties. The application.server property defines a unique host:port pair that points to the RPC endpoint of the respective instance of a Kafka Streams application. The value of this configuration property will vary across the instances of your application. When this property is set, Kafka Streams will keep track of the RPC endpoint information for every instance of an application, its state stores, and assigned stream partitions through instances of StreamsMetadata.

      Tip

      Consider leveraging the exposed RPC endpoints of your application for further functionality, such as piggybacking additional inter-application communication that goes beyond interactive queries.

      This example shows how to configure and run a Kafka Streams application that supports the discovery of its state stores.

      Properties props = new Properties();
      // Set the unique RPC endpoint of this application instance through which it
      // can be interactively queried.  In a real application, the value would most
      // probably not be hardcoded but derived dynamically.
      String rpcEndpoint = "host1:4460";
      props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, rpcEndpoint);
      // ... further settings may follow here ...
      
      StreamsBuilder builder = new StreamsBuilder();
      
KStream<String, String> textLines = builder.stream("word-count-input", Consumed.with(stringSerde, stringSerde));
      
      final KGroupedStream<String, String> groupedByWord = textLines
    .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
          .groupBy((key, word) -> word, Grouped.with(stringSerde, stringSerde));
      
      // This call to `count()` creates a state store named "word-count".
      // The state store is discoverable and can be queried interactively.
groupedByWord.count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("word-count"));
      
      // Start an instance of the topology
KafkaStreams streams = new KafkaStreams(builder.build(), props);
      streams.start();
      
      // Then, create and start the actual RPC service for remote access to this
      // application instance's local state stores.
      //
      // This service should be started on the same host and port as defined above by
      // the property `StreamsConfig.APPLICATION_SERVER_CONFIG`.  The example below is
      // fictitious, but we provide end-to-end demo applications (such as KafkaMusicExample)
      // that showcase how to implement such a service to get you started.
      MyRPCService rpcService = ...;
      rpcService.listenAt(rpcEndpoint);
      

      Discovering and accessing application instances and their local state stores

      The following methods return StreamsMetadata objects, which provide meta-information about application instances such as their RPC endpoint and locally available state stores.

      • KafkaStreams#allMetadata(): find all instances of this application
• KafkaStreams#allMetadataForStore(String storeName): find those application instances that manage local instances of the state store “storeName”
      • KafkaStreams#metadataForKey(String storeName, K key, Serializer<K> keySerializer): using the default stream partitioning strategy, find the one application instance that holds the data for the given key in the given state store
      • KafkaStreams#metadataForKey(String storeName, K key, StreamPartitioner<K, ?> partitioner): using partitioner, find the one application instance that holds the data for the given key in the given state store

      Attention

      If application.server is not configured for an application instance, then the above methods will not find any StreamsMetadata for it.

      For example, we can now find the StreamsMetadata for the state store named “word-count” that we defined in the code example shown in the previous section:

      KafkaStreams streams = ...;
      // Find all the locations of local instances of the state store named "word-count"
      Collection<StreamsMetadata> wordCountHosts = streams.allMetadataForStore("word-count");
      
      // For illustrative purposes, we assume using an HTTP client to talk to remote app instances.
      HttpClient http = ...;
      
      // Get the word count for word (aka key) 'alice': Approach 1
      //
      // We first find the one app instance that manages the count for 'alice' in its local state stores.
      StreamsMetadata metadata = streams.metadataForKey("word-count", "alice", Serdes.String().serializer());
      // Then, we query only that single app instance for the latest count of 'alice'.
      // Note: The RPC URL shown below is fictitious and only serves to illustrate the idea.  Ultimately,
      // the URL (or, in general, the method of communication) will depend on the RPC layer you opted to
      // implement.  Again, we provide end-to-end demo applications (such as KafkaMusicExample) that showcase
      // how to implement such an RPC layer.
      Long result = http.getLong("http://" + metadata.host() + ":" + metadata.port() + "/word-count/alice");
      
      // Get the word count for word (aka key) 'alice': Approach 2
      //
      // Alternatively, we could also choose (say) a brute-force approach where we query every app instance
      // until we find the one that happens to know about 'alice'.
      Optional<Long> result = streams.allMetadataForStore("word-count")
          .stream()
          .map(streamsMetadata -> {
        // Construct the (fictitious) full endpoint URL to query the current remote application instance
              String url = "http://" + streamsMetadata.host() + ":" + streamsMetadata.port() + "/word-count/alice";
              // Read and return the count for 'alice', if any.
              return http.getLong(url);
          })
          .filter(s -> s != null)
          .findFirst();
      

      At this point the full state of the application is interactively queryable:

      • You can discover the running instances of the application and the state stores they manage locally.
      • Through the RPC layer that was added to the application, you can communicate with these application instances over the network and query them for locally available state.
      • The application instances are able to serve such queries because they can directly query their own local state stores and respond via the RPC layer.
      • Collectively, this allows us to query the full state of the entire application.

      To see an end-to-end application with interactive queries, review the demo applications.


      9.7.9 - Memory Management

      Memory Management

      You can specify the total memory (RAM) size used for internal caching and compacting of records. This caching happens before the records are written to state stores or forwarded downstream to other nodes.

The record caches are implemented slightly differently in the DSL and Processor API.

      Table of Contents

      • Record caches in the DSL
      • Record caches in the Processor API
      • RocksDB
      • Other memory usage

      Record caches in the DSL

      You can specify the total memory (RAM) size of the record cache for an instance of the processing topology. It is leveraged by the following KTable instances:

      • Source KTable: KTable instances that are created via StreamsBuilder#table() or StreamsBuilder#globalTable().
      • Aggregation KTable: instances of KTable that are created as a result of aggregations.

      For such KTable instances, the record cache is used for:

      • Internal caching and compacting of output records before they are written by the underlying stateful processor node to its internal state stores.
      • Internal caching and compacting of output records before they are forwarded from the underlying stateful processor node to any of its downstream processor nodes.

      Use the following example to understand the behaviors with and without record caching. In this example, the input is a KStream<String, Integer> with the records <K,V>: <A, 1>, <D, 5>, <A, 20>, <A, 300>. The focus in this example is on the records with key == A.

      • An aggregation computes the sum of record values, grouped by key, for the input and returns a KTable<String, Integer>.
      * **Without caching** : a sequence of output records is emitted for key `A` that represent changes in the resulting aggregation table. The parentheses (`()`) denote changes, the left number is the new aggregate value and the right number is the old aggregate value: `<A, (1, null)>, <A, (21, 1)>, <A, (321, 21)>`.
      * **With caching** : a single output record is emitted for key `A` that would likely be compacted in the cache, leading to a single output record of `<A, (321, null)>`. This record is written to the aggregation's internal state store and forwarded to any downstream operations.
      

      The cache size is specified through the cache.max.bytes.buffering parameter, which is a global setting per processing topology:

      // Enable record cache of size 10 MB.
      Properties props = new Properties();
      props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
      

This parameter controls the number of bytes allocated for caching. Specifically, for a processor topology instance with T threads and C bytes allocated for caching, each thread will have an even share of C/T bytes to construct its own cache and use as it sees fit among its tasks. For example, with C = 10 MB and T = 5 stream threads, each thread gets a 2 MB cache. This means that there are as many caches as there are threads, but no sharing of caches across threads happens.

      The basic API for the cache is made of put() and get() calls. Records are evicted using a simple LRU scheme after the cache size is reached. The first time a keyed record R1 = <K1, V1> finishes processing at a node, it is marked as dirty in the cache. Any other keyed record R2 = <K1, V2> with the same key K1 that is processed on that node during that time will overwrite <K1, V1>, this is referred to as “being compacted”. This has the same effect as Kafka’s log compaction, but happens earlier, while the records are still in memory, and within your client-side application, rather than on the server-side (i.e. the Kafka broker). After flushing, R2 is forwarded to the next processing node and then written to the local state store.

      The semantics of caching is that data is flushed to the state store and forwarded to the next downstream processor node whenever the earliest of commit.interval.ms or cache.max.bytes.buffering (cache pressure) hits. Both commit.interval.ms and cache.max.bytes.buffering are global parameters. As such, it is not possible to specify different parameters for individual nodes.

      Here are example settings for both parameters based on desired scenarios.

• To turn off caching, the cache size can be set to zero:

  // Disable record cache
  Properties props = new Properties();
  props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);

• To enable caching but still have an upper bound on how long records will be cached, you can set the commit interval. In this example, it is set to 1000 milliseconds:

  // Enable record cache of size 10 MB.
  Properties props = new Properties();
  props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);
  // Set commit interval to 1 second.
  props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);

      The effect of these two configurations is described in the figure below. The records are shown using 4 keys: blue, red, yellow, and green. Assume the cache has space for only 3 keys.

      • When the cache is disabled (a), all of the input records will be output.

      • When the cache is enabled (b):

      * Most records are output at the end of commit intervals (e.g., at `t1` a single blue record is output, which is the final over-write of the blue key up to that time).
      * Some records are output because of cache pressure (i.e. before the end of a commit interval). For example, see the red record before `t2`. With smaller cache sizes we expect cache pressure to be the primary factor that dictates when records are output. With large cache sizes, the commit interval will be the primary factor.
      * The total number of records output has been reduced from 15 to 8.
      

      Record caches in the Processor API

      You can specify the total memory (RAM) size of the record cache for an instance of the processing topology. It is used for internal caching and compacting of output records before they are written from a stateful processor node to its state stores.

      The record cache in the Processor API does not cache or compact any output records that are being forwarded downstream. This means that all downstream processor nodes can see all records, whereas the state stores see a reduced number of records. This does not impact correctness of the system, but is a performance optimization for the state stores. For example, with the Processor API you can store a record in a state store while forwarding a different value downstream.
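A minimal sketch of that last point, assuming a store named "Counts" as in the builder example that follows; the "store the latest value, forward the delta" semantics are made up for illustration:

public class LastValueAndDeltaProcessor implements Processor<String, Long, String, Long> {
    private ProcessorContext<String, Long> context;
    private KeyValueStore<String, Long> store;

    @Override
    public void init(final ProcessorContext<String, Long> context) {
        this.context = context;
        this.store = context.getStateStore("Counts");
    }

    @Override
    public void process(final Record<String, Long> record) {
        final Long previous = store.get(record.key());
        // Store the latest value in the state store (this write goes through the record cache) ...
        store.put(record.key(), record.value());
        // ... but forward a different value (the delta) downstream, which is not cached.
        final long delta = record.value() - (previous == null ? 0L : previous);
        context.forward(record.withValue(delta));
    }
}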

Following from the example first shown in section State Stores, to disable caching you can add a withCachingDisabled call (note that caches are enabled by default; there is also an explicit withCachingEnabled call, which the example below uses).

      StoreBuilder countStoreBuilder =
        Stores.keyValueStoreBuilder(
          Stores.persistentKeyValueStore("Counts"),
          Serdes.String(),
          Serdes.Long())
        .withCachingEnabled();
      

      Record caches are not supported for versioned state stores.

To avoid reading stale data, you can flush() the store before creating the iterator. Note that flushing too often can lead to performance degradation if RocksDB is used, so we advise avoiding manual flushes in general.

      RocksDB

      Each instance of RocksDB allocates off-heap memory for a block cache, index and filter blocks, and memtable (write buffer). Critical configs (for RocksDB version 4.1.0) include block_cache_size, write_buffer_size and max_write_buffer_number. These can be specified through the rocksdb.config.setter configuration.

Also, we recommend changing RocksDB’s default memory allocator, because the default allocator may lead to increased memory consumption. To change the memory allocator to jemalloc, you need to set the environment variable LD_PRELOAD before you start your Kafka Streams application:

      # example: install jemalloc (on Debian)
      $ apt install -y libjemalloc-dev
      # set LD_PRELOAD before you start your Kafka Streams application
      $ export LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libjemalloc.so"
      

      As of 2.3.0 the memory usage across all instances can be bounded, limiting the total off-heap memory of your Kafka Streams application. To do so you must configure RocksDB to cache the index and filter blocks in the block cache, limit the memtable memory through a shared WriteBufferManager and count its memory against the block cache, and then pass the same Cache object to each instance. See RocksDB Memory Usage for details. An example RocksDBConfigSetter implementing this is shown below:

      public static class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
      
   private static org.rocksdb.Cache cache = new org.rocksdb.LRUCache(TOTAL_OFF_HEAP_MEMORY, -1, false, INDEX_FILTER_BLOCK_RATIO); // see footnote 1
         private static org.rocksdb.WriteBufferManager writeBufferManager = new org.rocksdb.WriteBufferManager(TOTAL_MEMTABLE_MEMORY, cache);
      
         @Override
         public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
      
           BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
      
            // These three options in combination will limit the memory used by RocksDB to the size passed to the block cache (TOTAL_OFF_HEAP_MEMORY)
           tableConfig.setBlockCache(cache);
           tableConfig.setCacheIndexAndFilterBlocks(true);
           options.setWriteBufferManager(writeBufferManager);
      
            // These options are recommended to be set when bounding the total memory
     tableConfig.setCacheIndexAndFilterBlocksWithHighPriority(true); // see footnote 2
           tableConfig.setPinTopLevelIndexAndFilter(true);
     tableConfig.setBlockSize(BLOCK_SIZE); // see footnote 3
           options.setMaxWriteBufferNumber(N_MEMTABLES);
           options.setWriteBufferSize(MEMTABLE_SIZE);
      
           options.setTableFormatConfig(tableConfig);
         }
      
         @Override
         public void close(final String storeName, final Options options) {
           // Cache and WriteBufferManager should not be closed here, as the same objects are shared by every store instance.
         }
      }
      

      1. INDEX_FILTER_BLOCK_RATIO can be used to set a fraction of the block cache to set aside for “high priority” (aka index and filter) blocks, preventing them from being evicted by data blocks. The boolean parameter in the cache constructor lets you control whether the cache should enforce a strict memory limit by failing the read or iteration in the rare cases where it might go larger than its capacity. See the full signature of the LRUCache constructor here.
      2. This must be set in order for INDEX_FILTER_BLOCK_RATIO to take effect (see footnote 1) as described in the RocksDB docs
      3. You may want to modify the default block size per these instructions from the RocksDB docs. A larger block size means index blocks will be smaller, but the cached data blocks may contain more cold data that would otherwise be evicted.

      Note: While we recommend setting at least the above configs, the specific options that yield the best performance are workload dependent and you should consider experimenting with these to determine the best choices for your specific use case. Keep in mind that the optimal configs for one app may not apply to one with a different topology or input topic. In addition to the recommended configs above, you may want to consider using partitioned index filters as described by the RocksDB docs.

      Other memory usage

      There are other modules inside Apache Kafka that allocate memory during runtime. They include the following:

      • Producer buffering, managed by the producer config buffer.memory.
      • Consumer buffering, currently not strictly managed, but can be indirectly controlled by fetch size, i.e., fetch.max.bytes and fetch.max.wait.ms.
      • Both producer and consumer also have separate TCP send / receive buffers that are not counted as the buffering memory. These are controlled by the send.buffer.bytes / receive.buffer.bytes configs.
      • Deserialized objects buffering: after consumer.poll() returns records, they will be deserialized to extract timestamp and buffered in the streams space. Currently this is only indirectly controlled by buffered.records.per.partition.
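
For reference, here is a sketch of how these client-level buffers could be tuned from a single Streams configuration; the values are illustrative only, not recommendations:

Properties props = new Properties();
// Producer record buffering (buffer.memory).
props.put(StreamsConfig.producerPrefix(ProducerConfig.BUFFER_MEMORY_CONFIG), 32 * 1024 * 1024L);
// Consumer fetch buffering (fetch.max.bytes / fetch.max.wait.ms).
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.FETCH_MAX_BYTES_CONFIG), 50 * 1024 * 1024);
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG), 500);
// TCP send / receive buffers (send.buffer.bytes / receive.buffer.bytes).
props.put(StreamsConfig.SEND_BUFFER_CONFIG, 128 * 1024);
props.put(StreamsConfig.RECEIVE_BUFFER_CONFIG, 64 * 1024);
// Deserialized-record buffering inside Streams (buffered.records.per.partition).
props.put(StreamsConfig.BUFFERED_RECORDS_PER_PARTITION_CONFIG, 1000);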

      Tip

Iterators should be closed explicitly to release resources: store iterators (e.g., KeyValueIterator and WindowStoreIterator) must be closed explicitly once you are done with them to release resources such as open file handles and in-memory read buffers, or you can use a try-with-resources statement (available since JDK 7), since these iterator classes implement Closeable.

Otherwise, your streams application’s memory usage keeps increasing while it runs, until it eventually hits an OutOfMemoryError.
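
For example, using try-with-resources with the key-value store from the interactive-queries example above:

// The iterator is closed automatically at the end of the try block, releasing
// any file handles and read buffers held open by the underlying store.
try (final KeyValueIterator<String, Long> range = keyValueStore.range("all", "streams")) {
    while (range.hasNext()) {
        final KeyValue<String, Long> next = range.next();
        System.out.println("count for " + next.key + ": " + next.value);
    }
}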


      9.7.10 - Running Streams Applications

      Running Streams Applications

      You can run Java applications that use the Kafka Streams library without any additional configuration or requirements. Kafka Streams also provides the ability to receive notification of the various states of the application. The ability to monitor the runtime status is discussed in the monitoring guide.

      Table of Contents

      • Starting a Kafka Streams application
      • Elastic scaling of your application
        • Adding capacity to your application
        • Removing capacity from your application
        • State restoration during workload rebalance
        • Determining how many application instances to run

      Starting a Kafka Streams application

      You can package your Java application as a fat JAR file and then start the application like this:

      # Start the application in class `com.example.MyStreamsApp`
      # from the fat JAR named `path-to-app-fatjar.jar`.
      $ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp
      

      When you start your application you are launching a Kafka Streams instance of your application. You can run multiple instances of your application. A common scenario is that there are multiple instances of your application running in parallel. For more information, see Parallelism Model.

When the application instance starts running, the defined processor topology will be initialized as one or more stream tasks. If the processor topology defines any state stores, these are also constructed during the initialization period. For more information, see the State restoration during workload rebalance section.
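
As mentioned above, Kafka Streams can also notify you about the runtime state of the application, for example when an instance has finished initialization and transitions from REBALANCING to RUNNING. A minimal sketch using a state listener (the listener must be registered before calling start()):

KafkaStreams streams = new KafkaStreams(topology, props);

// Called on every state transition of this Kafka Streams instance.
streams.setStateListener((newState, oldState) -> {
    if (newState == KafkaStreams.State.RUNNING && oldState == KafkaStreams.State.REBALANCING) {
        System.out.println("Instance is initialized and processing records");
    }
});

streams.start();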

      Elastic scaling of your application

Kafka Streams makes your stream processing applications elastic and scalable. You can add and remove processing capacity dynamically during application runtime without any downtime or data loss. This makes your applications resilient in the face of failures and allows you to perform maintenance as needed (e.g. rolling upgrades).

      For more information about this elasticity, see the Parallelism Model section. Kafka Streams leverages the Kafka group management functionality, which is built right into the Kafka wire protocol. It is the foundation that enables the elasticity of Kafka Streams applications: members of a group coordinate and collaborate jointly on the consumption and processing of data in Kafka. Additionally, Kafka Streams provides stateful processing and allows for fault-tolerant state in environments where application instances may come and go at any time.

      Adding capacity to your application

      If you need more processing capacity for your stream processing application, you can simply start another instance of your stream processing application, e.g. on another machine, in order to scale out. The instances of your application will become aware of each other and automatically begin to share the processing work. More specifically, what will be handed over from the existing instances to the new instances is (some of) the stream tasks that have been run by the existing instances. Moving stream tasks from one instance to another results in moving the processing work plus any internal state of these stream tasks (the state of a stream task will be re-created in the target instance by restoring the state from its corresponding changelog topic).

      The various instances of your application each run in their own JVM process, which means that each instance can leverage all the processing capacity that is available to their respective JVM process (minus the capacity that any non-Kafka-Streams part of your application may be using). This explains why running additional instances will grant your application additional processing capacity. The exact capacity you will be adding by running a new instance depends of course on the environment in which the new instance runs: available CPU cores, available main memory and Java heap space, local storage, network bandwidth, and so on. Similarly, if you stop any of the running instances of your application, then you are removing and freeing up the respective processing capacity.

      Before adding capacity: only a single instance of your Kafka Streams application is running. At this point the corresponding Kafka consumer group of your application contains only a single member (this instance). All data is being read and processed by this single instance.

      After adding capacity: now two additional instances of your Kafka Streams application are running, and they have automatically joined the application’s Kafka consumer group for a total of three current members. These three instances are automatically splitting the processing work between each other. The splitting is based on the Kafka topic partitions from which data is being read.

      Removing capacity from your application

To remove processing capacity, you can stop running stream processing application instances (e.g., shut down two of the four instances). The stopped instances automatically leave the application’s consumer group, and the remaining instances automatically take over the stream tasks that were run by the stopped instances. Moving stream tasks from one instance to another results in moving the processing work plus any internal state of these stream tasks; the state of a stream task is recreated in the target instance from its changelog topic.

      State restoration during workload rebalance

When a task is migrated, the task processing state is fully restored before the application instance resumes processing, which guarantees correct processing results. In Kafka Streams, state restoration is usually done by replaying the corresponding changelog topic to reconstruct the state store. To minimize the latency of this changelog-based restoration, you can configure replicated local state stores (standby replicas) via num.standby.replicas. When a stream task is initialized or re-initialized on the application instance, its state store is restored like this:

      • If no local state store exists, the changelog is replayed from the earliest to the current offset. This reconstructs the local state store to the most recent snapshot.
      • If a local state store exists, the changelog is replayed from the previously checkpointed offset. The changes are applied and the state is restored to the most recent snapshot. This method takes less time because it is applying a smaller portion of the changelog.

      For more information, see Standby Replicas.

      As of version 2.6, Streams will now do most of a task’s restoration in the background through warmup replicas. These will be assigned to instances that need to restore a lot of state for a task. A stateful active task will only be assigned to an instance once its state is within the configured acceptable.recovery.lag, if one exists. This means that most of the time, a task migration will not result in downtime for that task. It will remain active on the instance that’s already caught up, while the instance that it’s being migrated to works on restoring the state. Streams will regularly probe for warmup tasks that have finished restoring and transition them to active tasks when ready.

      Note, the one exception to this task availability is if none of the instances have a caught up version of that task. In that case, we have no choice but to assign the active task to an instance that is not caught up and will have to block further processing on restoration of the task’s state from the changelog. If high availability is important for your application, you are highly recommended to enable standbys.
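
A sketch of the configuration knobs mentioned above; the values are illustrative only:

Properties props = new Properties();
// Keep one hot-standby copy of each state store for faster failover.
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
// A task is only assigned as active to an instance whose local state lags by at
// most this many offsets; otherwise a warmup replica restores the state first.
props.put(StreamsConfig.ACCEPTABLE_RECOVERY_LAG_CONFIG, 10000L);
// How many extra warmup replicas may be restoring state at the same time.
props.put(StreamsConfig.MAX_WARMUP_REPLICAS_CONFIG, 2);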

      Determining how many application instances to run

The parallelism of a Kafka Streams application is primarily determined by how many partitions the input topics have. For example, if your application reads from a single topic that has ten partitions, then you can run up to ten instances of your application. You can run further instances, but these will be idle.

      The number of topic partitions is the upper limit for the parallelism of your Kafka Streams application and for the number of running instances of your application.

To achieve balanced workload processing across application instances and to prevent processing hotspots, you should distribute data and processing workloads:

• Data should be equally distributed across topic partitions. For example, if two topic partitions each have 1 million messages, this is better than one partition holding 2 million messages and the other none.
      • Processing workload should be equally distributed across topic partitions. For example, if the time to process messages varies widely, then it is better to spread the processing-intensive messages across partitions rather than storing these messages within the same partition.


      9.7.11 - Managing Streams Application Topics

      Managing Streams Application Topics

A Kafka Streams application continuously reads from Kafka topics, processes the read data, and then writes the processing results back into Kafka topics. The application may also auto-create other Kafka topics in the Kafka brokers, for example state store changelog topics. This section describes the differences between these topic types and how to manage the topics and your applications.

      Kafka Streams distinguishes between user topics and internal topics.

      User topics

      User topics exist externally to an application and are read from or written to by the application, including:

Input topics : Topics that are specified via source processors in the application’s topology; e.g. via StreamsBuilder#stream(), StreamsBuilder#table() and Topology#addSource().

Output topics : Topics that are specified via sink processors in the application’s topology; e.g. via KStream#to(), KTable#to() and Topology#addSink().

User topics must be created and manually managed ahead of time (e.g., via the topic tools). If user topics are shared among multiple applications for reading and writing, the application users must coordinate topic management. If user topics are centrally managed, application users would not need to manage topics themselves but simply obtain access to them.

      Note

      You should not use the auto-create topic feature on the brokers to create user topics, because:

      • Auto-creation of topics may be disabled in your Kafka cluster.
• Auto-creation automatically applies the default topic settings, such as the replication factor, and these default settings might not be what you want for certain output topics (the broker feature itself is controlled by auto.create.topics.enable in the Kafka broker configuration).

      Internal topics

      Internal topics are used internally by the Kafka Streams application while executing, for example the changelog topics for state stores. These topics are created by the application and are only used by that stream application.

If security is enabled on the Kafka brokers, you must grant the underlying clients admin permissions so that they can create internal topics. For more information, see Streams Security.

      Note

      The internal topics follow the naming convention <application.id>-<operatorName>-<suffix>, but this convention is not guaranteed for future releases.

      The following settings apply to the default configuration for internal topics:

      • For all internal topics, message.timestamp.type is set to CreateTime.
      • For internal repartition topics, the compaction policy is delete and the retention time is -1 (infinite).
      • For internal changelog topics for key-value stores, the compaction policy is compact.
      • For internal changelog topics for windowed key-value stores, the compaction policy is delete,compact. The retention time is set to 24 hours plus your setting for the windowed store.
• For internal changelog topics for versioned state stores, the cleanup policy is compact, and min.compaction.lag.ms is set to 24 hours plus the store’s historyRetentionMs value.
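
If these defaults do not fit your application, you can override topic-level configs for the internal topics that Kafka Streams creates by prefixing them with StreamsConfig.topicPrefix() (equivalently, the "topic." string prefix); the retention value below is just an example:

Properties props = new Properties();
// Applied to the internal topics created by this application, e.g. raise the
// retention of windowed changelog topics to 7 days.
props.put(StreamsConfig.topicPrefix(TopicConfig.RETENTION_MS_CONFIG), 7 * 24 * 60 * 60 * 1000L);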


      9.7.12 - Streams Security

      Streams Security

      Table of Contents

      • Required ACL setting for secure Kafka clusters
      • Security example

Kafka Streams natively integrates with Kafka’s security features and supports all of the client-side security features in Kafka. Streams leverages the Java Producer and Consumer API.

      To secure your Stream processing applications, configure the security settings in the corresponding Kafka producer and consumer clients, and then specify the corresponding configuration settings in your Kafka Streams application.

      Kafka supports cluster encryption and authentication, including a mix of authenticated and unauthenticated, and encrypted and non-encrypted clients. Using security is optional.

Here are a few relevant client-side security features:

Encrypt data-in-transit between your applications and Kafka brokers : You can enable the encryption of the client-server communication between your applications and the Kafka brokers. For example, you can configure your applications to always use encryption when reading and writing data to and from Kafka. This is critical when reading and writing data across security domains such as internal network, public internet, and partner networks.

Client authentication : You can enable client authentication for connections from your application to Kafka brokers. For example, you can define that only specific applications are allowed to connect to your Kafka cluster.

Client authorization : You can enable client authorization of read and write operations by your applications. For example, you can define that only specific applications are allowed to read from a Kafka topic. You can also restrict write access to Kafka topics to prevent data pollution or fraudulent activities.

      For more information about the security features in Apache Kafka, see Kafka Security.

      Required ACL setting for secure Kafka clusters

      Kafka clusters can use ACLs to control access to resources (like the ability to create topics), and for such clusters each client, including Kafka Streams, is required to authenticate as a particular user in order to be authorized with appropriate access. In particular, when Streams applications are run against a secured Kafka cluster, the principal running the application must have the ACL set so that the application has the permissions to create, read and write internal topics.

      To avoid providing this permission to your application, you can create the required internal topics manually. If the internal topics exist, Kafka Streams will not try to recreate them. Note, that the internal repartition and changelog topics must be created with the correct number of partitions–otherwise, Kafka Streams will fail on startup. The topics must be created with the same number of partitions as your input topic, or if there are multiple topics, the maximum number of partitions across all input topics. Additionally, changelog topics must be created with log compaction enabled–otherwise, your application might lose data. For changelog topics for windowed KTables, apply “delete,compact” and set the retention time based on the corresponding store retention time. To avoid premature deletion, add a delta to the store retention time. By default, Kafka Streams adds 24 hours to the store retention time. You can find out more about the names of the required internal topics via Topology#describe(). All internal topics follow the naming pattern <application.id>-<operatorName>-<suffix> where the suffix is either repartition or changelog. Note, that there is no guarantee about this naming pattern in future releases–it’s not part of the public API.
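
For example, here is a sketch of creating one such changelog topic manually with the Admin client; the topic name (taken from Topology#describe()), partition count, and replication factor are assumptions that must match your application and its input topics:

// adminProps contains bootstrap.servers and the required security settings.
try (final Admin admin = Admin.create(adminProps)) {
    final NewTopic changelog =
        new NewTopic("your.application.id-Counts-changelog", /*numPartitions*/ 10, /*replicationFactor*/ (short) 3)
            .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));
    admin.createTopics(List.of(changelog)).all().get();
}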

Since all internal topics as well as the embedded consumer group name are prefixed with the application ID, it is recommended to use ACLs on a prefixed resource pattern to allow the client to manage all topics and consumer groups that start with this prefix, e.g. --resource-pattern-type prefixed --topic your.application.id --operation All (see KIP-277 and KIP-290 for details).

      Security example

The goal of this example is to configure a Kafka Streams application to enable client authentication and encrypt data-in-transit when communicating with its Kafka cluster.

      This example assumes that the Kafka brokers in the cluster already have their security setup and that the necessary SSL certificates are available to the application in the local filesystem locations. For example, if you are using Docker then you must also include these SSL certificates in the correct locations within the Docker image.

      The snippet below shows the settings to enable client authentication and SSL encryption for data-in-transit between your Kafka Streams application and the Kafka cluster it is reading and writing from:

      # Essential security settings to enable client authentication and SSL encryption
      bootstrap.servers=kafka.example.com:9093
      security.protocol=SSL
      ssl.truststore.location=/etc/security/tls/kafka.client.truststore.jks
      ssl.truststore.password=test1234
      ssl.keystore.location=/etc/security/tls/kafka.client.keystore.jks
      ssl.keystore.password=test1234
      ssl.key.password=test1234
      

      Configure these settings in the application for your Properties instance. These settings will encrypt any data-in-transit that is being read from or written to Kafka, and your application will authenticate itself against the Kafka brokers that it is communicating with. Note that this example does not cover client authorization.

      // Code of your Java application that uses the Kafka Streams library
      Properties settings = new Properties();
      settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "secure-kafka-streams-app");
      // Where to find secure Kafka brokers.  Here, it's on port 9093.
      settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9093");
      //
      // ...further non-security related settings may follow here...
      //
      // Security settings.
      // 1. These settings must match the security settings of the secure Kafka cluster.
      // 2. The SSL trust store and key store files must be locally accessible to the application.
      settings.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
      settings.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/security/tls/kafka.client.truststore.jks");
      settings.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "test1234");
      settings.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/etc/security/tls/kafka.client.keystore.jks");
      settings.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "test1234");
      settings.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "test1234");
      

      If you incorrectly configure a security setting in your application, it will fail at runtime, typically right after you start it. For example, if you enter an incorrect password for the ssl.keystore.password setting, an error message similar to this would be logged and then the application would terminate:

      # Misconfigured ssl.keystore.password
      Exception in thread "main" org.apache.kafka.common.KafkaException: Failed to construct kafka producer
      [...snip...]
      Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException:
         java.io.IOException: Keystore was tampered with, or password was incorrect
      [...snip...]
      Caused by: java.security.UnrecoverableKeyException: Password verification failed
      

      Monitor your Kafka Streams application log files for such error messages to spot any misconfigured applications quickly.


      9.7.13 - Application Reset Tool

      Application Reset Tool

      You can reset an application and force it to reprocess its data from scratch by using the application reset tool. This can be useful for development and testing, or when fixing bugs.

      The application reset tool handles the Kafka Streams user topics (input and output) and internal topics differently when resetting the application.

      Here’s what the application reset tool does for each topic type:

      • Input topics: Reset offsets to the specified position (by default, to the beginning of the topic).
      • Internal topics: Delete the internal topic (this automatically deletes any committed offsets).

      The application reset tool does not:

      • Reset output topics of an application. If any output topics are consumed by downstream applications, it is your responsibility to adjust those downstream applications as appropriate when you reset the upstream application.
      • Reset the local environment of your application instances. It is your responsibility to delete the local state on any machine on which an application instance was run. See the instructions in section Step 2: Reset the local environments of your application instances on how to do this.

      Prerequisites

      • All instances of your application must be stopped. Otherwise, the application may enter an invalid state, crash, or produce incorrect results. You can verify whether the consumer group with ID application.id is still active by using bin/kafka-consumer-groups (see the example after this list). When a long session timeout has been configured, active members may take longer to expire on the broker, which blocks the reset job from completing. Using the --force option removes those left-over members immediately. Make sure to shut down all stream applications when this option is specified, to avoid unexpected rebalances.

      • Use this tool with care and double-check its parameters: If you provide wrong parameter values (e.g., typos in application.id) or specify parameters inconsistently (e.g., specify the wrong input topics for the application), this tool might invalidate the application’s state or even impact other applications, consumer groups, or your Kafka topics.
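
      For example, you could check for remaining active members by describing the consumer group with your application.id. The application ID and broker address below are hypothetical placeholders:

      $ bin/kafka-consumer-groups --bootstrap-server kafka.example.com:9092 \
          --describe --group my-streams-app
      

      If all instances have been shut down, the output typically reports that the group has no active members.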

      Step 1: Run the application reset tool

      Invoke the application reset tool from the command line

      Warning! This tool makes irreversible changes to your application. It is strongly recommended that you run this once with --dry-run to preview your changes before making them.

      $ bin/kafka-streams-application-reset
      

      The tool accepts the following parameters:

      Option (* = required)                 Description
      ---------------------                 -----------
      * --application-id <String: id>       The Kafka Streams application ID
                                              (application.id).
      --bootstrap-server <String: server to  REQUIRED unless --bootstrap-servers
        connect to>                             (deprecated) is specified. The
                                                server(s) to connect to. The broker
                                                list string in the form
                                                HOST1:PORT1,HOST2:PORT2.
      --by-duration <String: urls>          Reset offsets to offset by duration from
                                              current timestamp. Format: 'PnDTnHnMnS'
      --config-file <String: file name>     Property file containing configs to be
                                              passed to admin clients and embedded
                                              consumer.
      --dry-run                             Display the actions that would be
                                              performed without executing the reset
                                              commands.
      --from-file <String: urls>            Reset offsets to values defined in CSV
                                              file.
      --input-topics <String: list>         Comma-separated list of user input
                                              topics. For these topics, the tool will
                                              reset the offset to the earliest
                                              available offset.
      --internal-topics <String: list>      Comma-separated list of internal topics
                                              to delete. Must be a subset of the
                                              internal topics marked for deletion by
                                              the default behaviour (do a dry-run without
                                              this option to view these topics).
      --shift-by <Long: number-of-offsets>  Reset offsets shifting current offset by
                                              'n', where 'n' can be positive or
                                              negative
      --to-datetime <String>                Reset offsets to offset from datetime.
                                              Format: 'YYYY-MM-DDTHH:mm:SS.sss'
      --to-earliest                         Reset offsets to earliest offset.
      --to-latest                           Reset offsets to latest offset.
      --to-offset <Long>                    Reset offsets to a specific offset.
      --force                               Force removing members of the consumer group
                                            (intended to remove left-over members if
                                            long session timeout was configured).
      

      The following reset-offset scenarios are available for input topics:

      • by-duration
      • from-file
      • shift-by
      • to-datetime
      • to-earliest
      • to-latest
      • to-offset

      Only one of these scenarios can be specified. If none is specified, to-earliest is executed by default.

      All the other parameters can be combined as needed. For example, if you want to restart an application from an empty internal state, but not reprocess previous data, simply omit the parameter --input-topics.
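
      For illustration, the following dry run previews a full reset of a hypothetical application back to the beginning of its input topic; the application ID, broker address, and topic name are placeholders:

      $ bin/kafka-streams-application-reset --application-id my-streams-app \
          --bootstrap-server kafka.example.com:9092 \
          --input-topics my-input-topic \
          --to-earliest \
          --dry-run
      

      Dropping --input-topics (and --to-earliest) from this command deletes only the internal topics, i.e. it restarts the application from an empty internal state without reprocessing previously read input data.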

      Step 2: Reset the local environments of your application instances

      For a complete application reset, you must delete the application’s local state directory on any machine where an application instance was run. You must do this before restarting an application instance on the same machine. You can use either of these methods:

      • The API method KafkaStreams#cleanUp() in your application code (see the sketch after this list).
      • Manually delete the corresponding local state directory (default location: /${java.io.tmpdir}/kafka-streams/<application.id>). For more information, see Streams javadocs.
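
      A minimal sketch of the first option is shown below. The topology, configuration values, and class name are hypothetical placeholders; the point being illustrated is that cleanUp() may only be called while the instance is not running, i.e. before start() or after close().

      import java.util.Properties;

      import org.apache.kafka.streams.KafkaStreams;
      import org.apache.kafka.streams.StreamsBuilder;
      import org.apache.kafka.streams.StreamsConfig;
      import org.apache.kafka.streams.Topology;

      public class LocalStateResetExample {

          public static void main(final String[] args) {
              // Hypothetical minimal configuration; use your application's real settings.
              final Properties settings = new Properties();
              settings.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
              settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.com:9092");

              // Hypothetical pass-through topology; replace with your own processing logic.
              final StreamsBuilder builder = new StreamsBuilder();
              builder.stream("my-input-topic").to("my-output-topic");
              final Topology topology = builder.build();

              final KafkaStreams streams = new KafkaStreams(topology, settings);

              // Deletes this instance's local state directory for the application.id.
              // cleanUp() may only be called before start() or after close().
              streams.cleanUp();

              streams.start();
              Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
          }
      }
      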
