<!--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements. See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
-->

<script id="configuration-template" type="text/x-handlebars-template">
Kafka uses key-value pairs in the <a href="http://en.wikipedia.org/wiki/.properties">property file format</a> for configuration. These values can be supplied either from a file or programmatically.

<h3><a id="brokerconfigs" href="#brokerconfigs">3.1 Broker Configs</a></h3>

The essential configurations are the following:
<ul>
  <li><code>broker.id</code>
  <li><code>log.dirs</code>
  <li><code>zookeeper.connect</code>
</ul>
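
As an illustration, a minimal <code>server.properties</code> might contain just these three settings (the values below are the quickstart defaults and are illustrative only):
<pre class="brush: text;">
# Illustrative minimal server.properties; adjust the values for your environment.
broker.id=0
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
</pre>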

Topic-level configurations and defaults are discussed in more detail <a href="#topicconfigs">below</a>.

<!--#include virtual="generated/kafka_config.html" -->

<p>More details about broker configuration can be found in the scala class <code>kafka.server.KafkaConfig</code>.</p>

<h4><a id="dynamicbrokerconfigs" href="#dynamicbrokerconfigs">3.1.1 Updating Broker Configs</a></h4>
From Kafka version 1.1 onwards, some of the broker configs can be updated without restarting the broker. See the
<code>Dynamic Update Mode</code> column in <a href="#brokerconfigs">Broker Configs</a> for the update mode of each broker config.
<ul>
  <li><code>read-only</code>: Requires a broker restart for update</li>
  <li><code>per-broker</code>: May be updated dynamically for each broker</li>
  <li><code>cluster-wide</code>: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.</li>
</ul>

To alter the current broker configs for broker id 0 (for example, the number of log cleaner threads):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2
</pre>

To describe the current dynamic broker configs for broker id 0:
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe
</pre>

To delete a config override and revert to the statically configured or default value for broker id 0 (for example,
the number of log cleaner threads):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --delete-config log.cleaner.threads
</pre>

Some configs may be configured as a cluster-wide default to maintain consistent values across the whole cluster. All brokers
in the cluster will process the cluster default update. For example, to update log cleaner threads on all brokers:
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2
</pre>

To describe the currently configured dynamic cluster-wide default configs:
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe
</pre>

All configs that are configurable at cluster level may also be configured at per-broker level (e.g. for testing).
If a config value is defined at different levels, the following order of precedence is used:
<ul>
  <li>Dynamic per-broker config stored in ZooKeeper</li>
  <li>Dynamic cluster-wide default config stored in ZooKeeper</li>
  <li>Static broker config from <code>server.properties</code></li>
  <li>Kafka default, see <a href="#brokerconfigs">broker configs</a></li>
</ul>
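
For example, with the commands below (illustrative values), broker 0 would use 4 log cleaner threads while all other brokers would use the cluster-wide default of 2, because the dynamic per-broker value takes precedence over the cluster-wide default:
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=4
</pre>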

<h5>Updating Password Configs Dynamically</h5>
<p>Password config values that are dynamically updated are encrypted before storing in ZooKeeper. The broker config
<code>password.encoder.secret</code> must be configured in <code>server.properties</code> to enable dynamic update
of password configs. The secret may be different on different brokers.</p>
<p>The secret used for password encoding may be rotated with a rolling restart of brokers. The old secret used for encoding
passwords currently in ZooKeeper must be provided in the static broker config <code>password.encoder.old.secret</code> and
the new secret must be provided in <code>password.encoder.secret</code>. All dynamic password configs stored in ZooKeeper
will be re-encoded with the new secret when the broker starts up.</p>
<p>In Kafka 1.1.x, all dynamically updated password configs must be provided in every alter request when updating configs
using <code>kafka-configs.sh</code> even if the password config is not being altered. This constraint will be removed in
a future release.</p>
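
For example, to rotate the secret with a rolling restart, each broker's <code>server.properties</code> might carry both entries before the restart (the secret values below are illustrative):
<pre class="brush: text;">
# Illustrative server.properties entries during encoder secret rotation
password.encoder.old.secret=old-encoder-secret
password.encoder.secret=new-encoder-secret
</pre>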

<h5>Updating Password Configs in ZooKeeper Before Starting Brokers</h5>

From Kafka 2.0.0 onwards, <code>kafka-configs.sh</code> enables dynamic broker configs to be updated using ZooKeeper before
starting brokers for bootstrapping. This enables all password configs to be stored in encrypted form, avoiding the need for
clear passwords in <code>server.properties</code>. The broker config <code>password.encoder.secret</code> must also be specified
if any password configs are included in the alter command. Additional encryption parameters may also be specified. Password
encoder configs will not be persisted in ZooKeeper. For example, to store SSL key password for listener <code>INTERNAL</code>
on broker 0:

<pre class="brush: bash;">
> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type brokers --entity-name 0 --alter --add-config
    'listener.name.internal.ssl.key.password=key-password,password.encoder.secret=secret,password.encoder.iterations=8192'
</pre>

The configuration <code>listener.name.internal.ssl.key.password</code> will be persisted in ZooKeeper in encrypted
form using the provided encoder configs. The encoder secret and iterations are not persisted in ZooKeeper.

<h5>Updating SSL Keystore of an Existing Listener</h5>
Brokers may be configured with SSL keystores with short validity periods to reduce the risk of compromised certificates.
Keystores may be updated dynamically without restarting the broker. The config name must be prefixed with the listener prefix
<code>listener.name.{listenerName}.</code> so that only the keystore config of a specific listener is updated.
The following configs may be updated in a single alter request at per-broker level:
<ul>
  <li><code>ssl.keystore.type</code></li>
  <li><code>ssl.keystore.location</code></li>
  <li><code>ssl.keystore.password</code></li>
  <li><code>ssl.key.password</code></li>
</ul>
If the listener is the inter-broker listener, the update is allowed only if the new keystore is trusted by the truststore
configured for that listener. For other listeners, no trust validation is performed on the keystore by the broker. Certificates
must be signed by the same certificate authority that signed the old certificate to avoid any client authentication failures.
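
For example, the keystore of a listener named <code>EXTERNAL</code> on broker 0 might be updated as follows (the listener name, file path and passwords are illustrative):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config
    'listener.name.external.ssl.keystore.location=/path/to/new/kafka.keystore.jks,listener.name.external.ssl.keystore.password=keystore-password,listener.name.external.ssl.key.password=key-password'
</pre>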

<h5>Updating SSL Truststore of an Existing Listener</h5>
Broker truststores may be updated dynamically without restarting the broker to add or remove certificates.
The updated truststore will be used to authenticate new client connections. The config name must be prefixed with the
listener prefix <code>listener.name.{listenerName}.</code> so that only the truststore config of a specific listener
is updated. The following configs may be updated in a single alter request at per-broker level:
<ul>
  <li><code>ssl.truststore.type</code></li>
  <li><code>ssl.truststore.location</code></li>
  <li><code>ssl.truststore.password</code></li>
</ul>
If the listener is the inter-broker listener, the update is allowed only if the existing keystore for that listener is trusted by
the new truststore. For other listeners, no trust validation is performed by the broker before the update. Removal of CA certificates
used to sign client certificates from the new truststore can lead to client authentication failures.
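
Similarly, the truststore of a listener named <code>EXTERNAL</code> on broker 0 might be updated as follows (the listener name, file path and password are illustrative):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config
    'listener.name.external.ssl.truststore.location=/path/to/new/kafka.truststore.jks,listener.name.external.ssl.truststore.password=truststore-password'
</pre>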

<h5>Updating Default Topic Configuration</h5>
Default topic configuration options used by brokers may be updated without a broker restart. These configs apply to topics
that do not have a per-topic override for the equivalent config. One or more of these configs may be overridden at
cluster-default level used by all brokers.
<ul>
  <li><code>log.segment.bytes</code></li>
  <li><code>log.roll.ms</code></li>
  <li><code>log.roll.hours</code></li>
  <li><code>log.roll.jitter.ms</code></li>
  <li><code>log.roll.jitter.hours</code></li>
  <li><code>log.index.size.max.bytes</code></li>
  <li><code>log.flush.interval.messages</code></li>
  <li><code>log.flush.interval.ms</code></li>
  <li><code>log.retention.bytes</code></li>
  <li><code>log.retention.ms</code></li>
  <li><code>log.retention.minutes</code></li>
  <li><code>log.retention.hours</code></li>
  <li><code>log.index.interval.bytes</code></li>
  <li><code>log.cleaner.delete.retention.ms</code></li>
  <li><code>log.cleaner.min.compaction.lag.ms</code></li>
  <li><code>log.cleaner.max.compaction.lag.ms</code></li>
  <li><code>log.cleaner.min.cleanable.ratio</code></li>
  <li><code>log.cleanup.policy</code></li>
  <li><code>log.segment.delete.delay.ms</code></li>
  <li><code>unclean.leader.election.enable</code></li>
  <li><code>min.insync.replicas</code></li>
  <li><code>max.message.bytes</code></li>
  <li><code>compression.type</code></li>
  <li><code>log.preallocate</code></li>
  <li><code>log.message.timestamp.type</code></li>
  <li><code>log.message.timestamp.difference.max.ms</code></li>
</ul>
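
For example, retention could be lowered to three days for all topics that do not override it (the value is illustrative):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.retention.ms=259200000
</pre>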

From Kafka version 2.0.0 onwards, unclean leader election is automatically enabled by the controller when the config
<code>unclean.leader.election.enable</code> is dynamically updated.
In Kafka version 1.1.x, changes to <code>unclean.leader.election.enable</code> take effect only when a new controller is elected.
Controller re-election may be forced by running:

<pre class="brush: bash;">
> bin/zookeeper-shell.sh localhost
rmr /controller
</pre>

<h5>Updating Log Cleaner Configs</h5>
Log cleaner configs may be updated dynamically at cluster-default level used by all brokers. The changes take effect
on the next iteration of log cleaning. One or more of these configs may be updated:
<ul>
  <li><code>log.cleaner.threads</code></li>
  <li><code>log.cleaner.io.max.bytes.per.second</code></li>
  <li><code>log.cleaner.dedupe.buffer.size</code></li>
  <li><code>log.cleaner.io.buffer.size</code></li>
  <li><code>log.cleaner.io.buffer.load.factor</code></li>
  <li><code>log.cleaner.backoff.ms</code></li>
</ul>
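
For example, the cleaner's I/O rate could be throttled on all brokers without a restart (an illustrative value of 10 MB/s):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.io.max.bytes.per.second=10485760
</pre>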

<h5>Updating Thread Configs</h5>
The size of various thread pools used by the broker may be updated dynamically at cluster-default level used by all brokers.
Updates are restricted to the range <code>currentSize / 2</code> to <code>currentSize * 2</code> to ensure that config updates are
handled gracefully.
<ul>
  <li><code>num.network.threads</code></li>
  <li><code>num.io.threads</code></li>
  <li><code>num.replica.fetchers</code></li>
  <li><code>num.recovery.threads.per.data.dir</code></li>
  <li><code>log.cleaner.threads</code></li>
  <li><code>background.threads</code></li>
</ul>
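
For example, on a cluster currently running with <code>num.io.threads=8</code>, the pool could be doubled to the maximum allowed in a single update (the values are illustrative):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config num.io.threads=16
</pre>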

<h5>Updating ConnectionQuota Configs</h5>
The maximum number of connections allowed for a given IP/host by the broker may be updated dynamically at cluster-default level used by all brokers.
The changes apply to new connection attempts; existing connections are counted against the new limits.
<ul>
  <li><code>max.connections.per.ip</code></li>
  <li><code>max.connections.per.ip.overrides</code></li>
</ul>
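
For example, to raise the per-IP connection limit on all brokers (the value is illustrative):
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config max.connections.per.ip=512
</pre>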

<h5>Adding and Removing Listeners</h5>
<p>Listeners may be added or removed dynamically. When a new listener is added, security configs of the listener must be provided
as listener configs with the listener prefix <code>listener.name.{listenerName}.</code>. If the new listener uses SASL,
the JAAS configuration of the listener must be provided using the JAAS configuration property <code>sasl.jaas.config</code>
with the listener and mechanism prefix. See <a href="#security_jaas_broker">JAAS configuration for Kafka brokers</a> for details.</p>

<p>In Kafka version 1.1.x, the listener used for inter-broker communication may not be updated dynamically. To update the inter-broker
listener to a new listener, the new listener may be added on all brokers without restarting the broker. A rolling restart is then
required to update <code>inter.broker.listener.name</code>.</p>

In addition to all the security configs of new listeners, the following configs may be updated dynamically at per-broker level:
<ul>
  <li><code>listeners</code></li>
  <li><code>advertised.listeners</code></li>
  <li><code>listener.security.protocol.map</code></li>
</ul>
The inter-broker listener must be configured using the static broker configuration <code>inter.broker.listener.name</code>
or <code>inter.broker.security.protocol</code>.
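
As a sketch, a new PLAINTEXT listener named <code>INTERNAL</code> on port 9094 might be added to broker 0 as shown below, assuming the broker already has a <code>PLAINTEXT</code> listener on port 9092; square brackets group values that contain commas, and the listener name, ports and hosts are illustrative:
<pre class="brush: bash;">
> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --alter --add-config
    'listener.security.protocol.map=[PLAINTEXT:PLAINTEXT,INTERNAL:PLAINTEXT],listeners=[PLAINTEXT://:9092,INTERNAL://:9094],advertised.listeners=[PLAINTEXT://localhost:9092,INTERNAL://localhost:9094]'
</pre>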

<h3><a id="topicconfigs" href="#topicconfigs">3.2 Topic-Level Configs</a></h3>

Configurations pertinent to topics have both a server default as well as an optional per-topic override. If no per-topic configuration is given, the server default is used. The override can be set at topic creation time by giving one or more <code>--config</code> options. This example creates a topic named <i>my-topic</i> with a custom max message size and flush rate:
<pre class="brush: bash;">
> bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 \
    --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
</pre>
Overrides can also be changed or set later using the alter configs command. This example updates the max message size for <i>my-topic</i>:
<pre class="brush: bash;">
> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic
    --alter --add-config max.message.bytes=128000
</pre>

To check overrides set on the topic you can do
<pre class="brush: bash;">
> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic --describe
</pre>

To remove an override you can do
<pre class="brush: bash;">
> bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name my-topic
    --alter --delete-config max.message.bytes
</pre>

The following are the topic-level configurations. The server's default configuration for this property is given under the Server Default Property heading. A given server default config value only applies to a topic if it does not have an explicit topic config override.

<!--#include virtual="generated/topic_config.html" -->

<h3><a id="producerconfigs" href="#producerconfigs">3.3 Producer Configs</a></h3>

Below is the configuration of the producer:
<!--#include virtual="generated/producer_config.html" -->

<h3><a id="consumerconfigs" href="#consumerconfigs">3.4 Consumer Configs</a></h3>

Below is the configuration for the consumer:
<!--#include virtual="generated/consumer_config.html" -->

<h3><a id="connectconfigs" href="#connectconfigs">3.5 Kafka Connect Configs</a></h3>
Below is the configuration of the Kafka Connect framework.
<!--#include virtual="generated/connect_config.html" -->

<h4><a id="sourceconnectconfigs" href="#sourceconnectconfigs">3.5.1 Source Connector Configs</a></h4>
Below is the configuration of a source connector.
<!--#include virtual="generated/source_connector_config.html" -->

<h4><a id="sinkconnectconfigs" href="#sinkconnectconfigs">3.5.2 Sink Connector Configs</a></h4>
Below is the configuration of a sink connector.
<!--#include virtual="generated/sink_connector_config.html" -->

<h3><a id="streamsconfigs" href="#streamsconfigs">3.6 Kafka Streams Configs</a></h3>
Below is the configuration of the Kafka Streams client library.
<!--#include virtual="generated/streams_config.html" -->

<h3><a id="adminclientconfigs" href="#adminclientconfigs">3.7 Admin Configs</a></h3>
Below is the configuration of the Kafka Admin client library.
<!--#include virtual="generated/admin_client_config.html" -->
</script>

<div class="p-configuration"></div>