In release 0.9.0.0, the Kafka community added a number of features that, used either separately or together, increase security in a Kafka cluster. The following security measures are currently supported:
<li>Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL. Kafka supports the following SASL mechanisms:
<ul>
<li>SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0</li>
<li>SASL/PLAIN - starting at version 0.10.0.0</li>
<li>SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0</li>
</ul></li>
<li>Authentication of connections from brokers to ZooKeeper</li>
<li>Encryption of data transferred between brokers and clients, between brokers, or between brokers and tools using SSL (Note that there is a performance degradation when SSL is enabled, the magnitude of which depends on the CPU type and the JVM implementation.)</li>
<li>Authorization of read / write operations by clients</li>
<li>Authorization is pluggable and integration with external authorization services is supported</li>
</ul>
It's worth noting that security is optional - non-secured clusters are supported, as well as a mix of authenticated, unauthenticated, encrypted and non-encrypted clients.
Apache Kafka allows clients to use SSL for encryption of traffic as well as authentication. By default, SSL is disabled but can be turned on if needed.
The following paragraphs explain in detail how to set up your own PKI infrastructure, use it to create certificates, and configure Kafka to use them.
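The first step is to generate a key and certificate for each broker. A command along the following lines can be used (a sketch; the keystore name, validity and SAN values are placeholders to adapt to your environment):
<pre>
keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
</pre>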
This command assumes that you want to add hostname information to the certificate; if this is not the case, you can omit the extension parameter <code>-ext SAN=DNS:{FQDN},IP:{IPADDRESS1}</code>. Please see below for more information on this.
<h5>Host Name Verification</h5>
Host name verification, when enabled, is the process of checking attributes from the certificate that is presented by the server you are
connecting to against the actual hostname or IP address of that server, to ensure that you are indeed connecting to the correct server.<br>
The main reason for this check is to prevent man-in-the-middle attacks.
For Kafka, this check has been disabled by default for a long time, but as of Kafka 2.0.0 host name verification of servers is enabled by default
for client connections as well as inter-broker connections.<br>
Server host name verification may be disabled by setting <code>ssl.endpoint.identification.algorithm</code> to an empty string.<br>
For dynamically configured broker listeners, hostname verification may be disabled using <code>kafka-configs.sh</code>:<br>
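<pre>
bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="
</pre>
(The listener name <code>internal</code> and broker id <code>0</code> are placeholders for your own values.)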
Normally there is no good reason to disable hostname verification apart from being the quickest way to "just get it to work" followed
by the promise to "fix it later when there is more time"!<br>
Getting hostname verification right is not that hard when done at the right time, but gets much harder once the cluster is up and
running - do yourself a favor and do it now!
<p>If host name verification is enabled, clients will verify the server's fully qualified domain name (FQDN) or IP address against one of the following two fields:
<ol>
<li>Common Name (CN)</li>
<li><a href="https://tools.ietf.org/html/rfc5280#section-4.2.1.6">Subject Alternative Name (SAN)</a></li>
</ol><br>
While Kafka checks both fields, usage of the common name field for hostname verification has been
<ahref="https://tools.ietf.org/html/rfc2818#section-3.1">deprecated</a> since 2000 and should be avoided if possible. In addition the
SAN field is much more flexible, allowing for multiple DNS and IP entries to be declared in a certificate.<br>
Another advantage is that if the SAN field is used for hostname verification, the common name can be set to a more meaningful value for
authorization purposes. Since we need the SAN field to be contained in the signed certificate, it will be specified when generating the
signing request. It can also be specified when generating the keypair, but this will not automatically be copied into the signing request.<br>
To add a SAN field, append the following argument <code> -ext SAN=DNS:{FQDN},IP:{IPADDRESS} </code> to the keytool command:
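<pre>
keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -ext SAN=DNS:{FQDN},IP:{IPADDRESS}
</pre>
(The keystore name and alias here are illustrative; reuse the values from your existing keystore.)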
Note: <code>ssl.truststore.password</code> is technically optional but highly recommended (see the example broker configuration below). If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Other configuration settings that may also be needed on the broker side:
<li>ssl.client.auth=none ("required" => client authentication is required, "requested" => client authentication is requested and a client without certs can still connect. The usage of "requested" is discouraged as it provides a false sense of security, and misconfigured clients will still connect successfully.)</li>
<li>ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol. (Default is an empty list)</li>
<li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list the SSL/TLS protocols that you are going to accept from clients. Do note that SSL is deprecated in favor of TLS and using SSL in production is not recommended)</li>
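Putting it together, a broker configuration enabling an SSL listener might look like the following (a sketch; host names, file paths and passwords are placeholders):
<pre>
listeners=PLAINTEXT://host.name:9092,SSL://host.name:9093
ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.truststore.password=test1234
</pre>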
Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the <a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">JCE Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed in the JDK/JRE. See the
<a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html">JCA Providers Documentation</a> for more information.
</p>
<p>
The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the
implementation used with the <code>ssl.secure.random.implementation</code>. However, there are performance issues with some implementations (notably, the
default chosen on Linux systems, <code>NativePRNG</code>, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
consider explicitly setting the implementation to be used. The <code>SHA1PRNG</code> implementation is non-blocking, and has shown very good performance under heavy load.
</p>
SSL is supported only for the new Kafka producer and consumer; the older APIs are not supported. The configs for SSL will be the same for both producer and consumer.<br>
If client authentication is not required in the broker, then the following is a minimal configuration example:
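<pre>
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234
</pre>
(Paths and passwords are placeholders.)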
Note: <code>ssl.truststore.password</code> is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Other configuration settings that may also be needed depending on your requirements and the broker configuration:
<ol>
<li>ssl.provider (Optional). The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</li>
<li>ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.</li>
<li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side</li>
<li>ssl.truststore.type=JKS</li>
<li>ssl.keystore.type=JKS</li>
</ol>
<br>
Examples using console-producer and console-consumer:
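For example (a sketch; this assumes a client configuration file named client-ssl.properties like the one above; older Kafka versions use <code>--broker-list</code> instead of <code>--bootstrap-server</code> for the console producer):
<pre>
kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-ssl.properties
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties
</pre>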
If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one; your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (<a href="https://help.ubuntu.com/community/Kerberos">Ubuntu</a>, <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html">Redhat</a>). Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.</li>
<li><b>Create Kerberos Principals</b><br>
If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).<br>
If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:
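A sketch (hostname, keytab path and realm are placeholders):
<pre>
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
</pre>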
<li><b>Make sure all hosts are reachable using hostnames</b> - it is a Kerberos requirement that all your hosts can be resolved with their FQDNs.</li>
<li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab):
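A sketch (keytab location and principal are placeholders; the <tt>Client</tt> section is used for authenticating to ZooKeeper):
<pre>
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

// Zookeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
</pre>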
The <tt>KafkaServer</tt> section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
allows the broker to login using the keytab specified in this section. See <a href="#security_jaas_broker">notes</a> for more details on Zookeeper SASL configuration.
<li>Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
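<pre>
-Djava.security.krb5.conf=/etc/kafka/krb5.conf
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
</pre>
(Paths are placeholders.)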
</li>
<li>We must also configure the service name in server.properties, which should match the principal name of the kafka brokers. In the above example, the principal is "kafka/kafka1.hostname.com@EXAMPLE.COM", so:
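<pre>
sasl.kerberos.service.name=kafka
</pre>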
JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
<tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</li>
<li>Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting the Kafka client.</li>
<li>Optionally pass the krb5 file locations as JVM parameters to each client JVM (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
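<pre>
-Djava.security.krb5.conf=/etc/kafka/krb5.conf
</pre>
(The path is a placeholder.)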
<li><h4><aid="security_sasl_plain"href="#security_sasl_plain">Authentication using SASL/PLAIN</a></h4>
<p>SASL/PLAIN is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication.
Kafka supports a default implementation for SASL/PLAIN which can be extended for production use as described <a href="#security_sasl_plain_production">here</a>.</p>
The username is used as the authenticated <code>Principal</code> for configuration of ACLs etc.
<li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
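A sketch (user names and passwords are examples only):
<pre>
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
</pre>
This example defines two users (admin and alice). The properties <tt>username</tt> and <tt>password</tt> are used by the broker to initiate connections to other brokers, and the <tt>user_{userName}</tt> properties define the passwords of the users that connect to the broker.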
<li><h4><aid="security_sasl_oauthbearer"href="#security_sasl_oauthbearer">Authentication using SASL/OAUTHBEARER</a></h4>
<p>The <ahref="https://tools.ietf.org/html/rfc6749">OAuth 2 Authorization Framework</a> "enables a third-party application to obtain limited access to an HTTP service,
either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP
service, or by allowing the third-party application to obtain access on its own behalf." The SASL OAUTHBEARER mechanism
enables the use of the framework in a SASL (i.e. a non-HTTP) context; it is defined in <a href="https://tools.ietf.org/html/rfc7628">RFC 7628</a>.
The default OAUTHBEARER implementation in Kafka creates and validates <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Tokens</a>
and is only suitable for use in non-production Kafka installations. Refer to <a href="#security_sasl_oauthbearer_security">Security Considerations</a> for more details.</p>
<li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
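A sketch (the principal name "admin" is a placeholder):
<pre>
KafkaServer {
    org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
    unsecuredLoginStringClaim_sub="admin";
};
</pre>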
<li><h5><aid="security_sasl_oauthbearer_unsecured_retrieval"href="#security_sasl_oauthbearer_unsecured_retrieval">Unsecured Token Creation Options for SASL/OAUTHBEARER</a></h5>
<ul>
<li>The default implementation of SASL/OAUTHBEARER in Kafka creates and validates <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Tokens</a>.
While suitable only for non-production use, it does provide the flexibility to create arbitrary tokens in a DEV or TEST environment.</li>
<li>Here are the various supported JAAS module options on the client side (and on the broker side if OAUTHBEARER is the inter-broker protocol):
<table>
<tr>
<th>JAAS Module Option for Unsecured Token Creation</th>
<th>Documentation</th>
</tr>
<tr>
<td><tt>unsecuredLoginPrincipalClaimName</tt></td>
<td>Set to a custom claim name if you wish the name of the <tt>String</tt>
claim holding the principal name to be something other than '<tt>sub</tt>'.</td>
</tr>
<tr>
<td><tt>unsecuredLoginLifetimeSeconds</tt></td>
<td>Set to an integer value if the token expiration is to be set to something
other than the default value of 3600 seconds (which is 1 hour). The
'<tt>exp</tt>' claim will be set to reflect the expiration time.</td>
</tr>
<tr>
<td><tt>unsecuredLoginScopeClaimName</tt></td>
<td>Set to a custom claim name if you wish the name of the <tt>String</tt> or
<tt>String List</tt> claim holding any token scope to be something other than
'<tt>scope</tt>'.</td>
</tr>
</table>
</li>
</ul>
</li>
<li><h5><aid="security_sasl_oauthbearer_unsecured_validation"href="#security_sasl_oauthbearer_unsecured_validation">Unsecured Token Validation Options for SASL/OAUTHBEARER</a></h5>
<ul>
<li>Here are the various supported JAAS module options on the broker side for <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Token</a> validation:
<table>
<tr>
<th>JAAS Module Option for Unsecured Token Validation</th>
<th>Documentation</th>
</tr>
</table>
</li>
</ul>
</li>
<li><h5><aid="security_sasl_oauthbearer_security"href="#security_sasl_oauthbearer_security">Security Considerations for SASL/OAUTHBEARER</a></h5>
<ul>
<li>The default implementation of SASL/OAUTHBEARER in Kafka creates and validates <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Tokens</a>.
This is suitable only for non-production use.</li>
<li>OAUTHBEARER should be used in production environments only with TLS encryption to prevent interception of tokens.</li>
<li>The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments)
using custom login and SASL Server callback handlers as described above.</li>
<li>For more details on OAuth 2 security considerations in general, refer to <a href="https://tools.ietf.org/html/rfc6749#section-10">RFC 6749, Section 10</a>.</li>
</ul>
</li>
<li><h4><aid="saslmechanism_rolling_upgrade"href="#saslmechanism_rolling_upgrade">Modifying SASL mechanism in a Running Cluster</a></h4>
<p>The SASL mechanism can be modified in a running cluster using the following sequence:</p>
<ol>
<li>Enable the new SASL mechanism by adding the mechanism to <tt>sasl.enabled.mechanisms</tt> in server.properties for each broker. Update the JAAS config file to include both
mechanisms as described <a href="#security_sasl_multimechanism">here</a>. Incrementally bounce the cluster nodes.</li>
<li>Restart clients using the new mechanism.</li>
<li>To change the mechanism of inter-broker communication (if this is required), set <tt>sasl.mechanism.inter.broker.protocol</tt> in server.properties to the new mechanism and
incrementally bounce the cluster again.</li>
<li>To remove the old mechanism (if this is required), remove the old mechanism from <tt>sasl.enabled.mechanisms</tt> in server.properties and remove the entries for the
old mechanism from the JAAS config file. Incrementally bounce the cluster again.</li>
</ol>
</li>
<li><h4><aid="security_delegation_token"href="#security_delegation_token">Authentication using Delegation Tokens</a></h4>
<p>Delegation token based authentication is a lightweight authentication mechanism to complement existing SASL/SSL
methods. Delegation tokens are shared secrets between Kafka brokers and clients. Delegation tokens will help processing
frameworks to distribute the workload to available workers in a secure environment without the added cost of distributing
Kerberos TGT/keytabs or keystores when 2-way SSL is used. See <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka">KIP-48</a>
for more details.</p>
<p>Typical steps for delegation token usage are:</p>
<ol>
<li>User authenticates with the Kafka cluster via SASL or SSL, and obtains a delegation token. This can be done
using Admin APIs or the <tt>kafka-delegation-tokens.sh</tt> script.</li>
</ol>
Kafka ships with a pluggable Authorizer and an out-of-the-box authorizer implementation that uses ZooKeeper to store all the acls. The Authorizer is configured by setting <tt>authorizer.class.name</tt> in server.properties. To enable the out-of-the-box implementation use:
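<pre>
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
</pre>
(This is the class name for Kafka 2.4 and later; earlier releases used <tt>kafka.security.auth.SimpleAclAuthorizer</tt>.)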
Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface">KIP-11</a> and resource patterns in <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs">KIP-290</a>. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if no ResourcePatterns match a specific Resource R, then R has no associated acls, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties.
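<pre>
allow.everyone.if.no.acl.found=true
</pre>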
One can also add super users in server.properties like the following (note that the delimiter is semicolon since SSL user names may contain comma). The default PrincipalType string "User" is case sensitive.
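<pre>
super.users=User:Bob;User:Alice
</pre>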
<h5><aid="security_authz_ssl"href="#security_authz_ssl">Customizing SSL User Name</a></h5>
By default, the SSL user name will be of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". One can change that by setting <code>ssl.principal.mapping.rules</code> to a customized rule in server.properties.
This config allows a list of rules for mapping X.500 distinguished name to short name. The rules are evaluated in order and the first rule that matches a distinguished name is used to map it to a short name. Any later rules in the list are ignored.
<br>The format of <code>ssl.principal.mapping.rules</code> is a list where each rule starts with "RULE:" and contains an expression in the following formats. The default rule returns the
string representation of the X.500 certificate distinguished name. If the distinguished name matches the pattern, then the replacement command is run over the name.
This also supports lowercase/uppercase options, to force the translated result to be all lowercase or all uppercase. This is done by adding a "/L" or "/U" to the end of the rule.
<pre>
RULE:pattern/replacement/
RULE:pattern/replacement/[LU]
</pre>
Example <code>ssl.principal.mapping.rules</code> values are:
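One possible configuration (a sketch; adjust the patterns to your certificate DNs):
<pre>
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/, \
        RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L, \
        DEFAULT
</pre>
The first rule, for instance, maps a distinguished name like "CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to the short name "serviceuser".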
By default, the SASL user name will be the primary part of the Kerberos principal. One can change that by setting <code>sasl.kerberos.principal.to.local.rules</code> to a customized rule in server.properties.
The format of <code>sasl.kerberos.principal.to.local.rules</code> is a list where each rule works in the same way as the auth_to_local in the <a href="http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos configuration file (krb5.conf)</a>. This also supports an additional lowercase/uppercase rule, to force the translated result to be all lowercase or all uppercase. This is done by adding a "/L" or "/U" to the end of the rule. See the formats below for the syntax.
<h4><aid="security_authz_cli"href="#security_authz_cli">Command Line Interface</a></h4>
The Kafka Authorization management CLI can be found under the bin directory with all the other CLIs. The CLI script is called <b>kafka-acls.sh</b>. The following lists all the options that the script supports:
<table>
<tr>
<th>Option</th>
<th>Description</th>
<th>Default</th>
<th>Option type</th>
</tr>
<tr>
<td>--authorizer-properties</td>
<td>key=val pairs that will be passed to authorizer for initialization. For the default authorizer the example values are: zookeeper.connect=localhost:2181</td>
<td></td>
<td>Configuration</td>
</tr>
<tr>
<td>--bootstrap-server</td>
<td>A list of host/port pairs to use for establishing the connection to the Kafka cluster. Only one of --bootstrap-server or --authorizer option must be specified.</td>
<td></td>
<td>Configuration</td>
</tr>
<tr>
<td>--command-config</td>
<td>A property file containing configs to be passed to Admin Client. This option can only be used with --bootstrap-server option.</td>
<td></td>
<td>Configuration</td>
</tr>
<tr>
<td>--resource-pattern-type [pattern-type]</td>
<td>Indicates to the script the type of resource pattern, (for --add), or resource pattern filter, (for --list and --remove), the user wishes to use.<br>
When adding acls, this should be a specific pattern type, e.g. 'literal' or 'prefixed'.<br>
When listing or removing acls, a specific pattern type filter can be used to list or remove acls from a specific type of resource pattern,
or the filter values of 'any' or 'match' can be used, where 'any' will match any pattern type, but will match the resource name exactly,
and 'match' will perform pattern matching to list or remove all acls that affect the supplied resource(s).<br>
WARNING: 'match', when used in combination with the '--remove' switch, should be used with care.</td>
<td>literal</td>
<td>Configuration</td>
</tr>
<tr>
<td>--allow-principal</td>
<td>Principal is in PrincipalType:name format that will be added to ACL with Allow permission. Default PrincipalType string "User" is case sensitive.<br>You can specify multiple --allow-principal in a single command.</td>
<td></td>
<td>Principal</td>
</tr>
<tr>
<td>--deny-principal</td>
<td>Principal is in PrincipalType:name format that will be added to ACL with Deny permission. Default PrincipalType string "User" is case sensitive.<br>You can specify multiple --deny-principal in a single command.</td>
<td></td>
<td>Principal</td>
</tr>
<tr>
<td>--principal</td>
<td>Principal is in PrincipalType:name format that will be used along with --list option. Default PrincipalType string "User" is case sensitive. This will list the ACLs for the specified principal.<br>You can specify multiple --principal in a single command.</td>
<td></td>
<td>Principal</td>
</tr>
</table>
Suppose you want to add an acl "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with the following options:
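<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
</pre>
(These examples use the default ZooKeeper-based authorizer; the same options also work with --bootstrap-server instead of --authorizer-properties.)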
By default, all principals that don't have an explicit acl that allows access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principal, we will have to use the --deny-principal and --deny-host options. For example, if we want to allow all users to Read from Test-topic but only deny User:BadBob from IP 198.51.100.3, we can do so using the following commands:
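<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:'*' --allow-host '*' --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic
</pre>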
The above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly, one can add acls to a cluster by specifying --cluster and to a consumer group by specifying --group [group-name].
You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl "Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.0"
You can do that by using the wildcard resource '*', e.g. by executing the CLI with the following options:
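<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Peter --allow-host 198.51.200.0 --producer --topic '*'
</pre>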
You can add acls on prefixed resource patterns, e.g. suppose you want to add an acl "Principal User:Jane is allowed to produce to any Topic whose name starts with 'Test-' from any host". You can do that by executing the CLI with the following options:
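<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed
</pre>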
Note, --resource-pattern-type defaults to 'literal', which only affects resources with the exact same name or, in the case of the wildcard resource name '*', a resource with any name.</li>
Removing acls is pretty much the same. The only difference is that instead of the --add option, users will have to specify the --remove option. To remove the acls added by the first example above, we can execute the CLI with the following options:
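<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic
</pre>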
We can list acls for any resource by specifying the --list option with the resource. To list all acls on the literal resource pattern Test-topic, we can execute the CLI with the following options:
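<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic
</pre>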
However, it is not necessarily possible to explicitly query for acls on prefixed resource patterns that match Test-topic as the name of such patterns may not be known.
We can list <i>all</i> acls affecting Test-topic by using '--resource-pattern-type match', e.g.:
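<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic --resource-pattern-type match
</pre>
This will list acls on all matching literal, wildcard and prefixed resource patterns.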
<li><b>Adding or removing a principal as producer or consumer</b><br>
The most common use case for acl management is adding/removing a principal as a producer or consumer, so we added convenience options to handle these cases. In order to add User:Bob as a producer of Test-topic, we can execute the following command:
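<pre>
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic Test-topic
</pre>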
Users having Alter permission on ClusterResource can use the Admin API for ACL management. The kafka-acls.sh script supports the AdminClient API to manage ACLs without interacting with zookeeper/authorizer directly.
<h3><aid="security_rolling_upgrade"href="#security_rolling_upgrade">7.5 Incorporating Security Features in a Running Cluster</a></h3>
You can secure a running cluster via one or more of the supported protocols discussed previously. This is done in phases:
<p></p>
<ul>
<li>Incrementally bounce the cluster nodes to open additional secured port(s).</li>
<li>Restart clients using the secured rather than PLAINTEXT port (assuming you are securing the client-broker connection).</li>
<li>Incrementally bounce the cluster again to enable broker-to-broker security (if this is required)</li>
<li>A final incremental bounce to close the PLAINTEXT port.</li>
</ul>
<p></p>
The specific steps for configuring SSL and SASL are described in sections <ahref="#security_ssl">7.2</a> and <ahref="#security_sasl">7.3</a>.
Follow these steps to enable security for your desired protocol(s).
<p></p>
The security implementation lets you configure different protocols for both broker-client and broker-broker communication.
These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout so brokers and/or clients can continue to communicate.
<p></p>
When performing an incremental bounce, stop the brokers cleanly via a SIGTERM. It's also good practice to wait for restarted replicas to return to the ISR list before moving on to the next node.
As an example, say we wish to encrypt both broker-client and broker-broker communication with SSL. In the first incremental bounce, an SSL port is opened on each node:
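<pre>
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
</pre>
(Here broker1 stands for each broker's host name; the ports are examples.)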
Alternatively we might choose to open multiple ports so that different protocols can be used for broker-broker and broker-client communication. Say we wished to use SSL encryption throughout (i.e. for broker-broker and broker-client communication) but we'd like to add SASL authentication to the broker-client connection also. We would achieve this by opening two additional ports during the first bounce:
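<pre>
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
</pre>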
ZooKeeper supports mutual TLS (mTLS) authentication beginning with the 3.5.x versions.
Kafka supports authenticating to ZooKeeper with SASL and mTLS -- either individually or both together --
beginning with version 2.5. See
<ahref="https://cwiki.apache.org/confluence/display/KAFKA/KIP-515%3A+Enable+ZK+client+to+use+the+new+TLS+supported+authentication">KIP-515: Enable ZK client to use the new TLS supported authentication</a>
for more details.
<p>When using mTLS alone, every broker and any CLI tools (such as the <a href="#zk_authz_migration">ZooKeeper Security Migration Tool</a>)
should identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed.
This can be changed as described below, but it involves writing and deploying a custom ZooKeeper authentication provider.
Generally each certificate should have the same DN but a different Subject Alternative Name (SAN)
so that hostname verification of the brokers and any CLI tools by ZooKeeper will succeed.
<p>
When using SASL authentication to ZooKeeper together with mTLS, both the SASL identity and
either the DN that created the znode (i.e. the creating broker's certificate)
or the DN of the Security Migration Tool (if migration was performed after the znode was created)
will be ACL'ed, and all brokers and CLI tools will be authorized even if they all use different DNs
because they will all use the same ACL'ed SASL identity.
It is only when using mTLS authentication alone that all the DNs must match (and SANs become critical --
again, in the absence of writing and deploying a custom ZooKeeper authentication provider as described below).
</p>
</p>
<p>
Use the broker properties file to set TLS configs for brokers as described below.
</p>
<p>
Use the <tt>--zk-tls-config-file <file></tt> option to set TLS configs in the ZooKeeper Security Migration Tool.
The <tt>kafka-acls.sh</tt> and <tt>kafka-configs.sh</tt> CLI tools also support the <tt>--zk-tls-config-file <file></tt> option.
</p>
<p>
Use the <tt>-zk-tls-config-file <file></tt> option (note the single-dash rather than double-dash)
to set TLS configs for the <tt>zookeeper-shell.sh</tt> CLI tool.
The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper).
If you are running a version of Kafka that does not support security, or are simply running with security disabled, and you want to make the cluster secure, then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations:
<li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations (including connecting to the TLS-enabled ZooKeeper port) as required, which enables brokers to authenticate to ZooKeeper. At the end of the rolling restart, brokers are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs</li>
<li>If you enabled mTLS, disable the non-TLS port in ZooKeeper</li>
<li>Perform a second rolling restart of brokers, this time setting the configuration parameter <tt>zookeeper.set.acl</tt> to true, which enables the use of secure ACLs when creating znodes</li>
<li>Execute the ZkSecurityMigrator tool. To execute the tool, run the script <tt>bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file <file></code> option if you enable mTLS.</li>
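For example (the ZooKeeper connect string is a placeholder):
<pre>
bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181
</pre>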
It is also possible to turn off authentication in a secure cluster. To do it, follow these steps:
<li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations, which enables brokers to authenticate, but setting <tt>zookeeper.set.acl</tt> to false. At the end of the rolling restart, brokers stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes</li>
<li>Execute the ZkSecurityMigrator tool. To execute the tool, run the script <tt>bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to unsecure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file <file></code> option if you need to set TLS configuration.</li>
<li>If you are disabling mTLS, enable the non-TLS port in ZooKeeper</li>
<li>Perform a second rolling restart of brokers, this time omitting the system property that sets the JAAS login file and/or removing ZooKeeper mutual TLS configuration (including connecting to the non-TLS-enabled ZooKeeper port) as required</li>
<li>If you are disabling mTLS, disable the TLS port in ZooKeeper</li>
It is also necessary to enable SASL and/or mTLS authentication on the ZooKeeper ensemble. To do it, we need to perform a rolling restart of the server and set a few properties. See above for mTLS information, and please refer to the ZooKeeper documentation for more detail.