<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<script><!--#include virtual="js/templateData.js" --></script>
<script id="quickstart-template" type="text/x-handlebars-template">
<p>
This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data.
Since the Kafka console scripts differ between Unix-based and Windows platforms, on Windows use <code>bin\windows\</code> instead of <code>bin/</code>, and change the script extension to <code>.bat</code>.
</p>
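<p>
For example, the topic listing command used later in this guide is invoked like this on Unix (first line) and on Windows (second line); the commands are shown here only to illustrate the path and extension change:
</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --list --zookeeper localhost:2181
&gt; bin\windows\kafka-topics.bat --list --zookeeper localhost:2181
</pre>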
<h4><a id="quickstart_download" href="#quickstart_download">Step 1: Download the code</a></h4>
<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_2.11-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
<pre class="brush: bash;">
&gt; tar -xzf kafka_2.11-{{fullDotVersion}}.tgz
&gt; cd kafka_2.11-{{fullDotVersion}}
</pre>
<h4><a id="quickstart_startserver" href="#quickstart_startserver">Step 2: Start the server</a></h4>
<p>
Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a>, so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.
</p>
<pre class="brush: bash;">
&gt; bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...
</pre>
<p>Now start the Kafka server:</p>
<pre class="brush: bash;">
&gt; bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...
</pre>
<h4><a id="quickstart_createtopic" href="#quickstart_createtopic">Step 3: Create a topic</a></h4>
<p>Let's create a topic named "test" with a single partition and only one replica:</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
</pre>
<p>We can now see that topic if we run the list topic command:</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --list --zookeeper localhost:2181
test
</pre>
<p>Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.</p>
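<p>
For reference, topic auto-creation is controlled by broker settings in <code>config/server.properties</code>; a minimal sketch (auto-creation is enabled by default, and the particular values below are only illustrative):
</p>
<pre class="brush: text;">
# allow brokers to auto-create topics on first use
auto.create.topics.enable=true
# defaults applied to auto-created topics
num.partitions=1
default.replication.factor=1
</pre>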
<h4><a id="quickstart_send" href="#quickstart_send">Step 4: Send some messages</a></h4>
<p>Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.</p>
<p>
Run the producer and then type a few messages into the console to send to the server.</p>
<pre class="brush: bash;">
&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
</pre>
<h4><a id="quickstart_consume" href="#quickstart_consume">Step 5: Start a consumer</a></h4>
<p>Kafka also has a command line consumer that will dump out messages to standard output.</p>
<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
</pre>
<p>
If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
</p>
<p>
All of the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.
</p>
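<p>
For example, running the topics tool with no arguments prints its usage message and the full list of supported options (output omitted here):
</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh
...
</pre>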
<h4><a id="quickstart_multibroker" href="#quickstart_multibroker">Step 6: Setting up a multi-broker cluster</a></h4>
<p>So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get a feel for it, let's expand our cluster to three nodes (still all on our local machine).</p>
<p>
First we make a config file for each of the brokers (on Windows use the <code>copy</code> command instead):
</p>
<pre class="brush: bash;">
&gt; cp config/server.properties config/server-1.properties
&gt; cp config/server.properties config/server-2.properties
</pre>
<p>
Now edit these new files and set the following properties:
</p>
<pre class="brush: text;">
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1

config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dir=/tmp/kafka-logs-2
</pre>
<p>The <code>broker.id</code> property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.</p>
<p>
We already have ZooKeeper and our single broker started, so we just need to start the two new nodes:
</p>
<pre class="brush: bash;">
&gt; bin/kafka-server-start.sh config/server-1.properties &amp;
...
&gt; bin/kafka-server-start.sh config/server-2.properties &amp;
...
</pre>
<p>Now create a new topic with a replication factor of three:</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
</pre>
<p>Okay, but now that we have a cluster, how can we know which broker is doing what? To see that, run the "describe topics" command:</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
</pre>
<p>Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic, there is only one line.</p>
<ul>
<li>"leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
<li>"replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
<li>"isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
</ul>
<p>Note that in this example node 1 is the leader for the only partition of the topic.</p>
<p>
We can run the same command on the original topic we created to see where it is:
</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
</pre>
<p>So there is no surprise there&mdash;the original topic has no replicas and is on server 0, the only server in our cluster when we created it.</p>
<p>
Let's publish a few messages to our new topic:
</p>
<pre class="brush: bash;">
&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C
</pre>
<p>Now let's consume these messages:</p>
<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
</pre>
<p>Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:</p>
<pre class="brush: bash;">
&gt; ps aux | grep server-1.properties
7564 ttys002 0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
&gt; kill -9 7564
</pre>
<p>On Windows use:</p>
<pre class="brush: bash;">
&gt; wmic process get processid,caption,commandline | find "java.exe" | find "server-1.properties"
java.exe java -Xmx1G -Xms1G -server -XX:+UseG1GC ... build\libs\kafka_2.11-{{fullDotVersion}}.jar" kafka.Kafka config\server-1.properties 644
&gt; taskkill /pid 644 /f
</pre>
<p>Leadership has switched to one of the followers and node 1 is no longer in the in-sync replica set:</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 2 Replicas: 1,2,0 Isr: 2,0
</pre>
<p>But the messages are still available for consumption even though the leader that took the writes originally is down:</p>
<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
</pre>
<h4><a id="quickstart_kafkaconnect" href="#quickstart_kafkaconnect">Step 7: Use Kafka Connect to import/export data</a></h4>
<p>Writing data from the console and writing it back to the console is a convenient place to start, but you'll probably want
to use data from other sources or export data from Kafka to other systems. For many systems, instead of writing custom
integration code you can use Kafka Connect to import or export data.</p>
<p>Kafka Connect is a tool included with Kafka that imports data into and exports data out of Kafka. It is an extensible tool that runs
<i>connectors</i>, which implement the custom logic for interacting with an external system. In this quickstart we'll see
how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a
Kafka topic to a file.</p>
<p>First, we'll start by creating some seed data to test with:</p>
<pre class="brush: bash;">
&gt; echo -e "foo\nbar" > test.txt
</pre>
<p>Next, we'll start two connectors running in <i>standalone</i> mode, which means they run in a single, local, dedicated
process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect
process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data.
The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector
class to instantiate, and any other configuration required by the connector.</p>
<pre class="brush: bash;">
&gt; bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties
</pre>
<p>
These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier
and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic,
and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.
</p>
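<p>
For reference, the two connector configuration files included with Kafka look roughly like this (abridged; the exact contents may vary slightly between releases):
</p>
<pre class="brush: text;">
config/connect-file-source.properties:
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test

config/connect-file-sink.properties:
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test
</pre>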
<p>
During startup you'll see a number of log messages, including some indicating that the connectors are being instantiated.
Once the Kafka Connect process has started, the source connector should start reading lines from <code>test.txt</code> and
producing them to the topic <code>connect-test</code>, and the sink connector should start reading messages from the topic <code>connect-test</code>
and write them to the file <code>test.sink.txt</code>. We can verify the data has been delivered through the entire pipeline
by examining the contents of the output file:
</p>
<pre class="brush: bash;">
&gt; cat test.sink.txt
foo
bar
</pre>
<p>
Note that the data is being stored in the Kafka topic <code>connect-test</code>, so we can also run a console consumer to see the
data in the topic (or use custom consumer code to process it):
</p>
<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-test --from-beginning
{"schema":{"type":"string","optional":false},"payload":"foo"}
{"schema":{"type":"string","optional":false},"payload":"bar"}
...
</pre>
<p>The connectors continue to process data, so we can add data to the file and see it move through the pipeline:</p>
<pre class="brush: bash;">
&gt; echo "Another line" >> test.txt
</pre>
<p>You should see the line appear in the console consumer output and in the sink file.</p>
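<p>
For example, you can check the end of the sink file directly (give the connectors a moment to pick up the new line):
</p>
<pre class="brush: bash;">
&gt; tail -n 1 test.sink.txt
Another line
</pre>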
<h4><a id="quickstart_kafkastreams" href="#quickstart_kafkastreams">Step 8: Use Kafka Streams to process data</a></h4>
<p>
Kafka Streams is Kafka's client library for real-time stream processing and analysis of data stored in Kafka brokers.
This quickstart example will demonstrate how to run a streaming application coded in this library. Here is the gist
of the <code><a href="https://github.com/apache/kafka/blob/{{dotVersion}}/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountDemo.java">WordCountDemo</a></code> example code (converted to use Java 8 lambda expressions for easy reading).
</p>
<pre class="brush: bash;">
// Serializers/deserializers (serde) for String and Long types
final Serde&lt;String&gt; stringSerde = Serdes.String();
final Serde&lt;Long&gt; longSerde = Serdes.Long();
// Construct a `KStream` from the input topic ""streams-file-input", where message values
// represent lines of text (for the sake of this example, we ignore whatever may be stored
// in the message keys).
KStream&lt;String, String&gt; textLines = builder.stream(stringSerde, stringSerde, "streams-file-input");
KTable&lt;String, Long&gt; wordCounts = textLines
// Split each text line, by whitespace, into words.
.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))
// Group the text words as message keys
.groupBy((key, value) -> value)
// Count the occurrences of each word (message key).
.count("Counts")
// Store the running counts as a changelog stream to the output topic.
wordCounts.to(stringSerde, longSerde, "streams-wordcount-output");
</pre>
<p>
It implements the WordCount
algorithm, which computes a word occurrence histogram from the input text. However, unlike other WordCount examples
you might have seen before that operate on bounded data, the WordCount demo application behaves slightly differently because it is
designed to operate on an <b>infinite, unbounded stream</b> of data. Similar to the bounded variant, it is a stateful algorithm that
tracks and updates the counts of words. However, since it must assume potentially
unbounded input data, it will periodically output its current state and results while continuing to process more data
because it cannot know when it has processed "all" the input data.
</p>
<p>
As the first step, we will prepare input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.
</p>
<pre class="brush: bash;">
&gt; echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt
</pre>
<p>Or on Windows:</p>
<pre class="brush: bash;">
&gt; echo all streams lead to kafka> file-input.txt
&gt; echo hello kafka streams>> file-input.txt
&gt; echo|set /p=join kafka summit>> file-input.txt
</pre>
<p>
Next, we create the input topic named <b>streams-file-input</b> and send this input data to it using the console producer,
which reads the data from STDIN line-by-line and publishes each line as a separate Kafka message with a null key and the value encoded as a string (in practice,
stream data will likely be flowing continuously into Kafka while the application is up and running):
</p>
<pre class="brush: bash;">
&gt; bin/kafka-topics.sh --create \
--zookeeper localhost:2181 \
--replication-factor 1 \
--partitions 1 \
--topic streams-file-input
</pre>
<pre class="brush: bash;">
&gt; bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input < file-input.txt
</pre>
<p>
We can now run the WordCount demo application to process the input data:
</p>
<pre class="brush: bash;">
&gt; bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
</pre>
<p>
The demo application will read from the input topic <b>streams-file-input</b>, perform the computations of the WordCount algorithm on each of the read messages,
and continuously write its current results to the output topic <b>streams-wordcount-output</b>.
Hence there won't be any STDOUT output except log entries as the results are written back into Kafka.
The demo will run for a few seconds and then, unlike typical stream processing applications, terminate automatically.
</p>
<p>
We can now inspect the output of the WordCount demo application by reading from its output topic:
</p>
<pre class="brush: bash;">
&gt; bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
--topic streams-wordcount-output \
--from-beginning \
--formatter kafka.tools.DefaultMessageFormatter \
--property print.key=true \
--property print.value=true \
--property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
--property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer
</pre>
<p>
with the following output data being printed to the console:
</p>
<pre class="brush: bash;">
all 1
lead 1
to 1
hello 1
streams 2
join 1
kafka 3
summit 1
</pre>
<p>
Here, the first column is the Kafka message key in <code>java.lang.String</code> format, and the second column is the message value in <code>java.lang.Long</code> format.
Note that the output is actually a continuous stream of updates, where each data record (i.e. each line in the original output above) is
an updated count of a single word (the record key), such as "kafka". For multiple records with the same key, each later record is an update of the previous one.
</p>
<p>
The two diagrams below illustrate what is essentially happening behind the scenes.
The first column shows the evolution of the current state of the <code>KTable&lt;String, Long&gt;</code> that counts word occurrences via <code>count</code>.
The second column shows the change records that result from state updates to the KTable and that are being sent to the output Kafka topic <b>streams-wordcount-output</b>.
</p>
<img src="/{{version}}/images/streams-table-updates-02.png" style="float: right; width: 25%;">
<img src="/{{version}}/images/streams-table-updates-01.png" style="float: right; width: 25%;">
<p>
First the text line “all streams lead to kafka” is being processed.
The <code>KTable</code> is being built up as each new word results in a new table entry (highlighted with a green background), and a corresponding change record is sent to the downstream <code>KStream</code>.
</p>
<p>
When the second text line “hello kafka streams” is processed, we observe, for the first time, that existing entries in the <code>KTable</code> are being updated (here: for the words “kafka” and for “streams”). And again, change records are being sent to the output topic.
</p>
<p>
And so on (we skip the illustration of how the third line is being processed). This explains why the output topic has the contents we showed above, because it contains the full record of changes.
</p>
<p>
Looking beyond the scope of this concrete example, what Kafka Streams is doing here is to leverage the duality between a table and a changelog stream (here: table = the KTable, changelog stream = the downstream KStream): you can publish every change of the table to a stream, and if you consume the entire changelog stream from beginning to end, you can reconstruct the contents of the table.
</p>
<p>
Now you can write more input messages to the <b>streams-file-input</b> topic and observe additional messages added
to the <b>streams-wordcount-output</b> topic, reflecting updated word counts (e.g., using the console producer and the
console consumer, as described above).
</p>
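<p>
For example, a minimal sketch of that loop (the extra input line below is only illustrative): pipe a new line into the console producer, re-run the WordCountDemo (recall that it terminates after a few seconds), and then re-run the console consumer command above to see the updated counts:
</p>
<pre class="brush: bash;">
&gt; echo "hello again kafka streams" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input
&gt; bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo
</pre>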
<p>You can stop the console consumer via <b>Ctrl-C</b>.</p>
</script>
<div class="p-quickstart"></div>