diff --git a/docs/images/streams-concepts-topology.jpg b/docs/images/streams-concepts-topology.jpg
new file mode 100644
index 00000000000..832f6d43a42
Binary files /dev/null and b/docs/images/streams-concepts-topology.jpg differ
diff --git a/docs/images/streams-table-duality-01.png b/docs/images/streams-table-duality-01.png
new file mode 100644
index 00000000000..4fa4d1bf8e4
Binary files /dev/null and b/docs/images/streams-table-duality-01.png differ
diff --git a/docs/images/streams-table-duality-02.png b/docs/images/streams-table-duality-02.png
new file mode 100644
index 00000000000..4e805c10ff5
Binary files /dev/null and b/docs/images/streams-table-duality-02.png differ
diff --git a/docs/images/streams-table-duality-03.png b/docs/images/streams-table-duality-03.png
new file mode 100644
index 00000000000..b0b04f59176
Binary files /dev/null and b/docs/images/streams-table-duality-03.png differ
diff --git a/docs/images/streams-table-updates-01.png b/docs/images/streams-table-updates-01.png
new file mode 100644
index 00000000000..3a2c35ef3b9
Binary files /dev/null and b/docs/images/streams-table-updates-01.png differ
diff --git a/docs/images/streams-table-updates-02.png b/docs/images/streams-table-updates-02.png
new file mode 100644
index 00000000000..a0a5b1ff53f
Binary files /dev/null and b/docs/images/streams-table-updates-02.png differ
diff --git a/docs/js/templateData.js b/docs/js/templateData.js
index 40c5da195b4..fbb9e4e1b09 100644
--- a/docs/js/templateData.js
+++ b/docs/js/templateData.js
@@ -18,4 +18,5 @@ limitations under the License.
 // Define variables for doc templates
 var context={
-    "version": "0101"
+    "version": "0101",
+    "dotVersion": "0.10.1"
 };
\ No newline at end of file
diff --git a/docs/quickstart.html b/docs/quickstart.html
index 763d3e36ff0..2080cc4dbd7 100644
--- a/docs/quickstart.html
+++ b/docs/quickstart.html
@@ -279,18 +279,30 @@ data in the topic (or use custom consumer code to process it):
 Kafka Streams is a client library of Kafka for real-time stream processing and analyzing data stored in Kafka brokers.
 This quickstart example will demonstrate how to run a streaming application coded in this library. Here is the gist
-of the WordCountDemo
-example code (converted to use Java 8 lambda expressions for easy reading).
+of the WordCountDemo
+example code (converted to use Java 8 lambda expressions for easy reading).
+// Serializers/deserializers (serde) for String and Long types
+final Serde&lt;String&gt; stringSerde = Serdes.String();
+final Serde&lt;Long&gt; longSerde = Serdes.Long();
+
+// Construct a `KStream` from the input topic "streams-file-input", where message values
+// represent lines of text (for the sake of this example, we ignore whatever may be stored
+// in the message keys).
+KStream&lt;String, String&gt; textLines = builder.stream(stringSerde, stringSerde, "streams-file-input");
+
 KTable&lt;String, Long&gt; wordCounts = textLines
     // Split each text line, by whitespace, into words.
     .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))

-    // Ensure the words are available as record keys for the next aggregate operation.
-    .map((key, value) -> new KeyValue&lt;&gt;(value, value))
+    // Group the text words as message keys
+    .groupBy((key, value) -> value)

-    // Count the occurrences of each word (record key) and store the results into a table named "Counts".
-    .countByKey("Counts")
+    // Count the occurrences of each word (message key).
+    .count("Counts");
+
+// Store the running counts as a changelog stream to the output topic.
+wordCounts.to(stringSerde, longSerde, "streams-wordcount-output");
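To make the per-operator comments in the hunk above easy to check, here is the same word-count logic expressed with plain Java 8 streams over an in-memory collection — no Kafka or Kafka Streams dependency involved. The class and method names are illustrative, not part of any Kafka API; this only mirrors the flatMap/groupBy/count shape of the topology.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WordCountSketch {

    // Mirrors the topology: split lines into lower-cased words
    // (flatMapValues), group by the word itself (groupBy), and
    // count occurrences per word (count).
    public static Map<String, Long> countWords(Stream<String> textLines) {
        return textLines
            .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+")))
            .filter(word -> !word.isEmpty())
            .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = countWords(Stream.of(
            "all streams lead to kafka",
            "hello kafka streams"));
        System.out.println(counts); // e.g. kafka=2, streams=2, hello=1, ...
    }
}
```

Unlike the Kafka Streams version, this sketch computes a single final result; the real application emits a continuously updated changelog of counts as new records arrive.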
@@ -303,7 +315,7 @@ unbounded input data, it will periodically output its current state and results
 because it cannot know when it has processed "all" the input data.
-We will now prepare input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.
+As the first step, we will prepare input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.
-
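As a concrete sketch of that first step, the input can be prepared as a plain text file before it is sent to the input topic (the path and example lines below are illustrative; the quickstart later feeds such a file into the "streams-file-input" topic via the console producer):

```shell
# Write a few example lines of text to serve as input for the
# word-count application (path and contents are illustrative).
printf 'all streams lead to kafka\nhello kafka streams\njoin kafka summit\n' > /tmp/file-input.txt

# Inspect what will later be sent to the input topic.
cat /tmp/file-input.txt
```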