
MINOR: fix quotes for consistent rendering

Curly quotes (“) as opposed to straight quotes (") don't render consistently across browsers. On the current Kafka website they render correctly in Firefox but not in Chrome (“runs”) - a charset issue?

Author: Michal Borowiecki <michal.borowiecki@openbet.com>
Author: mihbor <mbor81@gmail.com>

Reviewers: Guozhang Wang <wangguoz@gmail.com>

Closes #3227 from mihbor/patch-7
commit 5b063252e2
docs/streams.html | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
@@ -193,7 +193,7 @@
 </p>
 <p>
-It is important to understand that Kafka Streams is not a resource manager, but a library that “runs” anywhere its stream processing application runs.
+It is important to understand that Kafka Streams is not a resource manager, but a library that "runs" anywhere its stream processing application runs.
 Multiple instances of the application are executed either on the same machine, or spread across multiple machines and tasks can be distributed automatically
 by the library to those running application instances. The assignment of partitions to tasks never changes; if an application instance fails, all its assigned
 tasks will be automatically restarted on other instances and continue to consume from the same stream partitions.
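
As context for the line under change: because Kafka Streams is just a library, an application instance is any JVM that constructs and starts a KafkaStreams client. A minimal sketch against the API of this era (the application id, broker address, and builder topology are illustrative assumptions, not part of this commit):

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;

Properties props = new Properties();
// Instances sharing the same application.id form one logical application;
// the library divides the partitions (and hence the tasks) among them.
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

KStreamBuilder builder = new KStreamBuilder();
// ... define the processing topology on builder ...

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();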
@@ -421,7 +421,7 @@ builder.addSource("SOURCE", "src-topic")
 that keep recently arrived records to use in stateful processing operations such as windowed joins or aggregation.
 To take advantage of these states, users can define a state store by implementing the <code>StateStore</code> interface (the Kafka Streams library also has a few extended interfaces such as <code>KeyValueStore</code>);
 in practice, though, users usually do not need to customize such a state store from scratch but can simply use the <code>Stores</code> factory to define a state store by specifying whether it should be persistent, log-backed, etc.
-In the following example, a persistent key-value store named “Counts” with key type <code>String</code> and value type <code>Long</code> is created.
+In the following example, a persistent key-value store named "Counts" with key type <code>String</code> and value type <code>Long</code> is created.
 </p>
 <pre class="brush: java;">
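
The store definition itself is cut off at the hunk boundary above. As a hedged sketch of what the <code>Stores</code> factory usage looks like against the pre-1.0 builder API (the variable name is an assumption):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.processor.StateStoreSupplier;
import org.apache.kafka.streams.state.Stores;

// A persistent key-value store named "Counts" with String keys and Long
// values, as described in the surrounding paragraph.
StateStoreSupplier countStore = Stores.create("Counts")
    .withKeys(Serdes.String())
    .withValues(Serdes.Long())
    .persistent()
    .build();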
@@ -482,12 +482,12 @@ builder.addSource("SOURCE", "src-topic")
 The <b>stream-table duality</b> describes the close relationship between streams and tables.
 <ul>
-<li><b>Stream as Table</b>: A stream can be considered a changelog of a table, where each data record in the stream captures a state change of the table. A stream is thus a table in disguise, and it can be easily turned into a “real” table by replaying the changelog from beginning to end to reconstruct the table. Similarly, in a more general analogy, aggregating data records in a stream – such as computing the total number of pageviews by user from a stream of pageview events – will return a table (here with the key and the value being the user and its corresponding pageview count, respectively).</li>
-<li><b>Table as Stream</b>: A table can be considered a snapshot, at a point in time, of the latest value for each key in a stream (a stream's data records are key-value pairs). A table is thus a stream in disguise, and it can be easily turned into a “real” stream by iterating over each key-value entry in the table.</li>
+<li><b>Stream as Table</b>: A stream can be considered a changelog of a table, where each data record in the stream captures a state change of the table. A stream is thus a table in disguise, and it can be easily turned into a "real" table by replaying the changelog from beginning to end to reconstruct the table. Similarly, in a more general analogy, aggregating data records in a stream - such as computing the total number of pageviews by user from a stream of pageview events - will return a table (here with the key and the value being the user and its corresponding pageview count, respectively).</li>
+<li><b>Table as Stream</b>: A table can be considered a snapshot, at a point in time, of the latest value for each key in a stream (a stream's data records are key-value pairs). A table is thus a stream in disguise, and it can be easily turned into a "real" stream by iterating over each key-value entry in the table.</li>
 </ul>
 <p>
-Let's illustrate this with an example. Imagine a table that tracks the total number of pageviews by user (first column of diagram below). Over time, whenever a new pageview event is processed, the state of the table is updated accordingly. Here, the state changes between different points in time – and different revisions of the table – can be represented as a changelog stream (second column).
+Let's illustrate this with an example. Imagine a table that tracks the total number of pageviews by user (first column of diagram below). Over time, whenever a new pageview event is processed, the state of the table is updated accordingly. Here, the state changes between different points in time - and different revisions of the table - can be represented as a changelog stream (second column).
 </p>
 <img class="centered" src="/{{version}}/images/streams-table-duality-02.png" style="width:300px">
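
The aggregation mentioned in the "Stream as Table" bullet can be made concrete with a short DSL sketch (topic name, types, and store name are assumptions):

import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

KStreamBuilder builder = new KStreamBuilder();
// A stream of pageview events, keyed by user id.
KStream<String, String> pageviews = builder.stream("pageview-events");
// Aggregating the stream yields a table: user id -> total pageview count.
KTable<String, Long> pageviewsByUser = pageviews
    .groupByKey()
    .count("PageviewCounts");
// That table is in turn backed by a changelog stream - table as stream.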
@@ -510,7 +510,7 @@ builder.addSource("SOURCE", "src-topic")
 <b>GlobalKTable</b> also provides the ability to look up current values of data records by keys.
 This table-lookup functionality is available through <a href="#streams_dsl_joins">join operations</a>.
-To illustrate the difference between KStreams and KTables/GlobalKTables, lets imagine the following two data records are being sent to the stream:
+To illustrate the difference between KStreams and KTables/GlobalKTables, let's imagine the following two data records are being sent to the stream:
 <pre>
 ("alice", 1) --> ("alice", 3)
@@ -555,8 +555,8 @@ GlobalKTable&lt;String, GenericRecord&gt; source2 = builder.globalTable("topic4"
 </p>
 <p>
-Late-arriving records are always possible in real-time data streams. However, it depends on the effective <a href="#streams_time">time semantics</a> how late records are handled. Using processing-time, the semantics are “when the data is being processed”,
-which means that the notion of late records is not applicable as, by definition, no record can be late. Hence, late-arriving records only really can be considered as such (i.e. as arriving “late”) for event-time or ingestion-time semantics. In both cases,
+Late-arriving records are always possible in real-time data streams. However, it depends on the effective <a href="#streams_time">time semantics</a> how late records are handled. Using processing-time, the semantics are "when the data is being processed",
+which means that the notion of late records is not applicable as, by definition, no record can be late. Hence, late-arriving records only really can be considered as such (i.e. as arriving "late") for event-time or ingestion-time semantics. In both cases,
 Kafka Streams is able to properly handle late-arriving records.
 </p>
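
The processing-time semantics discussed in these lines are selected through the timestamp extractor configuration. A hedged sketch against the pre-1.0 config key (later releases deprecated it in favor of default.timestamp.extractor):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

Properties props = new Properties();
// Wall-clock time = processing-time semantics: by definition no record can be late.
props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
        WallclockTimestampExtractor.class.getName());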
@@ -804,7 +804,7 @@ streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
 streams.close();
 </pre>
-Now it's time to execute your application that uses the Kafka Streams library, which can be run just like any other Java application – there is no special magic or requirement on the side of Kafka Streams.
+Now it's time to execute your application that uses the Kafka Streams library, which can be run just like any other Java application - there is no special magic or requirement on the side of Kafka Streams.
 For example, you can package your Java application as a fat jar file and then start the application via:
 <pre class="brush: bash;">
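
The launch command itself falls outside this hunk. As a hedged illustration (the jar path and main class are placeholders, not the actual command from the docs), it has the shape:

$ java -cp path-to-app-fatjar.jar com.example.MyStreamsApp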
