
KAFKA-8227 DOCS Fixed missing links duality of streams tables (#6625)

Fixed missing links in the stream-table duality section of the Streams docs.

Reviewers: Jim Galasyn <jim.galasyn@confluent.io>, Bill Bejeck <bbejeck@gmail.com>
pull/6634/head
Authored by Victoria Bialas 6 years ago; committed by Bill Bejeck
commit dd8131499f
1 changed file, 21 changed lines: docs/streams/core-concepts.html
@@ -63,7 +63,7 @@
<ul>
<li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
<li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
<li>A <b><a id="streams_processor_node" href="#streams_processor_node">stream processor</a></b> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
<li>A <a id="defining-a-stream-processor" href="/{{version}}/documentation/streams/developer-guide/processor-api#defining-a-stream-processor"><b>stream processor</b></a> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
</ul>
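To make these definitions concrete, here is a minimal sketch of a stream processing application written against the Kafka Streams Java DSL; the application id, bootstrap servers, and topic names below are placeholders, not anything defined in this commit. Its processor topology has a source processor reading one stream, a single stream processor node transforming each record, and a sink processor writing the results.

// Minimal sketch of a stream processing application and its processor topology.
// "uppercase-example", "localhost:9092", "text-input", and "uppercased-output" are placeholders.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Source processor: records from the "text-input" topic enter the topology as a stream.
        KStream<String, String> input = builder.stream("text-input");
        // Stream processor node: transforms one input record at a time.
        KStream<String, String> uppercased = input.mapValues(v -> v.toUpperCase());
        // Sink processor: the transformed records flow out to the "uppercased-output" topic.
        uppercased.to("uppercased-output");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}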
There are two special processors in the topology:
@@ -161,21 +161,20 @@
<p>
Any stream processing technology must therefore provide <strong>first-class support for streams and tables</strong>.
Kafka's Streams API provides such functionality through its core abstractions for
<code class="interpreted-text" data-role="ref">streams &lt;streams_concepts_kstream&gt;</code> and
<code class="interpreted-text" data-role="ref">tables &lt;streams_concepts_ktable&gt;</code>, which we will talk about in a minute.
Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
the so-called stream-table duality.
And Kafka exploits this duality in many ways: for example, to make your applications
<code class="interpreted-text" data-role="ref">elastic &lt;streams_developer-guide_execution-scaling&gt;</code>,
to support <code class="interpreted-text" data-role="ref">fault-tolerant stateful processing &lt;streams_developer-guide_state-store_fault-tolerance&gt;</code>,
or to run <code class="interpreted-text" data-role="ref">interactive queries &lt;streams_concepts_interactive-queries&gt;</code>
<a id="streams_concepts_kstream" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_kstream">streams</a>
and <a id="streams_concepts_ktable" href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_ktable">tables</a>,
which we will talk about in a minute. Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
the so-called stream-table duality. And Kafka exploits this duality in many ways: for example, to make your applications
<a id="streams-developer-guide-execution-scaling" href="/{{version}}/documentation/streams/developer-guide/running-app#elastic-scaling-of-your-application">elastic</a>,
to support <a id="streams_architecture_recovery" href="/{{version}}/documentation/streams/architecture#streams_architecture_recovery">fault-tolerant stateful processing</a>,
or to run <a id="streams-developer-guide-interactive-queries" href="/{{version}}/documentation/streams/developer-guide/interactive-queries#interactive-queries">interactive queries</a>
against your application's latest processing results. And, beyond its internal usage, the Kafka Streams API
also allows developers to exploit this duality in their own applications.
</p>
<p>
Before we discuss concepts such as <code class="interpreted-text" data-role="ref">aggregations &lt;streams_concepts_aggregations&gt;</code>
in Kafka Streams we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
Before we discuss concepts such as <a id="streams-developer-guide-dsl-aggregating" href="/{{version}}/documentation/streams/developer-guide/dsl-api#aggregating">aggregations</a>
in Kafka Streams, we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
Essentially, this duality means that a stream can be viewed as a table, and a table can be viewed as a stream.
</p>
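As a minimal sketch of this duality in the Kafka Streams Java DSL (the topic names are placeholders, and String default serdes are assumed for the input): aggregating a record stream yields a continuously updated table, and every update to that table is itself a record in a changelog stream that can be written back out.

// Sketch of the stream-table duality: stream -> table via aggregation, table -> stream via its changelog.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class StreamTableDualityExample {
    public static Topology buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        // A stream of page-view events, keyed by user id ("page-views" is a placeholder topic).
        KStream<String, String> views = builder.stream("page-views");

        // Stream viewed as a table: the aggregation turns the event stream into a
        // continuously updated table of per-user view counts.
        KTable<String, Long> viewCounts = views
                .groupByKey()
                .count();

        // Table viewed as a stream: each table update is a changelog record, written
        // back to a topic ("page-view-counts" is a placeholder topic).
        viewCounts.toStream().to("page-view-counts", Produced.with(Serdes.String(), Serdes.Long()));

        return builder.build();
    }
}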
