<li>A <b>stream</b> is the most important abstraction provided by Kafka Streams: it represents an unbounded, continuously updating data set. A stream is an ordered, replayable, and fault-tolerant sequence of immutable data records, where a <b>data record</b> is defined as a key-value pair.</li>
<li>A <b>stream processing application</b> is any program that makes use of the Kafka Streams library. It defines its computational logic through one or more <b>processor topologies</b>, where a processor topology is a graph of stream processors (nodes) that are connected by streams (edges).</li>
<li>A <aid="defining-a-stream-processor"href="/{{version}}/documentation/streams/developer-guide/processor-api#defining-a-stream-processor"><b>stream processor</b></a> is a node in the processor topology; it represents a processing step to transform data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and may subsequently produce one or more output records to its downstream processors. </li>
</ul>
There are two special processors in the topology:
<ul>
    <li><b>Source Processor</b>: A source processor is a special type of stream processor that does not have any upstream processors. It produces an input stream to its topology from one or multiple Kafka topics by consuming records from these topics and forwarding them to its downstream processors.</li>
    <li><b>Sink Processor</b>: A sink processor is a special type of stream processor that does not have downstream processors. It sends any received records from its upstream processors to a specified Kafka topic.</li>
</ul>
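<p>
    To make these concepts concrete, the following is a minimal sketch, in Java, of a Kafka Streams application whose topology
    consists of a source processor, one intermediate stream processor, and a sink processor. The topic names
    ("input-topic", "output-topic"), application id, and broker address are placeholders chosen for illustration.
</p>
<pre><code class="language-java">
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class TopologySketch {
    public static void main(final String[] args) {
        // Each operation below becomes a stream processor (node) in the processor topology;
        // the streams connecting them are the edges.
        final StreamsBuilder builder = new StreamsBuilder();

        final KStream&lt;String, String&gt; input = builder.stream("input-topic"); // source processor
        input.mapValues(value -> value.toUpperCase())                            // stream processor
             .to("output-topic");                                                // sink processor

        // Placeholder configuration for this sketch.
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-sketch-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
</code></pre>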
<p>
Any stream processing technology must therefore provide <strong>first-class support for streams and tables</strong>.
Kafka's Streams API provides such functionality through its core abstractions for
and <aid="streams_concepts_ktable"href="/{{version}}/documentation/streams/developer-guide/dsl-api#streams_concepts_ktable">tables</a>,
which we will talk about in a minute. Now, an interesting observation is that there is actually a <strong>close relationship between streams and tables</strong>,
the so-called stream-table duality. And Kafka exploits this duality in many ways: for example,
to support <a id="streams_architecture_recovery" href="/{{version}}/documentation/streams/architecture#streams_architecture_recovery">fault-tolerant stateful processing</a>,
or to run <a id="streams-developer-guide-interactive-queries" href="/{{version}}/documentation/streams/developer-guide/interactive-queries#interactive-queries">interactive queries</a>
against your application's latest processing results. And, beyond its internal usage, the Kafka Streams API
also allows developers to exploit this duality in their own applications.
</p>
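<p>
    To give a first impression of these two abstractions and their duality, here is a brief sketch in Java (the topic names are again
    placeholders; configuration and startup are omitted and would look like the earlier sketch): it reads one topic as a record stream
    and another as a table, aggregates the stream into a continuously updated table, and turns that table back into a stream.
</p>
<pre><code class="language-java">
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class StreamTableDualitySketch {
    public static void defineTopology(final StreamsBuilder builder) {
        // A KStream is a record stream: each record is an independent, immutable fact.
        final KStream&lt;String, String&gt; clicks = builder.stream("user-clicks");

        // A KTable interprets its input topic as a changelog: a later record for the same key
        // is an update of the previous value for that key.
        final KTable&lt;String, String&gt; profiles = builder.table("user-profiles");

        // Stream to table: aggregating a record stream yields a continuously updated table.
        final KTable&lt;String, Long&gt; clicksPerUser = clicks.groupByKey().count();

        // Table to stream: a table's changelog can be viewed as a record stream again.
        clicksPerUser.toStream().to("clicks-per-user", Produced.with(Serdes.String(), Serdes.Long()));
    }
}
</code></pre>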
<p>
Before we discuss concepts such as <a id="streams-developer-guide-dsl-aggregating" href="/{{version}}/documentation/streams/developer-guide/dsl-api#aggregating">aggregations</a>
in Kafka Streams, we must first introduce <strong>tables</strong> in more detail, and talk about the aforementioned stream-table duality.
Essentially, this duality means that a stream can be viewed as a table, and a table can be viewed as a stream.