<spanid="streams-write-app"></span><h1>Writing a Streams Application<aclass="headerlink"href="#writing-a-streams-application"title="Permalink to this headline"></a></h1>
<li><aclass="reference internal"href="#libraries-and-maven-artifacts"id="id1">Libraries and Maven artifacts</a></li>
<li><aclass="reference internal"href="#using-kafka-streams-within-your-application-code"id="id2">Using Kafka Streams within your application code</a></li>
<p>The computational logic of a Kafka Streams application is defined as a <a class="reference internal" href="../core-concepts#streams_topology"><span class="std std-ref">processor topology</span></a>, which is a graph of stream processors (nodes) and streams (edges). You can define the processor topology with the Kafka Streams APIs:</p>
<dl class="docutils">
<dt>Kafka Streams DSL</dt>
<dd>A high-level API that provides the most common data transformation operations such as <code class="docutils literal"><span class="pre">map</span></code>, <code class="docutils literal"><span class="pre">filter</span></code>, <code class="docutils literal"><span class="pre">join</span></code>, and <code class="docutils literal"><span class="pre">aggregations</span></code> out of the box. The DSL is the recommended starting point for developers new to Kafka Streams, and should cover many use cases and stream processing needs. If you're writing a Scala application, you can use the <a href="dsl-api.html#scala-dsl"><span class="std std-ref">Kafka Streams DSL for Scala</span></a> library, which removes much of the Java/Scala interoperability boilerplate that comes with working directly with the Java DSL. See the first sketch after this list for a brief example.</dd>
<dt>Processor API</dt>
<dd>A low-level API that lets you add and connect processors as well as interact directly with state stores. The Processor API provides you with even more flexibility than the DSL, but at the expense of requiring more manual work on the part of the application developer (e.g., more lines of code). See the second sketch after this list for a brief example.</dd>
</dl>
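<p>To make the difference concrete, here is a minimal DSL sketch. This example is for illustration only; the topic names <code class="docutils literal"><span class="pre">words</span></code> and <code class="docutils literal"><span class="pre">uppercased-words</span></code> are assumptions:</p>
<div class="highlight-java"><div class="highlight"><pre><span></span>import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

// Read from a topic, transform each record, and write to another topic.
// The topic names are hypothetical.
StreamsBuilder builder = new StreamsBuilder();
builder.&lt;String, String&gt;stream("words")
       .filter((key, value) -&gt; value != null &amp;&amp; !value.isEmpty())
       .mapValues(value -&gt; value.toUpperCase())
       .to("uppercased-words");
Topology topology = builder.build();
</pre></div></div>
<p>And a comparable Processor API sketch, again with assumed names; <code class="docutils literal"><span class="pre">MyProcessor</span></code> stands in for your own implementation of the <code class="docutils literal"><span class="pre">Processor</span></code> interface:</p>
<div class="highlight-java"><div class="highlight"><pre><span></span>import org.apache.kafka.streams.Topology;

// Wire a source node, a custom processor node, and a sink node by name.
// "MyProcessor" is a hypothetical Processor implementation; topic names are assumptions.
Topology topology = new Topology();
topology.addSource("Source", "input-topic");
topology.addProcessor("Process", MyProcessor::new, "Source");
topology.addSink("Sink", "output-topic", "Process");
</pre></div></div>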
<td>(Optional) Kafka Streams DSL for Scala library to write Scala Kafka Streams applications. When not using SBT you will need to suffix the artifact ID with the correct version of Scala your application is using (<code class="docutils literal"><span class="pre">_2.11</span></code>, <code class="docutils literal"><span class="pre">_2.12</span></code>).</td>
<pclass="last">See the section <aclass="reference internal"href="datatypes.html#streams-developer-guide-serdes"><spanclass="std std-ref">Data Types and Serialization</span></a> for more information about Serializers/Deserializers.</p>
</div>
<p>Example <code class="docutils literal"><span class="pre">pom.xml</span></code> snippet when using Maven:</p>
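<p>A minimal sketch of such a snippet; the <code class="docutils literal"><span class="pre">${kafka.version}</span></code> property is a placeholder that you should set to the Kafka release your application targets:</p>
<div class="highlight-xml"><div class="highlight"><pre><span></span>&lt;dependency&gt;
    &lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
    &lt;artifactId&gt;kafka-streams&lt;/artifactId&gt;
    &lt;version&gt;${kafka.version}&lt;/version&gt;
&lt;/dependency&gt;
&lt;!-- Optional: only needed when writing your application in Scala (note the Scala version suffix) --&gt;
&lt;dependency&gt;
    &lt;groupId&gt;org.apache.kafka&lt;/groupId&gt;
    &lt;artifactId&gt;kafka-streams-scala_2.12&lt;/artifactId&gt;
    &lt;version&gt;${kafka.version}&lt;/version&gt;
&lt;/dependency&gt;
</pre></div></div>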
<h2>Using Kafka Streams within your application code<a class="headerlink" href="#using-kafka-streams-within-your-application-code" title="Permalink to this headline"></a></h2>
<p>You can call Kafka Streams from anywhere in your application code, but usually these calls are made within the <code class="docutils literal"><span class="pre">main()</span></code> method of
your application, or some variant thereof. The basic elements of defining a processing topology within your application
are described below.</p>
<p>First, you must create an instance of <code class="docutils literal"><span class="pre">KafkaStreams</span></code>.</p>
<ulclass="simple">
<li>The first argument of the <codeclass="docutils literal"><spanclass="pre">KafkaStreams</span></code> constructor takes a topology (either <codeclass="docutils literal"><spanclass="pre">StreamsBuilder#build()</span></code> for the
<aclass="reference internal"href="dsl-api.html#streams-developer-guide-dsl"><spanclass="std std-ref">DSL</span></a> or <codeclass="docutils literal"><spanclass="pre">Topology</span></code> for the
<aclass="reference internal"href="processor-api.html#streams-developer-guide-processor-api"><spanclass="std std-ref">Processor API</span></a>) that is used to define a topology.</li>
<li>The second argument is an instance of <codeclass="docutils literal"><spanclass="pre">java.util.Properties</span></code>, which defines the configuration for this specific topology.</li>
<spanclass="c1">// Use the builders to define the actual processing topology, e.g. to specify</span>
<spanclass="c1">// from which input topics to read, which stream operations (filter, map, etc.)</span>
<spanclass="c1">// should be called, and so on. We will cover this in detail in the subsequent</span>
<spanclass="c1">// sections of this Developer Guide.</span>
<spanclass="n">StreamsBuilder</span><spanclass="n">builder</span><spanclass="o">=</span><spanclass="o">...;</span><spanclass="c1">// when using the DSL</span>
<spanclass="n">Topology</span><spanclass="n">topology</span><spanclass="o">=</span><spanclass="o">...;</span><spanclass="c1">// when using the Processor API</span>
<spanclass="c1">// Use the configuration to tell your application where the Kafka cluster is,</span>
<spanclass="c1">// which Serializers/Deserializers to use by default, to specify security settings,</span>
<p>At this point, internal structures are initialized, but the processing is not started yet.
You have to explicitly start the Kafka Streams thread by calling the <code class="docutils literal"><span class="pre">KafkaStreams#start()</span></code> method:</p>
<divclass="highlight-java"><divclass="highlight"><pre><span></span><spanclass="c1">// Start the Kafka Streams threads</span>
<p>For more information, see <a class="reference internal" href="../architecture.html#streams_architecture_tasks"><span class="std std-ref">Stream Partitions and Tasks</span></a> and <a class="reference internal" href="../architecture.html#streams-architecture-threads"><span class="std std-ref">Threading Model</span></a>.</p>
<p>To catch any unexpected exceptions, you can set a <code class="docutils literal"><span class="pre">java.lang.Thread.UncaughtExceptionHandler</span></code> before you start the
application. This handler is called whenever a stream thread is terminated by an unexpected exception:</p>
<divclass="highlight-java"><divclass="highlight"><pre><span></span><spanclass="c1">// Java 8+, using lambda expressions</span>