From e79d9af3cfbb8884e00424f84f3c687114497998 Mon Sep 17 00:00:00 2001
From: Dongjoon Hyun
diff --git a/docs/connect.html b/docs/connect.html
index dc6ad6e9eb5..88b8c2b5c34 100644
--- a/docs/connect.html
+++ b/docs/connect.html
@@ -108,7 +108,7 @@ This guide describes how developers can write new connectors for Kafka Connect t
To copy data between Kafka and another system, users create a Connector for the system they want to pull data from or push data to. Connectors come in two flavors: SourceConnectors import data from another system (e.g. JDBCSourceConnector would import a relational database into Kafka) and SinkConnectors export data (e.g. HDFSSinkConnector would export the contents of a Kafka topic to an HDFS file).
-Connectors do not perform any data copying themselves: their configuration describes the data to be copied, and the Connector is responsible for breaking that job into a set of Tasks that can be distributed to workers. These Tasks also come in two corresponding flavors: SourceTask and SinkTask.
+Connectors do not perform any data copying themselves: their configuration describes the data to be copied, and the Connector is responsible for breaking that job into a set of Tasks that can be distributed to workers. These Tasks also come in two corresponding flavors: SourceTask and SinkTask.
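For illustration, here is a rough sketch (not taken from this patch) of how a file-copying connector might split its job across tasks via the framework's taskConfigs() hook; the filesToCopy list and the "file" config key are illustrative:

@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
    // Hypothetical: assign one file per task, up to maxTasks.
    List<Map<String, String>> configs = new ArrayList<>();
    for (String file : filesToCopy.subList(0, Math.min(maxTasks, filesToCopy.size())))
        configs.add(Collections.singletonMap("file", file));
    return configs;
}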
With an assignment in hand, each Task must copy its subset of the data to or from Kafka. In Kafka Connect, it should always be possible to frame these assignments as a set of input and output streams consisting of records with consistent schemas. Sometimes this mapping is obvious: each file in a set of log files can be considered a stream with each parsed line forming a record using the same schema and offsets stored as byte offsets in the file. In other cases it may require more effort to map to this model: a JDBC connector can map each table to a stream, but the offset is less clear. One possible mapping uses a timestamp column to generate queries incrementally returning new data, and the last queried timestamp can be used as the offset.
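For instance, a minimal sketch (not from this patch) of how the JDBC case might encode that mapping; the "table" and "timestamp" key names are illustrative:

// One stream per table; the offset is the last timestamp already queried.
Map<String, String> sourcePartition = Collections.singletonMap("table", "orders");
Map<String, Long> sourceOffset = Collections.singletonMap("timestamp", 1462349600000L);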
@@ -242,11 +242,11 @@ public List<SourceRecord> poll() throws InterruptedException {
Again, we've omitted some details, but we can see the important steps: the poll() method is going to be called repeatedly, and for each call it will loop trying to read records from the file. For each line it reads, it also tracks the file offset. It uses this information to create an output SourceRecord with four pieces of information: the source partition (there is only one, the single file being read), source offset (byte offset in the file), output topic name, and output value (the line, and we include a schema indicating this value will always be a string). Other variants of the SourceRecord constructor can also include a specific output partition and a key.
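Condensed into a rough sketch (the readLine() helper, FILENAME_FIELD, POSITION_FIELD, and the topic variable are illustrative, not from this patch):

@Override
public List<SourceRecord> poll() throws InterruptedException {
    List<SourceRecord> records = new ArrayList<>();
    String line = readLine();  // hypothetical helper; may sleep briefly waiting for new data
    if (line != null) {
        Map<String, String> sourcePartition = Collections.singletonMap(FILENAME_FIELD, filename);
        Map<String, Long> sourceOffset = Collections.singletonMap(POSITION_FIELD, streamOffset);
        records.add(new SourceRecord(sourcePartition, sourceOffset, topic, Schema.STRING_SCHEMA, line));
    }
    return records;
}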
-Note that this implementation uses the normal Java InputStream interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic poll() interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.
+Note that this implementation uses the normal Java InputStream interface and may sleep if data is not available. This is acceptable because Kafka Connect provides each task with a dedicated thread. While task implementations have to conform to the basic poll() interface, they have a lot of flexibility in how they are implemented. In this case, an NIO-based implementation would be more efficient, but this simple approach works, is quick to implement, and is compatible with older versions of Java.
Sink Tasks
-The previous section described how to implement a simple SourceTask. Unlike SourceConnector and SinkConnector, SourceTask and SinkTask have very different interfaces because SourceTask uses a pull interface and SinkTask uses a push interface. Both share the common lifecycle methods, but the SinkTask interface is quite different:
+The previous section described how to implement a simple SourceTask. Unlike SourceConnector and SinkConnector, SourceTask and SinkTask have very different interfaces because SourceTask uses a pull interface and SinkTask uses a push interface. Both share the common lifecycle methods, but the SinkTask interface is quite different:
public abstract class SinkTask implements Task {
@@ -257,17 +257,17 @@ public abstract void put(Collection<SinkRecord> records);
public abstract void flush(Map<TopicPartition, Long> offsets);
-The SinkTask documentation contains full details, but this interface is nearly as simple as the SourceTask. The put() method should contain most of the implementation, accepting sets of SinkRecords, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The SinkRecords contain essentially the same information as SourceRecords: Kafka topic, partition, offset and the event key and value.
+The SinkTask documentation contains full details, but this interface is nearly as simple as the SourceTask. The put() method should contain most of the implementation, accepting sets of SinkRecords, performing any required translation, and storing them in the destination system. This method does not need to ensure the data has been fully written to the destination system before returning. In fact, in many cases internal buffering will be useful so an entire batch of records can be sent at once, reducing the overhead of inserting events into the downstream data store. The SinkRecords contain essentially the same information as SourceRecords: Kafka topic, partition, offset and the event key and value.
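A hedged sketch of that buffering approach (the buffer, BATCH_SIZE, and the writeBatch() helper are illustrative, not from this patch):

private final List<SinkRecord> buffer = new ArrayList<>();
private static final int BATCH_SIZE = 500;  // illustrative threshold

@Override
public void put(Collection<SinkRecord> records) {
    buffer.addAll(records);
    if (buffer.size() >= BATCH_SIZE) {
        writeBatch(buffer);  // hypothetical helper that issues one bulk insert downstream
        buffer.clear();
    }
}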
-The flush() method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The offsets parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once
-delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the flush() operation atomically commits the data and offsets to a final location in HDFS.
+The flush() method is used during the offset commit process, which allows tasks to recover from failures and resume from a safe point such that no events will be missed. The method should push any outstanding data to the destination system and then block until the write has been acknowledged. The offsets parameter can often be ignored, but is useful in some cases where implementations want to store offset information in the destination store to provide exactly-once
+delivery. For example, an HDFS connector could do this and use atomic move operations to make sure the flush() operation atomically commits the data and offsets to a final location in HDFS.
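Continuing the sketch above (waitForAcks() is a hypothetical stand-in for whatever acknowledgement mechanism the destination system offers):

@Override
public void flush(Map<TopicPartition, Long> offsets) {
    if (!buffer.isEmpty()) {
        writeBatch(buffer);  // push any outstanding data
        buffer.clear();
    }
    waitForAcks();  // block until the destination has acknowledged the writes
}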
Resuming from Previous Offsets
-The SourceTask implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit process is completely automated by the framework, but only the connector knows how to seek back to the right position in the input stream to resume from that location.
+The SourceTask implementation included a stream ID (the input filename) and offset (position in the file) with each record. The framework uses this to commit offsets periodically so that in the case of a failure, the task can recover and minimize the number of events that are reprocessed and possibly duplicated (or to resume from the most recent offset if Kafka Connect was stopped gracefully, e.g. in standalone mode or due to a job reconfiguration). This commit process is completely automated by the framework, but only the connector knows how to seek back to the right position in the input stream to resume from that location.
-To correctly resume upon startup, the task can use the SourceContext passed into its initialize() method to access the offset data. In initialize(), we would add a bit more code to read the offset (if it exists) and seek to that position:
+To correctly resume upon startup, the task can use the SourceContext passed into its initialize() method to access the offset data. In initialize(), we would add a bit more code to read the offset (if it exists) and seek to that position:
stream = new FileInputStream(filename);
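A sketch of the remaining lines, assuming the task recorded its offsets under illustrative FILENAME_FIELD and POSITION_FIELD keys when producing records:

Map<String, Object> offset = context.offsetStorageReader().offset(
        Collections.singletonMap(FILENAME_FIELD, filename));
if (offset != null) {
    Long lastRecordedOffset = (Long) offset.get(POSITION_FIELD);
    if (lastRecordedOffset != null)
        stream.skip(lastRecordedOffset);  // skip past data that was already committed
}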
@@ -285,7 +285,7 @@ Of course, you might need to read many keys for each of the input streams. The <
Kafka Connect is intended to define bulk data copying jobs, such as copying an entire database rather than creating many jobs to copy each table individually. One consequence of this design is that the set of input or output streams for a connector can vary over time.
-Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the
-ConnectorContext object that reconfiguration is necessary. For example, in a SourceConnector:
+Source connectors need to monitor the source system for changes, e.g. table additions/deletions in a database. When they pick up changes, they should notify the framework via the ConnectorContext object that reconfiguration is necessary. For example, in a SourceConnector:
@@ -293,11 +293,11 @@ if (inputsChanged())
this.context.requestTaskReconfiguration();
-The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the SourceConnector this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.
+The framework will promptly request new configuration information and update the tasks, allowing them to gracefully commit their progress before reconfiguring them. Note that in the SourceConnector this monitoring is currently left up to the connector implementation. If an extra thread is required to perform this monitoring, the connector must allocate it itself.
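A minimal sketch of such a thread (the running flag, inputsChanged() check, and polling interval are all illustrative, not from this patch):

Thread monitor = new Thread(() -> {
    while (running) {                      // hypothetical volatile flag owned by the connector
        if (inputsChanged())               // hypothetical check, e.g. re-listing database tables
            context.requestTaskReconfiguration();
        try {
            Thread.sleep(60_000);          // illustrative polling interval
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
});
monitor.setDaemon(true);
monitor.start();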
-Ideally this code for monitoring changes would be isolated to the Connector and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the Task encounters the issue before the Connector, which will be common if the Connector needs to poll for changes, the Task will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.
+Ideally this code for monitoring changes would be isolated to the Connector and tasks would not need to worry about them. However, changes can also affect tasks, most commonly when one of their input streams is destroyed in the input system, e.g. if a table is dropped from a database. If the Task encounters the issue before the Connector, which will be common if the Connector needs to poll for changes, the Task will need to handle the subsequent error. Thankfully, this can usually be handled simply by catching and handling the appropriate exception.
-SinkConnectors usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. SinkTasks should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple SinkTasks seeing a new input stream for the first time and simultaneoulsy trying to create the new resource. SinkConnectors, on the other hand, will generally require no special code for handling a dynamic set of streams.
+SinkConnectors usually only have to handle the addition of streams, which may translate to new entries in their outputs (e.g., a new database table). The framework manages any changes to the Kafka input, such as when the set of input topics changes because of a regex subscription. SinkTasks should expect new input streams, which may require creating new resources in the downstream system, such as a new table in a database. The trickiest situation to handle in these cases may be conflicts between multiple SinkTasks seeing a new input stream for the first time and simultaneously trying to create the new resource. SinkConnectors, on the other hand, will generally require no special code for handling a dynamic set of streams.
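One hedged way to tolerate that race (the client call and exception type are placeholders for whatever the destination system provides):

try {
    client.createTable(tableName);  // hypothetical destination-system call
} catch (TableAlreadyExistsException e) {
    // Another task won the race; the table now exists, so continue with writes.
}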
Working with Schemas
@@ -305,7 +305,7 @@ The FileStream connectors are good examples because they are simple, but they al
To create more complex data, you'll need to work with the Kafka Connect data API. Most structured records will need to interact with two classes in addition to primitive types: Schema and Struct.
-The API documentation provides a complete reference, but here is a simple example creating a Schema and Struct:
+The API documentation provides a complete reference, but here is a simple example creating a Schema and Struct:
Schema schema = SchemaBuilder.struct().name(NAME)
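A fuller sketch of the same pattern (the field names and values are illustrative):

Schema schema = SchemaBuilder.struct().name(NAME)
    .field("name", Schema.STRING_SCHEMA)
    .field("age", Schema.INT32_SCHEMA)
    .field("admin", SchemaBuilder.bool().defaultValue(false).build())
    .build();

Struct struct = new Struct(schema)
    .put("name", "Barbara Liskov")
    .put("age", 75);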
@@ -322,7 +322,7 @@ Struct struct = new Struct(schema)
If you are implementing a source connector, you'll need to decide when and how to create schemas. Where possible, you should avoid recomputing them as much as possible. For example, if your connector is guaranteed to have a fixed schema, create it statically and reuse a single instance.
-However, many connectors will have dynamic schemas. One simple example of this is a database connector. Considering even just a single table, the schema will not be predefined for the entire connector (as it varies from table to table). But it also may not be fixed for a single table over the lifetime of the connector since the user may execute an
-ALTER TABLE command. The connector must be able to detect these changes and react appropriately.
+However, many connectors will have dynamic schemas. One simple example of this is a database connector. Considering even just a single table, the schema will not be predefined for the entire connector (as it varies from table to table). But it also may not be fixed for a single table over the lifetime of the connector since the user may execute an ALTER TABLE command. The connector must be able to detect these changes and react appropriately.
Sink connectors are usually simpler because they are consuming data and therefore do not need to create schemas. However, they should take just as much care to validate that the schemas they receive have the expected format. When the schema does not match -- usually indicating the upstream producer is generating invalid data that cannot be correctly translated to the destination system -- sink connectors should throw an exception to indicate this error to the system.
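A short sketch of that kind of defensive check (expecting a STRUCT is illustrative; DataException is the framework's standard error type for bad data):

@Override
public void put(Collection<SinkRecord> records) {
    for (SinkRecord record : records) {
        if (record.valueSchema() == null || record.valueSchema().type() != Schema.Type.STRUCT)
            throw new DataException("Unexpected schema at offset " + record.kafkaOffset());
        // ... translate and buffer the record ...
    }
}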
diff --git a/docs/implementation.html b/docs/implementation.html
index ecd99e708ec..be81227c906 100644
--- a/docs/implementation.html
+++ b/docs/implementation.html
@@ -90,7 +90,7 @@ class SimpleConsumer {
* Get a list of valid offsets (up to maxSize) before the given time.
* The result is a list of offsets, in descending order.
* @param time: time in millisecs,
- * if set to OffsetRequest$.MODULE$.LATIEST_TIME(), get from the latest offset available.
+ * if set to OffsetRequest$.MODULE$.LATEST_TIME(), get from the latest offset available.
* if set to OffsetRequest$.MODULE$.EARLIEST_TIME(), get from the earliest offset available.
*/
public long[] getOffsetsBefore(String topic, int partition, long time, int maxNumOffsets);
@@ -292,7 +292,7 @@ Since the broker registers itself in ZooKeeper using ephemeral znodes, this regi
-/brokers/topics/[topic]/[0...N] --> nPartions (ephemeral node)
+/brokers/topics/[topic]/[0...N] --> nPartitions (ephemeral node)
diff --git a/docs/migration.html b/docs/migration.html
index 2da6a7e26ac..5240d866433 100644
--- a/docs/migration.html
+++ b/docs/migration.html
@@ -27,7 +27,7 @@
> bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --list
test-consumer-group
@@ -156,7 +156,7 @@ test-foo 0 1
-When youre using the new consumer-groups API where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
+When you're using the new consumer-groups API where the broker handles coordination of partition handling and rebalance, you can manage the groups with the "--new-consumer" flags:
> bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server broker1:9092 --list