Just a doc change
Author: John Eismeier <john.eismeier@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #4573 from jeis2497052/trunk
#5804 removed `Windows#segmentInterval`, but did not remove all references to it.
Author: John Roesler <john@confluent.io>
Reviewers: Damian Guy <damian.guy@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5806 from vvcephei/fix-missing-segment-interval
While working on the documentation updates, I realized the Streams Scala API needs
to be updated for the addition of Grouped.
Added a test for Grouped.scala and ran all streams-scala tests and streams tests.
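For context, a minimal sketch of the Grouped API that the Scala wrapper mirrors, written against the Java DSL; the topic and operator names here are hypothetical:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class GroupedExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // "words-input" is a hypothetical topic name.
        KStream<String, String> words = builder.stream("words-input");

        // Grouped bundles the serdes (and optionally a name) for the grouping step.
        KTable<String, Long> counts = words
                .groupByKey(Grouped.with("words-grouped", Serdes.String(), Serdes.String()))
                .count();
    }
}
```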
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <john@confluent.io>, Guozhang Wang <guozhang@confluent.io>
Stop using current system time by default, as it introduces non-determinism.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>
Reviewers: Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>, Nikolay Izhikov <nizhikov@apache.org>
Reviewers: Satish Duggana <sduggana@hortonworks.com>, Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <john@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>
In recent PRs, we have been confused about the proper usage of
StatefulProcessorNode (#5731, #5737).
This change disambiguates it.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>
In unrelated recent work, I noticed some warnings about the missing type parameters on ProcessorParameters.
While investigating, it seems there was a bug in the creation of repartition topics.
Reviewers: Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>
Reviewers: John Roesler <john@confluent.io>, Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>
KIP-372 (allow naming all internal topics) was designed and developed concurrently with suppression.
Since suppression introduces a new internal topic, it also needs to be nameable.
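A minimal sketch of what nameable suppression looks like from the DSL side; the operator name, time limit, and record limit below are illustrative:

```java
import java.time.Duration;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;

public class SuppressNamingExample {
    // Wraps a hypothetical aggregation result; only the suppress() call matters here.
    static <K, V> KTable<K, V> suppressed(KTable<K, V> aggregate) {
        // withName() makes the internal suppression buffer (and hence its internal topic) nameable.
        return aggregate.suppress(
                Suppressed.<K>untilTimeLimit(Duration.ofMinutes(5), BufferConfig.maxRecords(1000))
                          .withName("my-suppression"));
    }
}
```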
Reviewers: Guozhang Wang <guozhang@confluent.io>, Matthias J. Sax <matthias@confluent.io>
This is Part 4 of suppression (durability)
Part 1 was #5567 (the API)
Part 2 was #5687 (the tests)
Part 3 was #5693 (in-memory buffering)
Implement a changelog for the suppression buffer so that the buffer state may be recovered on restart or recovery.
As of this PR, suppression is suitable for general usage.
Reviewers: Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>, Matthias J. Sax <matthias@confluent.io>
The Suppression buffer stores the full record context, not just the key and value,
so its changelog/restore loop will also need to preserve this information.
This change is a precondition to that, creating an option to register a
state restore callback to receive the full consumer record.
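To make the contrast concrete, here is an illustrative interface shape (defined only for this example, not the actual Kafka callback type) showing a restore path that sees the whole consumer record:

```java
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Illustrative only (not the actual Kafka interface): a restore callback shape that
// receives whole ConsumerRecords, so timestamp, headers, and offset survive the restore
// path, unlike the classic StateRestoreCallback, which only sees the key and value bytes.
interface RecordRestoreCallback {
    void restoreBatch(Collection<ConsumerRecord<byte[], byte[]>> records);
}
```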
Reviewers: Bill Bejeck <bill@confluent.io>, Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
This is Part 2 of suppression.
Part 1 was #5567
In an effort to control the scope of the review, this PR is just the tests for buffered suppression.
Reviewers: Bill Bejeck <bill@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Part 1 of the suppression API.
* add the DSL suppress method and config objects (a usage sketch follows this list)
* add the processor, but only in "identity" mode (i.e., it will forward only if the suppression spec says to forward immediately)
* add tests
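A minimal usage sketch of the DSL surface described above, assuming a windowed count; the topic name, window size, and grace period are illustrative:

```java
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class SuppressExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("events"); // hypothetical topic

        // Emit each windowed count only once, after its window closes.
        KTable<Windowed<String>, Long> finalCounts = events
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(1)).grace(Duration.ofSeconds(30)))
                .count()
                .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()));
    }
}
```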
Reviewers: Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
This patch implements KIP-336. It adds a default implementation to the Serializer/Deserializer interfaces to support the use of headers, and it deprecates the ExtendedSerializer and ExtendedDeserializer interfaces for later removal.
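For illustration, a sketch of a custom serializer that overrides the new headers-aware overload instead of implementing the deprecated ExtendedSerializer; the class name and header key are hypothetical:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.serialization.Serializer;

// A sketch of a custom Serializer relying on the new default, headers-aware overload
// rather than the deprecated ExtendedSerializer interface.
public class UppercaseSerializer implements Serializer<String> {

    @Override
    public byte[] serialize(String topic, String data) {
        return data == null ? null : data.toUpperCase().getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] serialize(String topic, Headers headers, String data) {
        // Hypothetical header, for illustration: tag the record, then delegate.
        headers.add("serialized-by", "UppercaseSerializer".getBytes(StandardCharsets.UTF_8));
        return serialize(topic, data);
    }
}
```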
Reviewers: Satish Duggana <sduggana@hortonworks.com>, John Roesler <john@confluent.io>, Jason Gustafson <jason@confluent.io>
What changes were proposed in this pull request?
The atLeast(0) validators in StreamsConfig, ProducerConfig, and ConsumerConfig were replaced by SEND_BUFFER_LOWER_BOUND and RECEIVE_BUFFER_LOWER_BOUND from CommonClientConfigs, as shown in the sketch below.
How was this patch tested?
Three unit tests were added to KafkaStreamsTest
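A sketch of the kind of configuration the relaxed lower bound is meant to admit, assuming -1 remains the conventional "use the OS default" sentinel for socket buffer sizes:

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class BufferBoundExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "buffer-bound-demo"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker

        // -1 conventionally means "use the OS default socket buffer size";
        // an atLeast(0) validator would have rejected it.
        props.put(StreamsConfig.SEND_BUFFER_CONFIG, -1);
        props.put(StreamsConfig.RECEIVE_BUFFER_CONFIG, -1);

        new StreamsConfig(props); // validation happens here
    }
}
```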
Reviewers: Guozhang Wang <guozhang@confluent.io>, John Roesler <john@confluent.io>, Matthias J. Sax <mjsax@apache.org>
Increasing the number of unique keys to increase the likelihood that the test exposes KAFKA-7192.
Reviewers: Apurva Mehta <apurva@confluent.io>, Guozhang Wang <guozhang@confluent.io>, Bill Bejeck <bill@confluent.io>, John Roesler <john@confluent.io>
Removed ignore annotations from the upgrade tests. This PR includes the following changes for updating the upgrade tests:
* Uploaded new versions 0.10.2.2, 0.11.0.3, 1.0.2, 1.1.1, and 2.0.0 (in the associated Scala versions) to kafka-packages
* Updated versions in version.py, Dockerfile, and base.sh
* Added new versions to StreamsUpgradeTest.test_upgrade_downgrade_brokers, including version 2.0.0
* Added new versions to the StreamsUpgradeTest.test_simple_upgrade_downgrade test, excluding version 2.0.0
* Version 2.0.0 is excluded from the streams upgrade/downgrade test as StreamsConfig needs an update for the new version, requiring a KIP. Once the community votes the KIP in, a minor follow-up PR can be pushed to add the 2.0.0 version to the upgrade test.
* Fixed a minor bug in kafka-run-class.sh for the classpath in upgrade/downgrade tests across versions.
* Follow-on PRs for 0.10.2.x, 0.11.0.x, 1.0.x, 1.1.x, and 2.0.x will be pushed soon with the same updates required for the specific version.
Reviewers: Eno Thereska <eno.thereska@gmail.com>, John Roesler <vvcephei@users.noreply.github.com>, Guozhang Wang <wangguoz@gmail.com>, Matthias J. Sax <matthias@confluent.io>
Reviewers: Guozhang Wang <guozhang@confluent.io>, John Roesler <john@confluent.io>, Bill Bejeck <bill@confluent.io>, Eno Thereska <enother@amazon.com>
Currently, scala.Serdes.String, for example, invokes Serdes.String() once and caches the result.
However, the implementation of the String serde has a non-empty configure method that varies depending on whether it's used as a key or value serde, so we won't get correct behavior if we create one serde instance and use it for both keys and values.
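To illustrate why a single cached instance is unsafe, a Java sketch (the encodings are arbitrary) showing that separately configured instances are needed for keys and values:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

public class SerdeConfigureExample {
    public static void main(String[] args) {
        // The String serde reads different properties depending on the isKey flag,
        // e.g. "key.serializer.encoding" vs. "value.serializer.encoding", so one
        // cached instance cannot be configured correctly for both roles.
        Map<String, Object> configs = new HashMap<>();
        configs.put("key.serializer.encoding", "UTF-8");
        configs.put("value.serializer.encoding", "UTF-16");

        Serde<String> keySerde = Serdes.String();
        keySerde.configure(configs, true);    // configured as a key serde

        Serde<String> valueSerde = Serdes.String();
        valueSerde.configure(configs, false); // configured as a value serde
    }
}
```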
Reviewers: Bill Bejeck <bill@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
* Refactor the StreamThread main loop as follows:
1. Fetch from the consumer and enqueue data to tasks.
2. Check if any tasks should be processed via enforced processing.
3. Loop over processable tasks and process them for N iterations, then check 1) whether to commit, 2) whether to punctuate, and 3) whether consumer.poll needs to be called.
4. Even if there is no data to process in this iteration, still check whether a commit or punctuate is needed.
5. Finally, try to update standby tasks.
* Add an optimization to commit only when needed (i.e., at least one process() or punctuate() call was triggered since the last commit).
* Found and fixed a ProducerFencedException scenario: a producer.send() call never throws a ProducerFencedException directly, but it may throw a KafkaException whose cause is a ProducerFencedException (see the sketch after this list).
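A sketch of the resulting "check the cause" pattern; the helper method and surrounding types are simplified placeholders, not the actual StreamThread code:

```java
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class FencedSendExample {
    // Simplified placeholder, not the actual StreamThread code: shows the
    // "check the cause" pattern for detecting a fenced producer on send().
    static void send(Producer<byte[], byte[]> producer, ProducerRecord<byte[], byte[]> record) {
        try {
            producer.send(record);
        } catch (KafkaException e) {
            if (e.getCause() instanceof ProducerFencedException) {
                // Another instance with the same transactional.id took over;
                // surface the fencing explicitly instead of a generic error.
                throw (ProducerFencedException) e.getCause();
            }
            throw e;
        }
    }
}
```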
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <john@confluent.io>, Bill Bejeck <bill@confluent.io>
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <guozhang@confluent.io>, Bill Bejeck <bill@confluent.io>, Kamal Chandraprakash <kamal.chandraprakash@gmail.com>