Update to KIP-328.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <guozhang@confluent.io>, Ted Yu <yuzhihong@gmail.com>, Kamal Chandraprakash <kamal.chandraprakash@gmail.com>
Plus minor javadoc cleanups.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <guozhang@confluent.io>, John Roesler <john@confluent.io>
Due to a missing conversion to the kstream Predicate, the existing filter method in KTable.scala would result in a StackOverflowError.
This PR fixes the bug and adds testing for it.
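For illustration, here is a hypothetical Java-flavored sketch of the pitfall (the actual fix is in the Scala KTable wrapper; the class and names below are illustrative only): the wrapper's filter must convert the predicate and delegate to the wrapped Java KTable, and if the call resolves back to the wrapper's own filter instead, it recurses until a StackOverflowError.

```java
import java.util.function.BiPredicate;
import org.apache.kafka.streams.kstream.KTable;

// Hypothetical wrapper, not the actual KTable.scala code.
class KTableWrapper<K, V> {
    private final KTable<K, V> inner;

    KTableWrapper(final KTable<K, V> inner) {
        this.inner = inner;
    }

    KTableWrapper<K, V> filter(final BiPredicate<K, V> predicate) {
        // BUG pattern: return this.filter(predicate);   // resolves back to itself -> StackOverflowError
        // FIX pattern: convert the predicate and delegate to the wrapped Java KTable
        return new KTableWrapper<>(inner.filter(predicate::test));
    }
}
```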
Reviewers: Guozhang Wang <guozhang@confluent.io>, John Roesler <john@confluent.io>
Join in the Scala streams API is currently unusable in 2.0.0 as reported by @mowczare:
#5019 (comment)
This is due to an overload of it with the same signature in the first curried parameter list.
See the Scala compiler issue that didn't catch it: https://issues.scala-lang.org/browse/SI-2628
Reviewers: Debasish Ghosh <dghosh@acm.org>, Guozhang Wang <guozhang@confluent.io>, John Roesler <john@confluent.io>
This PR adds valueChangingOperation and mergeNode to StreamsGraphNode#toString
Reviewers: Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Updated two integration tests to use IntegrationTestUtils#waitUntilFinalKeyValueRecordsReceived to eliminate flaky test results.
Also, I updated the IntegrationTestUtils#waitUntilFinalKeyValueRecordsReceived method to support results that have the same key present with different values.
For testing, I ran the current suite of streams tests.
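For reference, a minimal sketch of how a test can use the helper after this change; the exact signature in IntegrationTestUtils may differ slightly from what is assumed here (consumer Properties, output topic name, expected final records), and the topic name and values are placeholders.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.integration.utils.IntegrationTestUtils;

class FinalRecordsCheck {
    static void assertFinalOutput(final String bootstrapServers) throws Exception {
        final Properties consumerConfig = new Properties();
        consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "final-records-check");
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);

        // Only the final value per key has to match; intermediate values for the
        // same key are tolerated, which is what makes the check non-flaky.
        final List<KeyValue<String, Long>> expected = Arrays.asList(
            new KeyValue<>("A", 3L),
            new KeyValue<>("B", 2L));
        IntegrationTestUtils.waitUntilFinalKeyValueRecordsReceived(consumerConfig, "output-topic", expected);
    }
}
```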
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
1) As titled, add a rewriteTopology that a) sets the application id, b) maybe disables caching, and c) adjusts for source KTables. These optimizations can hence be applied to both DSL- and PAPI-generated Topologies.
2) Defer the building of globalStateStores in rewriteTopology so that we can also disable caching. But we still need to build the state stores before InternalTopologyBuilder.build() since we should only build global stores once for all threads.
3) Added withCachingDisabled to StoreBuilder; this is a public API change (see the sketch after this list).
4) [Optional] Fixed the unit test config-setting functionality, and set the necessary configs to shorten unit test latency (it now drops from 5 min to 3.5 min on my laptop).
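A minimal sketch of the new StoreBuilder#withCachingDisabled() API (store name and serdes are arbitrary examples):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

class CachingDisabledExample {
    static StoreBuilder<KeyValueStore<String, Long>> countsStore() {
        return Stores
            .keyValueStoreBuilder(
                Stores.persistentKeyValueStore("counts"),   // hypothetical store name
                Serdes.String(),
                Serdes.Long())
            .withCachingDisabled();                         // new public API from this change
    }
}
```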
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <john@confluent.io>, Bill Bejeck <bill@confluent.io>, Ted Yu <yuzhihong@gmail.com>
This PR adds an optimization that eliminates multiple repartition topics: when the KStream resulting from a key-changing operation executes other methods using the new key, the repartition topics are reduced to one.
Note that this PR leaves in place the optimization for re-using a source topic as a changelog topic for source KTable instances. I'll have another follow-up PR to move the source topic optimization to a method within InternalStreamsBuilder so it can be performed in the same area of the code.
Additionally, the current value of StreamsConfig.OPTIMIZE is "all", and we'll need another KIP to change the value to 2.1.
An integration test, RepartitionOptimizingIntegrationTest, asserts the same results for an optimized topology with one repartition topic as for the un-optimized version with four repartition topics.
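For context, a minimal sketch of how an application opts into the optimization (application id, bootstrap servers, and the commented-out operations are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

class OptimizationOptIn {
    static Topology build() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "optimized-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // StreamsConfig.OPTIMIZE currently resolves to "all"
        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);

        final StreamsBuilder builder = new StreamsBuilder();
        // ... a selectKey(...) followed by several groupByKey()/count() calls would
        // normally create one repartition topic each; with optimization they share one.
        return builder.build(props);   // the Properties overload of build() applies the optimizations
    }
}
```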
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <john@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Part I of KIP-238:
* add grace period to Windows (see the example after this list)
* deprecate retention/maintainMs and segmentInterval from Windows
* record expired records in the store with a new metric
* record late record drops as a new metric instead of as a "skipped record"
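A minimal sketch of the new grace-period API, assuming the Duration-based overloads (window size and grace values are arbitrary examples):

```java
import java.time.Duration;
import org.apache.kafka.streams.kstream.TimeWindows;

class GraceExample {
    static TimeWindows windows() {
        // Keep the window open for updates for 10 minutes after window end; later
        // records are dropped and recorded via the new late-record-drop metric
        // instead of being counted as "skipped records".
        return TimeWindows
            .of(Duration.ofMinutes(5))
            .grace(Duration.ofMinutes(10));
    }
}
```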
Reviewers: Matthias J. Sax <matthias@confluent.io>, Bill Bejeck <bill@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
#5468 introduced a breaking API change that was actually avoidable. This PR re-introduces the old API as deprecated and alters the API introduced by #5468 to be consistent with the other methods.
Also fixed misc syntax problems:
- fix log statement in Topology Builder.
- addressed some warnings shown by Intellij
Reviewers: Viktor Somogyi <viktorsomogyi@gmail.com>, Satish Duggana <satishd@apache.org>, Matthias J. Sax <matthias@confluent.io>
While working on the 4th PR, I noticed that I had missed adding stores via the graph and had added them directly via the InternalStreamsBuilder instead. That's probably OK, but we should be consistent.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
While debugging the reported issue, I found that our current unit test lacks coverage to actually expose the underlying root cause.
Reviewers: Bill Bejeck <bill@confluent.io>, Matthias J. Sax <matthias@confluent.io>
1. In each iteration, decide that a task is processable only if all of its partitions contain buffered data, so that it can decide which record to process next (see the sketch after this list).
1.a Add one exception: if the task does have data on some but not all of its partitions, we only consider it not processable for a finite number of iterations.
1.b Add a task-level metric to record whenever we are forced to process a task with only "partially available data", since that may lead to non-determinism.
2. Break the main loop into put-raw-data and process-them phases, since now not all data put into the queue will be processed completely within a single iteration.
3. NOTE that within an iteration, if a task has exhausted one of its queues it will still be processed, since we only update the processable list once per iteration. I'm improving on this in the follow-up part III PR.
4. Found and fixed a bug in metrics recording: the taskName and sensorName parameters were exchanged.
5. Optimized task stream time computation again since our current partition stream time reasoning has been simplified.
6. Added unit tests.
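A hypothetical sketch of the processability check from 1), 1.a, and 1.b; the names (ProcessabilityCheck, MAX_WAIT_ROUNDS, waitedRounds) are illustrative, not the actual StreamTask/PartitionGroup fields:

```java
import java.util.Deque;
import java.util.Map;

class ProcessabilityCheck {
    static final int MAX_WAIT_ROUNDS = 5;   // finite number of rounds to wait (illustrative)

    static boolean isProcessable(final Map<Integer, Deque<byte[]>> partitionQueues,
                                 final int waitedRounds) {
        final boolean allPartitionsBuffered =
            partitionQueues.values().stream().noneMatch(Deque::isEmpty);
        if (allPartitionsBuffered) {
            return true;                     // normal case: pick the next record across partitions
        }
        final boolean somePartitionsBuffered =
            partitionQueues.values().stream().anyMatch(q -> !q.isEmpty());
        // Exception (1.a): after waiting a finite number of rounds, process anyway; the caller
        // should also (1.b) record this on a task-level "enforced processing" style metric,
        // since processing with partially available data may lead to non-determinism.
        return somePartitionsBuffered && waitedRounds >= MAX_WAIT_ROUNDS;
    }
}
```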
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <vvcephei@users.noreply.github.com>, Bill Bejeck <bbejeck@gmail.com>
The specific changes in this PR from the second PR include:
1. Changed the types of graph nodes to names conveying more context
2. Built the entire physical plan from the graph after StreamsBuilder.build() is called.
Other changes are addressed directly as review comments on the PR.
Testing consists of using all existing streams tests to validate building the physical plan with the graph.
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <vvcephei@users.noreply.github.com>, Guozhang Wang <wangguoz@gmail.com>
Use delivery timeout instead of retries when possible and remove various TODOs associated with completion of KIP-91.
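A minimal sketch of the KIP-91 style configuration, bounding total delivery time instead of tuning retries (values are arbitrary examples):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

class DeliveryTimeoutConfig {
    static Properties producerConfig(final String bootstrapServers) {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // Bound the total time to deliver a record instead of tuning retries directly;
        // retries can stay at its (high) default.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120_000);
        return props;
    }
}
```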
Reviewers: Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
* new minimum is 0, just like window size
* refactor tests to use smaller segment sizes as well
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
1. When we reinitialize the state store due to there being no CHECKPOINT with EOS turned on, we should update the checkpoint to consumer.seekToBeginning() / consumer.position() to avoid falling into endless iterations (see the sketch after this list).
2. Fixed a few other logic bugs around needsInitializing and needsRestoring.
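A hypothetical sketch of the checkpoint fix from 1); the class and method names are illustrative, not the actual StoreChangelogReader code:

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;

class ReinitializeSketch {
    static Map<TopicPartition, Long> reinitializeCheckpoints(final Consumer<byte[], byte[]> restoreConsumer,
                                                             final Collection<TopicPartition> partitions) {
        restoreConsumer.seekToBeginning(partitions);
        final Map<TopicPartition, Long> checkpoints = new HashMap<>();
        for (final TopicPartition partition : partitions) {
            // Record the starting offset as the new checkpoint so the task is not
            // considered "needs initializing" forever (avoids the endless loop).
            checkpoints.put(partition, restoreConsumer.position(partition));
        }
        return checkpoints;
    }
}
```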
Reviewers: Jason Gustafson <jason@confluent.io>, Bill Bejeck <bbejeck@gmail.com>
1. As titled and as described in comments.
2. Modified a unit test slightly to insert new keys in the committed data to expose this issue.
Reviewers: Bill Bejeck <bill@confluent.io>, Matthias J. Sax <matthias@confluent.io>
This PR now just removes the check in TaskPairs.hasNewPair that was causing the task assignment issue.
This was done because we need to further refine the task assignment strategy; that refinement needs to account for the statefulness of tasks and is best done in one pass rather than with a "patchy" approach.
Updated the current tests and ran them locally.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
The ZooKeeper client, as of version 3.4.13, doesn't handle connections to localhost very well. If ZooKeeper is started on 127.0.0.1 on a machine that has both IPv4 and IPv6, and a client is created using localhost rather than the IP address in the connection string, the ZooKeeper client attempts to connect over IPv4 or IPv6 at random, with a fixed one-second backoff if the connection fails. Use 127.0.0.1 instead of localhost in streams tests to avoid intermittent test failures due to ZK client connection timeouts when IPv6 is chosen in consecutive address selections. Also add a note to the upgrade docs for 2.0.0.
Reviewers: Ismael Juma <github@juma.me.uk>, Matthias J. Sax <matthias@confluent.io>
1. At the beginning of assign, we first check that all the non-repartition source topics are included in the metadata. If not, we log an error at the leader and set an error in the Assignment userData bytes, indicating that the leader cannot complete the assignment; the error code indicates the root cause.
2. Upon receiving the assignment, if the error is not NONE, the Streams instance will shut itself down with a log entry re-stating the root cause interpreted from the error code.
Author: tedyu <yuzhihong@gmail.com>
Reviewers: Matthias J. Sax <mjsax@apache.org>, Guozhang Wang <wangguoz@gmail.com>
Closes #5322 from tedyu/trunk
1. Remove MinTimestampTracker and its TimestampTracker interface.
2. In RecordQueue, keep track of the head record (deserialized) while putting the remaining raw-bytes records in the FIFO queue; the head record as well as the partition timestamp will be updated accordingly (see the sketch below).
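A hypothetical sketch of that RecordQueue idea; names are illustrative, not the actual implementation:

```java
import java.util.ArrayDeque;
import org.apache.kafka.clients.consumer.ConsumerRecord;

class RecordQueueSketch {
    private final ArrayDeque<ConsumerRecord<byte[], byte[]>> rawQueue = new ArrayDeque<>();
    private ConsumerRecord<Object, Object> headRecord;   // only the head is kept deserialized
    private long partitionTime = -1L;

    void addRaw(final ConsumerRecord<byte[], byte[]> raw) {
        rawQueue.addLast(raw);
        maybeUpdateHead();
    }

    ConsumerRecord<Object, Object> poll() {
        final ConsumerRecord<Object, Object> recordToProcess = headRecord;
        headRecord = null;
        maybeUpdateHead();
        return recordToProcess;
    }

    private void maybeUpdateHead() {
        if (headRecord == null && !rawQueue.isEmpty()) {
            headRecord = deserialize(rawQueue.pollFirst());
            // The partition timestamp advances with the head record's timestamp.
            partitionTime = Math.max(partitionTime, headRecord.timestamp());
        }
    }

    private ConsumerRecord<Object, Object> deserialize(final ConsumerRecord<byte[], byte[]> raw) {
        // placeholder for key/value deserialization and timestamp extraction
        return new ConsumerRecord<>(raw.topic(), raw.partition(), raw.offset(), raw.key(), raw.value());
    }
}
```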
Reviewers: Bill Bejeck <bill@confluent.io>, Matthias J. Sax <matthias@confluent.io>