We have not had great experience with listeners. They make the code harder to understand because they result in indirectly maintained circular dependencies. Often this leads to tricky deadlocks when we try to introduce locking. We were able to remove the Metadata listener in KAFKA-7831. Here we do the same for the listener in SubscriptionState.
Reviewers: Viktor Somogyi-Vass <viktorsomogyi@gmail.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
This patch updates the InitProducerId request API to use the generated sources. It also fixes a small bug in the DescribeAclsRequest class where we were using the wrong api key.
Reviewers: Mickael Maison <mickael.maison@gmail.com>, Colin McCabe <cmccabe@apache.org>
Due to KAFKA-8159, Streams will throw an unchecked exception when a caching layer or in-memory underlying store is queried over a range of keys from negative to positive. We should add a check for this, log a warning, and return an empty iterator (as the RocksDB stores happen to do) rather than crash.
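A minimal sketch of the guard described above (the map-backed store and the `range` method are assumptions for illustration, not the actual in-memory store code):

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.NavigableMap;

import org.apache.kafka.common.utils.Bytes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical guard, not the real store implementation.
final class RangeQueryGuard {
    private static final Logger LOG = LoggerFactory.getLogger(RangeQueryGuard.class);

    static Iterator<byte[]> range(NavigableMap<Bytes, byte[]> store, Bytes from, Bytes to) {
        // If the serialized bounds are inverted (e.g. a signed byte comparison of a
        // negative-to-positive key range), warn and return an empty iterator instead
        // of letting subMap() throw, mirroring the RocksDB stores' behaviour.
        if (from.compareTo(to) > 0) {
            LOG.warn("Returning empty iterator for range query where from > to; "
                + "this may be caused by the serialization of the range bounds.");
            return Collections.emptyIterator();
        }
        return store.subMap(from, true, to, true).values().iterator();
    }
}
```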
Reviewers: Bruno Cadonna <bruno@confluent.io> Bill Bejeck <bbejeck@gmail.com>
Protocol compatibility can be facilitated if a Struct that has been defined as an extension of a previous Struct (by adding fields at the end of the older version) can read a message of the older version by tolerating the absence of the newer fields. Reading a missing field is allowed only if its definition permits it (the field has to be nullable) and the schema supports it.
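As a hedged illustration of the read path, assuming simplified placeholder `FieldDef`/`TypeDef` types rather than Kafka's real Schema/Struct classes:

```java
import java.nio.ByteBuffer;

// Simplified placeholders; not Kafka's actual Schema/Struct implementation.
final class TolerantReader {
    interface TypeDef {
        Object read(ByteBuffer buffer);
    }

    static final class FieldDef {
        final String name;
        final boolean nullable;
        final TypeDef type;

        FieldDef(String name, boolean nullable, TypeDef type) {
            this.name = name;
            this.nullable = nullable;
            this.type = type;
        }
    }

    static Object[] read(ByteBuffer buffer, FieldDef[] fields) {
        Object[] values = new Object[fields.length];
        for (int i = 0; i < fields.length; i++) {
            if (!buffer.hasRemaining()) {
                // An older writer did not serialize this trailing field. Accept the
                // message only if the field is declared nullable; otherwise fail.
                if (fields[i].nullable) {
                    values[i] = null;
                    continue;
                }
                throw new IllegalStateException("Missing non-nullable field: " + fields[i].name);
            }
            values[i] = fields[i].type.read(buffer);
        }
        return values;
    }
}
```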
Reviewers: David Arthur <mumrah@gmail.com>, Randall Hauch <rhauch@gmail.com>, Jason Gustafson <jason@confluent.io>
Fixed the ConnectClusterStateImpl.connectors() method to throw an exception on timeout. Added a unit test.
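A rough sketch of the timeout behaviour, using a `CompletableFuture` as a stand-in for the herder callback (the timeout constant and wrapper class are assumptions):

```java
import java.util.Collection;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative only; ConnectClusterStateImpl uses a herder callback internally.
class ConnectorsWithTimeout {
    private static final long REQUEST_TIMEOUT_MS = 120_000L;   // assumed value

    Collection<String> connectors(CompletableFuture<Collection<String>> connectorNames) {
        try {
            return connectorNames.get(REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Surface the timeout to the caller instead of failing silently.
            throw new RuntimeException("Timed out while waiting for connector names", e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("Interrupted while waiting for connector names", e);
        } catch (ExecutionException e) {
            throw new RuntimeException("Failed to retrieve connector names", e);
        }
    }
}
```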
Author: Chris Egerton <chrise@confluent.io>
Reviewers: Magesh Nandakumar <magesh.n.kumar@gmail.com>, Robert Yokota <rayokota@gmail.com>, Arjun Satish <wicknicks@users.noreply.github.com>, Konstantine Karantasis <konstantine@confluent.io>, Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #6384 from C0urante:kafka-8058
ConsumerBounceTest redundantly executes a couple of test cases that are already included in the abstract class `BaseConsumerTest`. We should try to keep a cleaner separation of testing logic and utility logic so that this does not happen (the build time is long enough without doing unnecessary work). This PR moves the cluster initialization and consumer utilities out of BaseConsumerTest and into a new class, AbstractConsumerTest. We then let ConsumerBounceTest extend AbstractConsumerTest.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
This PR should help address the flakiness in the ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup test (https://issues.apache.org/jira/browse/KAFKA-7965). I tested this locally and verified that it significantly reduces flakiness: 25/25 runs now pass, whereas running the test 25 times on trunk I'd get `18/25` passes.
It does so by reusing the less flaky consumer integration testing functionality inside `BaseConsumerTest`. Most notably, the test now makes use of the `ConsumerAssignmentPoller` class: each consumer now polls continuously, rather than using the more batch-oriented polling we had in `ConsumerBounceTest#waitForRebalance()`.
Reviewers: Jason Gustafson <jason@confluent.io>
Ensure that the modification time is checked against the file used to create the SSLContext currently in use, so that the SSLContext is updated whenever the file is modified and a config update request is received.
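A hedged sketch of the reload condition; `buildSslContext()` and the stored metadata are placeholders, not the actual reconfiguration code:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileTime;

import javax.net.ssl.SSLContext;

// Placeholder reloader, not the real SslFactory logic.
class ReloadableSslContext {
    private SSLContext sslContext;
    private Path keystorePath;
    private FileTime keystoreLastModified;   // captured when sslContext was built

    synchronized void maybeReload(String configuredKeystore) throws Exception {
        Path path = Paths.get(configuredKeystore);
        FileTime lastModified = Files.getLastModifiedTime(path);
        // Compare against the file that produced the SSLContext currently in use, so a
        // config update request picks up an in-place modification of the same file.
        if (sslContext == null
                || !path.equals(keystorePath)
                || !lastModified.equals(keystoreLastModified)) {
            sslContext = buildSslContext(path);
            keystorePath = path;
            keystoreLastModified = lastModified;
        }
    }

    private SSLContext buildSslContext(Path keystore) throws Exception {
        SSLContext context = SSLContext.getInstance("TLS");
        context.init(null, null, null);   // real code loads key/trust material from the file
        return context;
    }
}
```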
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Each separate thread should have its own throttle, so that it can sleep
for an appropriate amount of time when needed.
ConnectionStressWorker should avoid recalculating the status after
shutting down the runnables. Otherwise, if one runnable is slow to
stop, it will skew the average down in a way that doesn't reflect
reality. This change moves the status calculation into a separate
periodic runnable that gets shut down cleanly before the other ones.
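A minimal sketch of the per-thread throttle idea; `PerThreadThrottle` is a stand-in for illustration, not Trogdor's actual Throttle class:

```java
// Each connection-stress runnable owns one instance, so one slow thread's
// sleeping never affects the pacing of the others.
class PerThreadThrottle {
    private final int maxPerSecond;
    private long windowStartMs = System.currentTimeMillis();
    private int countInWindow = 0;

    PerThreadThrottle(int maxPerSecond) {
        this.maxPerSecond = maxPerSecond;
    }

    void acquire() throws InterruptedException {
        long now = System.currentTimeMillis();
        if (now - windowStartMs >= 1000) {
            // A new one-second window has started.
            windowStartMs = now;
            countInWindow = 0;
        }
        if (++countInWindow > maxPerSecond) {
            // Sleep out the rest of the current window, then start a fresh one.
            Thread.sleep(windowStartMs + 1000 - now);
            windowStartMs = System.currentTimeMillis();
            countInWindow = 1;
        }
    }
}

// Usage (hypothetical): each runnable constructs its own throttle.
//   PerThreadThrottle throttle = new PerThreadThrottle(targetConnectsPerSec);
//   while (running) { throttle.acquire(); attemptConnection(); }
```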
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Gwen Shapira, Stanislav Kozlovski
Closes #6533 from cmccabe/fix_connection_stress_worker
Since we now call poll during restore, we can decrease the timeout
to a reasonable value, which should help Streams make progress if
threads get stuck.
Reviewers: Guozhang Wang <wangguoz@gmail.com>, Bill Bejeck <bbejeck@gmail.com>
Removed the TOC entry for Avro in the Streams Developer Guide, since we have no content for it.
PR on kafka-site: apache/kafka-site#195
Reviewers: Guozhang Wang <wangguoz@gmail.com>
This change adds waits for metadata updates after killing the broker in order to make the tests more stable.
Author: Viktor Somogyi-Vass <viktorsomogyi@gmail.com>
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Closes #6505 from viktorsomogyi/flaky-min-isr-test
`CYGINW` probably should be `CYGWIN`
Author: Michael Gruben Trejo <mgrubentrejo@linkedin.com>
Reviewers: Gwen Shapira
Closes #6523 from mgrubent/patch-1
Optimize ConnectionStressWorker by avoiding the creation of a new
ChannelBuilder each time we want to open a new connection.
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Gwen Shapira
Closes #6518 from cmccabe/optimize-connection-stress-worker
Since we've added Kafka Streams optimizations in 2.1, we need to move the optimization for source KTable nodes (use the source topic as the changelog) to the optimization framework.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Throughout the tutorial, the name of the input topic that was created is `streams-plaintext-input`. However, at some point in the tutorial this was mistakenly changed to `streams-wordcount-input`.
This patch adjusts that. Thanks.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
doneFuture is supposed to be completed with an empty string (meaning success) or a non-empty string containing the error message. Currently, because exception.getMessage sometimes returns null or an empty string, this does not work correctly. This patch fixes that.
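One possible fallback, sketched under the semantics described above (empty string means success, any non-empty string is the error message):

```java
// Hypothetical helper; name and placement are assumptions.
final class ErrorStrings {
    static String from(Throwable t) {
        if (t == null) {
            return "";   // success
        }
        String message = t.getMessage();
        if (message == null || message.isEmpty()) {
            // Some exceptions carry no message; fall back to the class name so the
            // future is still completed with a non-empty, meaningful error string.
            return t.getClass().getSimpleName();
        }
        return message;
    }
}
```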
Reviewers: David Arthur <mumrah@gmail.com>
This PR is a follow-up to #6174, which handles the doFilter / doMapValues / doTransformValues methods.
Reviewers: Bill Bejeck <bill@confluent.io>, Guozhang Wang <guozhang@confluent.io>
A broker can have more than one instance of ZooKeeperClient. For example, SimpleAclAuthorizer creates a separate ZooKeeperClient instance when configured.
This commit makes it possible to optionally specify a name for a ZooKeeperClient instance. The name is specified only for a broker's ZooKeeperClient instances, not for those created by commands and tests.
Reviewers: Jun Rao <junrao@gmail.com>
This patch adds a TimeIntervalTransactionsGenerator class, which enables the Trogdor ProduceBench worker to commit transactions based on a configurable millisecond time interval.
Also, we now handle 409 create-task responses in the coordinator command-line client by printing a more informative message.
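An illustrative sketch of the interval-based commit decision; the class and method names here are assumptions, not the actual generator:

```java
// The worker asks before each action whether to commit the open transaction;
// the decision is based purely on elapsed wall-clock time.
class TimeIntervalCommitDecider {
    private final long intervalMs;
    private long lastCommitMs;

    TimeIntervalCommitDecider(long intervalMs) {
        this.intervalMs = intervalMs;
        this.lastCommitMs = System.currentTimeMillis();
    }

    synchronized boolean shouldCommit() {
        long now = System.currentTimeMillis();
        if (now - lastCommitMs >= intervalMs) {
            lastCommitMs = now;
            return true;
        }
        return false;
    }
}
```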
Reviewers: Colin P. McCabe <cmccabe@apache.org>
toString() functions must not throw a NullPointerException. read() functions
must properly translate a negative array length to a null field.
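A sketch of the read-side convention for the array case (illustrative, not the generated code itself):

```java
import java.nio.ByteBuffer;

// Hypothetical reader showing the null-translation rule.
final class NullableArrayReader {
    static byte[] readNullableBytes(ByteBuffer buffer) {
        int length = buffer.getInt();
        if (length < 0) {
            // A negative length encodes a null field rather than an empty one,
            // so translate it to null instead of trying to allocate an array.
            return null;
        }
        byte[] data = new byte[length];
        buffer.get(data);
        return data;
    }
}
```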
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Extend Connect's integration test framework to add or remove workers from an EmbeddedConnectCluster and to choose whether to fail the test on ungraceful service shutdown. Also added more JavaDoc and other minor improvements.
Author: Konstantine Karantasis <konstantine@confluent.io>
Reviewers: Arjun Satish <arjun@confluent.io>, Randall Hauch <rhauch@gmail.com>
Closes #6342 from kkonstantine/KAFKA-8014