At the time of this writing there are six schemas in the Kafka APIs with no
fields: three versions each of LIST_GROUPS and API_VERSIONS.
When reading instances of these schemas off the wire there's little point in
returning a unique Struct object (or a unique values array inside that Struct)
since there is no payload.
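The idea can be sketched as follows (class names here are hypothetical, not Kafka's actual `Struct` API): when a schema has zero fields, a single shared instance is safe to return because there is no payload to mutate.

```java
// Hypothetical sketch: return one shared, immutable object for schemas
// with no fields instead of allocating a new one per read.
final class EmptyPayload {
    // Safe to share: the object carries no state.
    static final EmptyPayload INSTANCE = new EmptyPayload();
    private EmptyPayload() { }
}

class SchemaReader {
    // With zero fields there is nothing to decode, so the shared instance
    // is returned; otherwise a real reader would parse the buffer and
    // allocate a fresh result object.
    static Object read(int numFields) {
        return numFields == 0 ? EmptyPayload.INSTANCE : new Object();
    }
}
```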
Reviewers: Ismael Juma <ismael@juma.me.uk>
The `MirrorTaskConfig` class mutates the shared `ConfigDef` by defining additional properties. This can lead to a `ConcurrentModificationException` during worker configuration validation, and it unintentionally adds those properties to the connectors' `ConfigDef`, where they become visible via the REST API's `/connectors/{name}/config/validate` endpoint.
The fix here is a one-liner that just creates a copy of the `ConfigDef` before defining new properties.
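The pattern behind the fix can be illustrated with plain maps (the names below are illustrative, not Kafka's `ConfigDef` API): copy the shared definition before adding task-only properties, so the shared one stays untouched.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class TaskConfig {
    // Shared base definition, also read by connector validation elsewhere.
    static final Map<String, String> BASE_DEF = new LinkedHashMap<>();
    static { BASE_DEF.put("topics", "Topics to replicate"); }

    // The fix in spirit: define task-only properties on a copy, never on
    // the shared definition itself.
    static Map<String, String> taskDef() {
        Map<String, String> copy = new LinkedHashMap<>(BASE_DEF);
        copy.put("task.assigned.partitions", "Partitions for this task");
        return copy;
    }
}
```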
Reviewers: Ryanne Dolan <ryannedolan@gmail.com>, Konstantine Karantasis <konstantine@confluent.io>
Currently, if a connector is deleted, its task configurations will remain in the config snapshot tracked by the KafkaConfigBackingStore. This causes issues with incremental cooperative rebalancing, which utilizes that config snapshot to determine which connectors and tasks need to be assigned across the cluster. Specifically, it first checks to see which connectors are present in the config snapshot, and then, for each of those connectors, queries the snapshot for that connector's task configs.
The lifecycle of a connector is for its configuration to be written to the config topic, that write to be picked up by the workers in the cluster and trigger a rebalance, the connector to be assigned to and started by a worker, task configs to be generated by the connector and then written to the config topic, that write to be picked up by the workers in the cluster and trigger a second rebalance, and finally, the tasks to be assigned to and started by workers across the cluster.
There is a brief period in between the first time the connector is started and when the second rebalance has completed during which those stale task configs from a previously-deleted version of the connector will be used by the framework to start tasks for that connector. This fix aims to eliminate that window by preemptively clearing the task configs from the config snapshot for a connector whenever it has been deleted.
An existing unit test is modified to verify this behavior, and should provide sufficient guarantees that the bug has been fixed.
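The shape of the fix can be sketched as follows (a simplified stand-in for the snapshot kept by `KafkaConfigBackingStore`, with hypothetical names): deleting a connector also drops its task configs, so a later re-creation cannot pick up stale entries.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ConfigSnapshot {
    final Map<String, String> connectorConfigs = new HashMap<>();
    final Map<String, List<String>> taskConfigs = new HashMap<>();

    void putConnector(String name, String config, List<String> tasks) {
        connectorConfigs.put(name, config);
        taskConfigs.put(name, tasks);
    }

    // The fix in spirit: removing a connector preemptively clears its
    // task configs too, closing the window in which a re-created
    // connector could start tasks from the stale entries.
    void removeConnector(String name) {
        connectorConfigs.remove(name);
        taskConfigs.remove(name);
    }
}
```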
Reviewers: Nigel Liang <nigel@nigelliang.com>, Konstantine Karantasis <konstantine@confluent.io>
The class claims to be immutable, but it still has some mutable features.
This patch increases its immutability and adds a little cleanup:
* Pre-initialize size of ArrayList
* Remove superfluous syntax
* Use ArrayList instead of LinkedList since the list is created once
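The first and third points combine into a common pattern, sketched below with hypothetical names: pre-size the `ArrayList` when the final size is known, then expose only an unmodifiable view.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class ImmutableHolder {
    private final List<String> values;

    ImmutableHolder(List<String> source) {
        // Pre-size the ArrayList: the final size is known up front, so no
        // intermediate grow-and-copy steps occur while filling it.
        List<String> copy = new ArrayList<>(source.size());
        copy.addAll(source);
        // Wrap in an unmodifiable view to keep the class immutable.
        this.values = Collections.unmodifiableList(copy);
    }

    List<String> values() { return values; }
}
```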
Reviewers: Ron Dagostino <rdagostino@confluent.io>, Konstantine Karantasis <konstantine@confluent.io>
Segmented state stores turn on bulk loading of the underlying RocksDB
when restoring. This is correct for segmented state stores that
are in restore mode on active tasks and the onRestoreStart() and
onRestoreEnd() in RocksDBSegmentsBatchingRestoreCallback take care
of toggling bulk loading mode on and off. However, restoreAll()
in RocksDBSegmentsBatchingRestoreCallback might also turn on bulk loading
mode. When this happens on a stand-by task, bulk loading mode is never
turned off. That leads to steadily increasing open file descriptors
in RocksDB, because in bulk loading mode RocksDB continuously creates
new files but never compacts them (which is the intended behaviour of
that mode).
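The invariant can be sketched as follows (class and method names are simplified stand-ins, not the actual Kafka Streams classes): only the restore start/end hooks, which fire for active-task restoration, may toggle bulk loading; `restoreAll()` itself never does.

```java
// Hypothetical sketch: pair the bulk-load toggle exclusively with the
// restore start/end hooks so a stand-by task can never leave it on.
class SegmentStore {
    boolean bulkLoading = false;
    void toggleDbForBulkLoading(boolean on) { bulkLoading = on; }
}

class BatchingRestoreCallback {
    final SegmentStore store;
    BatchingRestoreCallback(SegmentStore store) { this.store = store; }

    // restoreAll() writes the records but deliberately performs no mode
    // change: a stand-by task that only ever calls this method therefore
    // never enables bulk loading.
    void restoreAll(java.util.List<byte[]> records) { }

    void onRestoreStart() { store.toggleDbForBulkLoading(true); }
    void onRestoreEnd()   { store.toggleDbForBulkLoading(false); }
}
```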
Reviewers: A. Sophie Blee-Goldman <sophie@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
* MINOR: Fix typo in RecordAccumulator
* MINOR: Fix typo in several files
Reviewers: Ron Dagostino <rdagostino@confluent.io>, Konstantine Karantasis <konstantine@confluent.io>
A standby task could also be at risk of getting into an illegal state when it is not closed during handleLostAll:
1. The standby task was initializing in the CREATED state, and a task corrupted exception was thrown from registerStateStores
2. The task corrupted exception was caught, and a commit of the non-affected tasks was attempted
3. The task commit failed due to a task migrated exception
4. handleLostAll didn't close the standby task, leaving it in the CREATED state
5. After the next rebalance completed, the same task was assigned back as a standby task
6. An IllegalArgumentException was thrown because the state store was already registered
Reviewers: A. Sophie Blee-Goldman <ableegoldman@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
As stated, we couldn't wait for handleRebalanceComplete in the case of handleLostAll, as we already closed the active task as dirty, and could potentially require its offset in the next thread.runOnce call.
Co-authored-by: Guozhang Wang <wangguoz@gmail.com>
Reviewers: A. Sophie Blee-Goldman <ableegoldman@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
Fixes EmbeddedKafkaCluster.deleteTopicAndWait for use with kafka_2.13
Reviewers: Boyang Chen <boyang@confluent.io>, Chia-Ping Tsai <chia7712@gmail.com>, John Roesler <vvcephei@apache.org>
fix broken links
rephrase a sentence
update the version number
Reviewers: Sophie Blee-Goldman <sophie@confluent.io>, Boyang Chen <boyang@confluent.io>, Bill Bejeck <bbejeck@apache.org>
This is a prerequisite for KAFKA-9501 and will also be useful for KAFKA-9603
There should be no logical changes here: the main difference is the removal of StandbyContextImpl in preparation for contexts to transition between active and standby.
Also includes some minor cleanup, e.g. pulling the ReadOnly/ReadWrite decorators out into a separate file.
Reviewers: Bruno Cadonna <bruno@confluent.io>, John Roesler <vvcephei@apache.org>, Guozhang Wang <wangguoz@gmail.com>
This fixes critical bugs in Gradle 6.4:
* Regression: Different daemons are used between IDE and CLI builds for the same project
* Regression: Main-Class attribute always added to jar manifest when using application plugin
* Fix potential NPE if code is executed concurrently
More details: https://github.com/gradle/gradle/releases/tag/v6.4.1
Reviewers: Manikumar Reddy <manikumar@confluent.io>
* If two exceptions are thrown, the `closePartitions` exception is suppressed
* Add unit tests that throw exceptions in put and close to verify that
the exceptions are propagated and suppressed appropriately out of WorkerSinkTask::execute
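The suppression pattern uses Java's standard `Throwable.addSuppressed`; a minimal self-contained sketch (the class and parameters below are illustrative, not the actual `WorkerSinkTask` API):

```java
class SinkFlusher {
    // If put() throws and closePartitions() also throws, the close
    // failure is attached as a suppressed exception, so the original
    // error still propagates out of execute().
    static void execute(Runnable put, Runnable closePartitions) {
        try {
            put.run();
        } catch (RuntimeException putError) {
            try {
                closePartitions.run();
            } catch (RuntimeException closeError) {
                putError.addSuppressed(closeError);
            }
            throw putError;
        }
    }
}
```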
Reviewers: Chris Egerton <chrise@confluent.io>, Nigel Liang <nigel@nigelliang.com>, Konstantine Karantasis <konstantine@confluent.io>
The store's registered callback could also be a restore listener, in which case it should be triggered along with the user-specified global listener.
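The dispatch logic amounts to an instanceof check; a sketch with hypothetical interface and method names (not the actual Kafka Streams types):

```java
// If the store's registered callback also implements the listener
// interface, notify it in addition to the user's global listener.
interface RestoreListener {
    void onRestoreEnd(String storeName);
}

class RestoreNotifier {
    static void notifyRestoreEnd(Object registeredCallback,
                                 RestoreListener globalListener,
                                 String storeName) {
        if (registeredCallback instanceof RestoreListener) {
            ((RestoreListener) registeredCallback).onRestoreEnd(storeName);
        }
        if (globalListener != null) {
            globalListener.onRestoreEnd(storeName);
        }
    }
}
```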
Reviewers: Boyang Chen <boyang@confluent.io>, Matthias J. Sax <matthias@confluent.io>
Added a check for whether the transformation class is abstract; if so, an error is thrown with guidance for the user. The check also ensures that child classes are not abstract.
Author: Jeremy Custenborder <jcustenborder@gmail.com>
Reviewer: Randall Hauch <rhauch@gmail.com>
It fixes 30 issues, including third-party CVE fixes, several leader-election
related fixes, and a compatibility issue with applications built against earlier
3.5 client libraries (by restoring a few non-public APIs).
See ZooKeeper 3.5.8 Release Notes for details: https://zookeeper.apache.org/doc/r3.5.8/releasenotes.html
Reviewers: Manikumar Reddy <manikumar@confluent.io>
A small hotfix to avoid an extra probing rebalance the first time an application is launched.
This should particularly improve the testing experience.
Reviewer: Matthias J. Sax <matthias@confluent.io>, John Roesler <vvcephei@apache.org>
Validate that the assignment is always balanced with respect to:
* active assignment balance
* stateful assignment balance
* task-parallel balance
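One of these balance criteria can be sketched as a simple check (a hypothetical validator, not the actual assignor code): no client may own more than one task above the minimum across all clients.

```java
import java.util.Collection;

class BalanceValidator {
    // Balanced means the spread between the most- and least-loaded
    // clients is at most one task.
    static boolean isBalanced(Collection<Integer> taskCountsPerClient) {
        int min = taskCountsPerClient.stream()
                .mapToInt(Integer::intValue).min().orElse(0);
        int max = taskCountsPerClient.stream()
                .mapToInt(Integer::intValue).max().orElse(0);
        return max - min <= 1;
    }
}
```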
Reviewers: Bruno Cadonna <bruno@confluent.io>, A. Sophie Blee-Goldman <sophie@confluent.io>
Add actual values instead of just passing null in unit tests that check the behavior of the InsertField SMT when trying to insert a field that takes its value from the Kafka record timestamp.
Reviewers: Chris Egerton <chrise@confluent.io>, Konstantine Karantasis <konstantine@confluent.io>
Prior to KAFKA-8106, we allowed the v0 and v1 message formats to contain non-consecutive inner offsets. Inside `LogValidator`, we would detect this case and rewrite the batch. After KAFKA-8106, we changed the logic to raise an error in the case of the v1 message format (v0 was still expected to be rewritten). This caused an incompatibility for older clients which were depending on the looser validation. This patch reverts the old logic of rewriting the batch to fix the invalid inner offsets.
Note that the v2 message format has always had stricter validation. This patch also adds a test case covering it.
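The restored behaviour for v0/v1 can be sketched as follows (a simplified stand-in, not the actual `LogValidator` code): detect non-consecutive inner offsets and rewrite them as `0..n-1` instead of raising an error.

```java
class InnerOffsets {
    // Pre-KAFKA-8106 spirit for v0/v1: a batch with non-consecutive
    // inner offsets is rewritten rather than rejected.
    static long[] normalize(long[] offsets) {
        for (int i = 0; i < offsets.length; i++) {
            if (offsets[i] != i) {
                long[] rewritten = new long[offsets.length];
                for (int j = 0; j < rewritten.length; j++) {
                    rewritten[j] = j;
                }
                return rewritten;
            }
        }
        return offsets; // already consecutive, no rewrite needed
    }
}
```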
Reviewers: José Armando García Sancio <jsancio@users.noreply.github.com>, Ismael Juma <ismael@juma.me.uk>
Drops idempotent updates from KTable source operators.
Specifically, drop updates in which the value is unchanged,
and the timestamp is the same or larger.
Implements: KIP-557
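The drop rule stated above can be sketched as a single predicate (an illustrative helper, not the actual Streams operator):

```java
import java.util.Objects;

class IdempotentUpdateFilter {
    // KIP-557 rule as described above: drop a source update when the new
    // value equals the previous one and its timestamp is not older.
    static boolean shouldDrop(Object oldValue, long oldTimestamp,
                              Object newValue, long newTimestamp) {
        return Objects.equals(oldValue, newValue)
                && newTimestamp >= oldTimestamp;
    }
}
```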
Reviewers: Bruno Cadonna <bruno@confluent.io>, John Roesler <vvcephei@apache.org>
* Fix describeConfigs and alterConfigs not to invoke authorizer more
than once
* Add tests to KafkaApisTest to verify the fixes
* Rename `filterAuthorized` to `filterByAuthorized`
* Tweak `filterByAuthorized` to take resources instead of resource
names and improve implementation
* Introduce `partitionMapByAuthorized` and `partitionSeqByAuthorized`
  and simplify code by using them
* Replace List with Seq in some AdminManager methods
* Remove stray `println` in `KafkaApisTest`
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
We spotted a case in the soak test where a standby task could be in CREATED state during commit, which causes an illegal state exception. To prevent this, the fix is to always enforce a state check before committing.
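The enforced check amounts to guarding commit on the task state; a minimal sketch with hypothetical names (not the actual Streams task classes):

```java
enum TaskState { CREATED, RUNNING, SUSPENDED }

class StandbyTask {
    TaskState state = TaskState.CREATED;

    // The fix in spirit: commit always verifies the task state first
    // instead of assuming the task was initialized.
    void commit() {
        if (state == TaskState.CREATED) {
            throw new IllegalStateException("cannot commit a CREATED task");
        }
        // flush state and write checkpoints here
    }
}
```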
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <vvcephei@apache.org>, Guozhang Wang <wangguoz@gmail.com>
* Use `forEach` instead of `asScala.foreach` for Java Iterables.
* Use `ifPresent` instead of `asScala.foreach` for Java Optionals.
* Use `forEach` instead of `entrySet.forEach` for Java maps.
* Keep `asScala.foreach` for `Properties` as the Scala implementation
has a better interface (keys and values are of type `String`).
* A few clean-ups: unnecessary `()`, `{}`, `new`, etc.
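The Java-side methods being preferred here are standard; a self-contained illustration of `forEach` and `ifPresent` (the class and data below are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Optional;

class InteropExamples {
    // Prefer the native Java forEach/ifPresent over converting Java
    // collections to Scala just to iterate over them.
    static List<String> collect(Map<String, Integer> m, Optional<String> extra) {
        List<String> out = new ArrayList<>();
        m.forEach((k, v) -> out.add(k + "=" + v));  // instead of asScala.foreach
        extra.ifPresent(out::add);                  // instead of asScala.foreach
        return out;
    }
}
```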
Reviewers: Manikumar Reddy <manikumar@confluent.io>