As the feature freeze approaches, we should support `2.5` as the `inter.broker.protocol.version` value. There are no new APIs so far, so `2.5` is effectively equivalent to `2.4`.
This PR implements `default.api.timeout.ms` as documented by KIP-533. This is a rebased version of #6913 with some additional test cases and small cleanups.
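As a rough usage sketch (broker address and timeout value are illustrative; the config constant is the one KIP-533 defines), the new config applies to admin calls that do not set an explicit timeout through their `*Options` object:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DefaultApiTimeoutExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Default timeout for admin calls that do not pass a timeout
        // explicitly, e.g. via new DescribeClusterOptions().timeoutMs(5000).
        props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 60000);
        try (AdminClient admin = AdminClient.create(props)) {
            System.out.println(admin.describeCluster().clusterId().get());
        }
    }
}
```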
Reviewers: David Arthur <mumrah@gmail.com>
Co-authored-by: huxi <huxi_2b@hotmail.com>
Attempt to load the IBM classes first, but fall back to loading the Sun classes if the IBM ones are not found.
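A minimal illustration of the fallback pattern (the helper and the way class names are passed are illustrative, not the literal change):

```java
// Try the IBM JDK class first; fall back to the Sun/Oracle equivalent.
static Class<?> loadWithFallback(String ibmClassName, String sunClassName)
        throws ClassNotFoundException {
    try {
        return Class.forName(ibmClassName);
    } catch (ClassNotFoundException e) {
        // Not running on an IBM JDK, so load the Sun class instead.
        return Class.forName(sunClassName);
    }
}
```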
Reviewers: Mickael Maison <mickael.maison@gmail.com>, Ismael Juma <ismael@juma.me.uk>
`emit.heartbeats.enabled` and `emit.checkpoints.enabled` are supposed to
be the knobs that control whether heartbeat or checkpoint messages,
respectively, are sent to their topics. In our experiments, setting them
to false did not suspend the activity in the corresponding SourceTasks,
e.g. MirrorHeartbeatTask and MirrorCheckpointTask.
We observed that, with those knobs set to false, a huge volume of
`SourceRecord`s was sent without any interval, causing significantly higher
CPU usage and GC time in the MirrorMaker 2 instance and congesting the
single partition of the heartbeat topic and checkpoint topic.
The proposed fix in this PR is to (1) explicitly check whether `interval`
is set to a negative value (e.g. -1), which is the case when
`emit.heartbeats.enabled` or `emit.checkpoints.enabled` is off, and
(2) create no task if `interval` is indeed negative (see the sketch below).
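Roughly, the guard looks like the following sketch, assuming a `config` accessor that resolves the interval to a negative duration when emission is disabled (names are illustrative):

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

@Override
public List<Map<String, String>> taskConfigs(int maxTasks) {
    // With emit.heartbeats.enabled=false the interval resolves to a
    // negative value (e.g. -1), so no task is created and nothing can
    // flood the single-partition heartbeat or checkpoint topic.
    if (config.emitHeartbeatsInterval().isNegative()) {
        return Collections.emptyList();
    }
    return Collections.singletonList(config.originalsStrings());
}
```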
Reviewers: Mickael Maison <mickael.maison@gmail.com>, Ryanne Dolan <ryannedolan@gmail.com>
This feature corresponds to KIP-558 and extends how the internal status topic (set via the `status.storage.topic` distributed worker config) is used, so that it includes information that allows Kafka Connect to keep track of which topics a connector is using.
The set of topics a connector is actively using is exposed via a new endpoint added to the REST API of Connect workers.
* A `GET /connectors/{name}/topics` request will return the set of topics that have been recorded as active since a connector started or since the set of topics was reset for this connector.
A second endpoint, also added by this feature, allows users to reset the set of active topics for a connector:
* A `PUT /connectors/{name}/topics/reset` request clears the set of active topics.
The `topic.tracking.enable` worker config property (true by default) allows an operator to enable or disable the entire feature. If the feature is enabled, the `topic.tracking.allow.reset` worker config property (true by default) allows an operator to control whether reset requests submitted to the Connect REST API are allowed.
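As a hypothetical client-side illustration (worker URL and connector name are made up), the two endpoints can be exercised with Java 11's `HttpClient`:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

static void activeTopicsExample() throws Exception {
    HttpClient client = HttpClient.newHttpClient();

    // Fetch the topics recorded as active for connector "my-connector".
    HttpRequest get = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors/my-connector/topics"))
            .GET()
            .build();
    System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).body());

    // Reset the active-topics set (rejected if topic.tracking.allow.reset=false).
    HttpRequest reset = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8083/connectors/my-connector/topics/reset"))
            .PUT(HttpRequest.BodyPublishers.noBody())
            .build();
    client.send(reset, HttpResponse.BodyHandlers.ofString());
}
```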
Author: Konstantine Karantasis <konstantine@confluent.io>
Reviewer: Randall Hauch <rhauch@gmail.com>
Also: improve error message, add test, minor code quality fixes
Verified that the test fails if the broker default for max message bytes is lower or higher than the currently set value.
Reviewers: Andrew Choi <andchoi@linkedin.com>, Viktor Somogyi <viktorsomogyi@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
* lz4: fixes identified by oss-fuzz
* jetty: fixes a few recent regressions
* powermock: better support for Java 12+
* zstd-jni: minor fixes
* httpclient: minor fixes
* spotless-plugin: minor fixes
* jmh: minor fixes
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>
Currently, when a dynamic change is made to the broker-level default log configuration, existing log configs are recreated with empty overridden configs. In that case, when dynamic broker configs are updated a second time, the topic-level configs are lost. This can cause unexpected data loss, for example, if the cleanup policy changes from "compact" to "delete."
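A hedged reproduction sketch using the AdminClient API (broker id, config name, and values are illustrative; `admin` is an initialized `Admin` client):

```java
import java.util.Collections;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

static void updateBrokerDefaultTwice(Admin admin) throws Exception {
    ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
    AlterConfigOp setRetention = new AlterConfigOp(
            new ConfigEntry("log.retention.ms", "86400000"), AlterConfigOp.OpType.SET);
    // Before the fix: the first update recreated existing log configs with
    // empty topic-level overrides, so the second update silently replaced,
    // e.g., a topic's cleanup.policy=compact override with the broker default.
    admin.incrementalAlterConfigs(Collections.singletonMap(
            broker, Collections.singletonList(setRetention))).all().get();
    admin.incrementalAlterConfigs(Collections.singletonMap(
            broker, Collections.singletonList(setRetention))).all().get();
}
```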
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>, Jason Gustafson <jason@confluent.io>
This ticket improves two aspects of the retrieval of sensors:
https://issues.apache.org/jira/browse/KAFKA-9152
Currently, when a sensor is retrieved with `*Metrics.*Sensor()` (e.g. `ThreadMetrics.createTaskSensor()`) after it was created with the same method, the sensor is added again to the corresponding sensor queue (e.g. `threadLevelSensors`) in StreamsMetricsImpl. Those queues are used to remove the sensors when `removeAllLevelSensors()` is called. Having the same sensor in these queues multiple times is not an issue from a correctness point of view, but storing each sensor only once would reduce the footprint.
When a sensor is retrieved, the current code also attempts to create a new sensor and to add the corresponding metrics to it again. This could be avoided.
Both aspects can be improved by calling `getSensor()` on the Metrics object and checking the return value to determine whether the sensor already exists.
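A minimal sketch of that check, assuming StreamsMetricsImpl-style fields and helpers (`metrics`, `threadLevelSensors`, and the prefix helper are stand-ins):

```java
// Sketch against org.apache.kafka.common.metrics.Metrics; the `metrics`
// field and naming helpers below are assumed, not the literal code.
public Sensor threadLevelSensor(String sensorName, Sensor.RecordingLevel level) {
    synchronized (threadLevelSensors) {
        String fullName = threadSensorPrefix() + "." + sensorName;
        Sensor existing = metrics.getSensor(fullName);
        if (existing != null) {
            // Already created: return it without enqueueing its name a
            // second time or re-attaching its metrics.
            return existing;
        }
        threadLevelSensors.push(fullName);
        return metrics.sensor(fullName, level);
    }
}
```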
Reviewers: Bruno Cadonna <bruno@confluent.io>, Bill Bejeck <bbejeck@gmail.com>
Also addresses KAFKA-8821
Note that we still have to fall back to using pattern subscription if the user has added any regex-based source nodes to the topology. Includes some minor cleanup on the side.
Reviewers: Bill Bejeck <bbejeck@gmail.com>
Previously, the idempotent producer and the transactional producer used separate logic when initializing the producerId. This patch consolidates the two paths. We also do some cleanup in `TransactionManagerTest` to eliminate brittle expectations on `Sender`.
Reviewers: Bob Barrett <bob.barrett@confluent.io>, Viktor Somogyi <viktorsomogyi@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
This patch adds a new API to the producer to implement transactional offset commit fencing through the group coordinator, as proposed in KIP-447. The changes are mainly on the producer end, keeping the old `sendOffsetsToTransaction(offsets, groupId)` path compatible alongside the new `sendOffsetsToTransaction(offsets, groupMetadata)`.
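A minimal usage sketch of the new path (topic name is illustrative; the producer is assumed to be transactional and the consumer subscribed):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

static void copyWithFencing(KafkaConsumer<String, String> consumer,
                            KafkaProducer<String, String> producer) {
    producer.beginTransaction();
    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(100))) {
        producer.send(new ProducerRecord<>("output-topic", record.key(), record.value()));
        offsets.put(new TopicPartition(record.topic(), record.partition()),
                    new OffsetAndMetadata(record.offset() + 1));
    }
    // New overload: the consumer group metadata lets the group coordinator
    // fence zombie producers, instead of the old groupId-only path.
    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
    producer.commitTransaction();
}
```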
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>, Jason Gustafson <jason@confluent.io>
This commit makes `DistributedHerder` log an error during task reconfiguration only when an error has actually happened.
Author: Ivan Yurchenko <ivan0yurchenko@gmail.com>
Reviewer: Randall Hauch <rhauch@gmail.com>
Previously, MockConsumer did not allow clients to test consecutive calls to poll while consuming from only a partial set of partitions, because it cleared all records after each call. This change makes MockConsumer clear the records only for partitions that are not paused (whose records are actually returned by the poll); paused partitions retain their records.
Unit test added accordingly.
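A sketch of the pattern the change enables (topic, keys, and values are illustrative; the `MockConsumer` calls are existing test-utility methods):

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.MockConsumer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;

static void pausedPartitionsRetainRecords() {
    MockConsumer<String, String> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
    TopicPartition tp0 = new TopicPartition("topic", 0);
    TopicPartition tp1 = new TopicPartition("topic", 1);
    consumer.assign(Arrays.asList(tp0, tp1));
    Map<TopicPartition, Long> beginning = new HashMap<>();
    beginning.put(tp0, 0L);
    beginning.put(tp1, 0L);
    consumer.updateBeginningOffsets(beginning);
    consumer.addRecord(new ConsumerRecord<>("topic", 0, 0L, "k", "v0"));
    consumer.addRecord(new ConsumerRecord<>("topic", 1, 0L, "k", "v1"));

    consumer.pause(Collections.singleton(tp1));
    consumer.poll(Duration.ofMillis(1));   // returns only tp0's record
    consumer.resume(Collections.singleton(tp1));
    consumer.poll(Duration.ofMillis(1));   // tp1's record is still returned here
}
```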
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Although the APIs section of the Kafka documentation lists 5 core APIs (https://kafka.apache.org/documentation/#api), the introduction page lists only 4 of them. I've added the missing list element to fix this inconsistency.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Since the leader epoch was not maintained in the fetch session cache, no validation would be done except for the initial (full) fetch request. This patch adds the leader epoch to the session cache and addresses the testing gaps.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Colin Patrick McCabe <cmccabe@apache.org>
The `KafkaController::replicasAreValid` method currently returns a
boolean indicating whether the replicas are valid or not, but the failure
condition loses any context on why the replicas are not valid. This change
updates the method to return the error condition if validation fails, which
allows the caller to report the error to the client.
The change also renames the `replicasAreValid` method to
`validateReplicas` to reflect the updated semantics (illustrated below).
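The controller code is Scala, but the shape of the change can be illustrated in Java (the checks and error codes below are hypothetical):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import org.apache.kafka.common.protocol.Errors;
import org.apache.kafka.common.requests.ApiError;

// Before: boolean replicasAreValid(...) — the reason for failure was lost.
// After: return the specific error condition so the caller can report it.
static Optional<ApiError> validateReplicas(List<Integer> replicas) {
    if (replicas.isEmpty())
        return Optional.of(new ApiError(Errors.INVALID_REPLICA_ASSIGNMENT,
                "Empty replica list."));
    if (new HashSet<>(replicas).size() != replicas.size())
        return Optional.of(new ApiError(Errors.INVALID_REPLICA_ASSIGNMENT,
                "Duplicate replica ids: " + replicas));
    return Optional.empty(); // replicas are valid
}
```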
Reviewers: Sean Li <seanli-rallyhealth@users.noreply.github.com>, Jason Gustafson <jason@confluent.io>
The producer's `BufferPool` may block allocations if its memory limit has been reached. If the producer is closed, it's possible for the allocation waiters to wait up to `max.block.ms` if progress cannot be made, even when force-closed (immediate), which can cause indefinite blocking if `max.block.ms` is particularly high.
This patch fixes the problem by adding a `close()` method to `BufferPool`, which wakes up any waiters with pending allocations so that they throw an exception instead of continuing to block.
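A condensed sketch of the mechanism (field names follow `BufferPool`'s style but are assumed here):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

private final ReentrantLock lock = new ReentrantLock();
private final Deque<Condition> waiters = new ArrayDeque<>();
private boolean closed = false;

/**
 * Wake all threads blocked in allocate() so they can observe the closed
 * flag and fail fast instead of waiting out max.block.ms.
 */
public void close() {
    lock.lock();
    try {
        this.closed = true;
        for (Condition waiter : this.waiters)
            waiter.signal();
    } finally {
        lock.unlock();
    }
}
```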
Reviewers: Jason Gustafson <jason@confluent.io>
Fixed an intermittent failure in the test that was caused by a mocked instance not handling expandIsr. Also updated the test to use the offset from the batch to trigger expandIsr more often. With this change, the test may try to acquire the write lock while appends are blocked on the read lock, so the test using the write lock was separated out to make it safer.
Reviewers: Jason Gustafson <jason@confluent.io>
When the scheduled `refreshTopicPartitions` runs, check the existing topics
in both the source and target clusters in order to compute the topic
partitions to be created on the target.
If a temporary failure to create a target topic is encountered (e.g. an
insufficient number of brokers), its creation will be re-attempted on the
next refresh.
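A hedged sketch of the computation (the real MirrorSourceConnector logic also applies topic filters and the replication policy's renaming):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

static void refreshTopicPartitions(Admin sourceAdmin, Admin targetAdmin) throws Exception {
    Set<String> sourceTopics = sourceAdmin.listTopics().names().get();
    Set<String> targetTopics = targetAdmin.listTopics().names().get();
    Set<String> missing = new HashSet<>(sourceTopics);
    missing.removeAll(targetTopics);
    List<NewTopic> toCreate = missing.stream()
            .map(topic -> new NewTopic(topic, Optional.empty(), Optional.empty()))
            .collect(Collectors.toList());
    // A failure here (e.g. an insufficient number of brokers for the
    // replication factor) is tolerated: the next scheduled refresh
    // recomputes the missing set and retries creation.
    targetAdmin.createTopics(toCreate).all().get();
}
```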
Co-authored-by: Edoardo Comar <ecomar@uk.ibm.com>
Co-authored-by: Mickael Maison <mickael.maison@uk.ibm.com>
Reviewers: Ryanne Dolan <ryannedolan@gmail.com>, Manikumar Reddy <manikumar.reddy@gmail.com>
Removes references to the old Scala Acl classes from kafka.security.auth (Acl, Operation, ResourceType, Resource, etc.) and replaces them with the Java API. Only the old SimpleAclAuthorizer, the AuthorizerWrapper used to wrap legacy authorizer instances, and tests using SimpleAclAuthorizer continue to use the old API. Deprecates the old Scala API.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>