Fixed an intermittent failure in the test that was caused by a mocked instance not handling expandIsr. Also updated the test to use the offset from the batch to trigger expandIsr more often. With this change, the test may try to acquire the write lock while appends are blocked on the read lock, so the test that uses the write lock was separated out to make it safer.
Reviewers: Jason Gustafson <jason@confluent.io>
When the scheduled refreshTopicPartitions runs, check existing topics in
both source and target clusters in order to compute topic partitions to
be created on target.
If a temporary failure to create the target topic is encountered (e.g.
insufficient number of brokers), on the next refresh the target topic
creation will be re-attempted.
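For illustration, a minimal sketch of the refresh step using the Java Admin client; this is not the actual MirrorMaker code, and the replication factor and helper structure are assumptions:

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch, not the MirrorMaker implementation: list topics on both
// clusters, compute which source topics are missing on the target, and attempt
// to create them. A transient failure (e.g. not enough brokers) simply leaves
// the topic missing, so the next scheduled refresh retries the creation.
public class TopicRefreshSketch {
    static void refreshTopicPartitions(Admin sourceAdmin, Admin targetAdmin) throws Exception {
        Set<String> sourceTopics = sourceAdmin.listTopics().names().get();
        Set<String> targetTopics = targetAdmin.listTopics().names().get();

        Set<String> missingOnTarget = new HashSet<>(sourceTopics);
        missingOnTarget.removeAll(targetTopics);
        if (missingOnTarget.isEmpty())
            return;

        Map<String, TopicDescription> descriptions =
            sourceAdmin.describeTopics(missingOnTarget).all().get();
        List<NewTopic> newTopics = descriptions.values().stream()
            .map(d -> new NewTopic(d.name(), d.partitions().size(), (short) 1)) // replication factor assumed
            .collect(Collectors.toList());

        targetAdmin.createTopics(newTopics).all().get();
    }
}
```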
Co-authored-by: Edoardo Comar <ecomar@uk.ibm.com>
Co-authored-by: Mickael Maison <mickael.maison@uk.ibm.com>
Reviewers: Ryanne Dolan <ryannedolan@gmail.com>, Manikumar Reddy <manikumar.reddy@gmail.com>
Removes references to the old Scala Acl classes from kafka.security.auth (Acl, Operation, ResourceType, Resource, etc.) and replaces them with the Java API. Only the old SimpleAclAuthorizer, the AuthorizerWrapper used to wrap legacy authorizer instances, and tests using SimpleAclAuthorizer continue to use the old API. Deprecates the old Scala API.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Add a new method to KafkaStreams to return an estimate of the lags for
all partitions of all local stores.
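A hedged usage sketch, assuming the KIP-535 method name `allLocalStorePartitionLags` and the `LagInfo` accessors:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.LagInfo;

import java.util.Map;

// Hedged sketch of the KIP-535 lag API: store name -> partition -> lag estimate.
public class StoreLagExample {
    static void printLocalStoreLags(KafkaStreams streams) {
        Map<String, Map<Integer, LagInfo>> lags = streams.allLocalStorePartitionLags();
        lags.forEach((store, partitions) ->
            partitions.forEach((partition, lag) ->
                System.out.printf("store=%s partition=%d offsetLag=%d%n",
                    store, partition, lag.offsetLag())));
    }
}
```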
Implements: KIP-535
Co-authored-by: Navinder Pal Singh Brar <navinder_brar@yahoo.com>
Reviewed-by: John Roesler <vvcephei@apache.org>
During a reassignment, it can happen that the current leader of a partition is demoted and removed from the replica set at the same time. In this case, we rely on the StopReplica request in order to stop replica fetchers and to clear the group coordinator cache. This patch adds similar logic to ensure that the transaction coordinator state cache also gets cleared.
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>
Add a new overload of KafkaStreams#store that allows users
to query standby and restoring stores in addition to active ones.
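A hedged sketch of querying a store with stale (standby/restoring) reads enabled, shown with the `StoreQueryParameters` form of the API; the exact overload added here may differ, and the store name `counts` is just an example:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// Hedged sketch: allow reads to be served by standby/restoring store replicas
// (possibly stale). "counts" is an example store name.
public class StaleStoreQueryExample {
    static Long readCount(KafkaStreams streams, String key) {
        ReadOnlyKeyValueStore<String, Long> store = streams.store(
            StoreQueryParameters
                .fromNameAndType("counts", QueryableStoreTypes.<String, Long>keyValueStore())
                .enableStaleStores());
        return store.get(key);
    }
}
```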
Closes: #7962
Implements: KIP-535
Co-authored-by: Navinder Pal Singh Brar <navinder_brar@yahoo.com>
Reviewed-by: John Roesler <vvcephei@apache.org>
Deprecate existing metadata query APIs in favor of new
ones that include standby hosts as well as partition
information.
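A hedged sketch of the replacement API, assuming the KIP-535 names `queryMetadataForKey` and `KeyQueryMetadata`; the store name `counts` is an example:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyQueryMetadata;

// Hedged sketch of the replacement metadata query: the result exposes the
// active host, the standby hosts, and the partition for the key.
public class KeyMetadataExample {
    static void locateKey(KafkaStreams streams, String key) {
        KeyQueryMetadata metadata =
            streams.queryMetadataForKey("counts", key, Serdes.String().serializer());
        System.out.println("active host:   " + metadata.getActiveHost());
        System.out.println("standby hosts: " + metadata.getStandbyHosts());
        System.out.println("partition:     " + metadata.getPartition());
    }
}
```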
Closes: #7960
Implements: KIP-535
Co-authored-by: Navinder Pal Singh Brar <navinder_brar@yahoo.com>
Reviewed-by: John Roesler <vvcephei@apache.org>
To correctly fence zombie producer transactional commits, we propose to add (member.id, group.instance.id, generation) to the transaction commit protocol, raising it to the same level of correctness guarantee as consumer commits.
Major changes involve:
1. Upgrade the transaction commit protocol with (member.id, group.instance.id, generation). The client will fail if the broker does not support the new protocol.
2. Refactor the group coordinator logic to handle new txn commit errors such as FENCED_INSTANCE_ID, UNKNOWN_MEMBER_ID and ILLEGAL_GENERATION. We loosen the check on transaction commit when member.id is empty, because the member.id check is an add-on safety for producer commits and we also need to preserve backward compatibility with old producer clients that have no member.id information. However, if the producer is equipped with a group.instance.id, it must provide a valid (non-empty) member.id, the same as a consumer commit.
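For illustration, a hedged client-side sketch of a transactional commit that passes the consumer's group metadata via the `sendOffsetsToTransaction` overload taking `ConsumerGroupMetadata`; the surrounding producer/consumer setup is assumed:

```java
import org.apache.kafka.clients.consumer.ConsumerGroupMetadata;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;

// Hedged sketch: pass the consumer's group metadata (member.id,
// group.instance.id, generation) along with the transactional offset commit so
// the coordinator can fence zombie producers. Producer, consumer, and offsets
// are assumed to be created elsewhere.
public class TxnCommitSketch {
    static void commitBatch(KafkaProducer<String, String> producer,
                            KafkaConsumer<String, String> consumer,
                            Map<TopicPartition, OffsetAndMetadata> offsets) {
        producer.beginTransaction();
        // ... send the processed output records here ...
        ConsumerGroupMetadata groupMetadata = consumer.groupMetadata();
        producer.sendOffsetsToTransaction(offsets, groupMetadata);
        producer.commitTransaction();
    }
}
```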
Reviewers: Jason Gustafson <jason@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
This fix makes the LogCleaner tolerant of gaps in the offset sequence. Previously, this could lead to endless loops of cleaning which required manual intervention.
Reviewers: Jun Rao <junrao@gmail.com>, David Arthur <mumrah@gmail.com>
Check for ISR updates using ISR read lock and acquire ISR write lock only if ISR needs to be updated. This avoids lock contention between request handler threads processing log appends on the leader holding the ISR read lock and request handler threads processing replica fetch requests that check/update ISR.
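A generic, hedged sketch of the locking pattern (not the actual Partition code): check under the read lock first and take the write lock only when an update is actually needed, re-checking once it is held:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Generic sketch of the check-then-update pattern described above; not the
// actual Partition implementation.
public class IsrUpdateSketch {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Set<Integer> isr = new HashSet<>();

    boolean maybeExpandIsr(int replicaId) {
        lock.readLock().lock();
        try {
            // Cheap check under the read lock; log appends also only hold the read lock.
            if (isr.contains(replicaId))
                return false;
        } finally {
            lock.readLock().unlock();
        }
        lock.writeLock().lock();
        try {
            // Re-check under the write lock: another thread may have expanded the ISR already.
            return isr.add(replicaId);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```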
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Let the consumer back off and retry the offset fetch when the specific offset topic has pending commits.
The major change lies in the broker-side offset fetch logic, where a request with the WaitTransaction flag set to true is required to back off while a pending transactional commit is ongoing. This prevents an ongoing transaction from being modified by a third party, thus guaranteeing correctness when input partition writers are shuffled.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Jason Gustafson <jason@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Calling Consumer.endOffsets in a loop would throw `TimeoutException: Failed to get offsets by times in 30000ms` after a leader change.
Reviewers: Steven Lu, Guozhang Wang <wangguoz@gmail.com>
Prior to this commit, the broker's Selector read all requests available on the socket when the socket was ready for read, queuing them up as staged receives. This can result in excessive memory usage when the socket read buffer size is large. We mute the channel and stop reading any more data until all the staged requests are processed. This behaviour is slightly inconsistent: for the initial read we drain the socket buffer, allowing it to fill up again, but if data arrives slightly after the initial read, then we don't read from the socket buffer until pending requests are processed.
To avoid holding onto requests for longer than required, this commit removes staged receives and reads one request at a time even if more data is available in the socket buffer. This is especially useful for produce requests, which may be large and may take a long time to process. Additional data read from the socket for SSL is limited to the pre-allocated intermediate SSL buffers.
Reviewers: Jun Rao <junrao@gmail.com>, Ismael Juma <ismael@juma.me.uk>
When `kafka-topics.sh` is used with the --zookeeper option, it prints a confirmation message. This patch adds the same message when --bootstrap-server is used.
Reviewers: Jason Gustafson <jason@confluent.io>
Changed `CreatePartitionMetadata` to not include partition assignment
information since this is not needed to generate a
`CreatePartitionResponse` message.
Reviewers: Vikas Singh <vikas@confluent.io>, Jason Gustafson <jason@confluent.io>
The method updatePartitionReplicaAssignment allows the user to directly
modify the replicas assigned to a partition without also modifying the
addingReplicas and removingReplicas fields. This is not a safe operation
for all inputs. Clients should instead use updatePartitionFullReplicaAssignment.
Reviewers: Vikas Singh <vikas@confluent.io>, Jason Gustafson <jason@confluent.io>
The test blocks requests threads while sending the ACL update requests and occasionally hits request timeout. Updated the test to tolerate timeouts and retry the request for that case. Added an additional check to verify that the requests threads are unblocked when the semaphore is released, ensuring that the timeout is not due to blocked threads.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
Do not initialize `JmxTool` by default when running console consumer. In order to support this, we remove `has_partitions_assigned` and its only usage in an assertion inside `ProduceConsumeValidateTest`, which did not seem to contribute much to the validation.
Reviewers: David Arthur <mumrah@gmail.com>, Jason Gustafson <jason@confluent.io>
Do not wait until updateAssignmentMetadataIfNeeded returns true; instead, call it only once with a timeout of 0. Also, do not return empty if a rebalance is in progress.
Trim the pre-fetched records after long polling since the assignment may have changed.
SubscriptionState also needs to be updated to retain the existing state in assignFromSubscribed if it already exists (similar to assignFromUser), so that the transition from INITIALIZING to FETCHING is no longer needed.
Unit test: this actually took me the most time :)
Reviewers: John Roesler <john@confluent.io>, Bill Bejeck <bill@confluent.io>, Bruno Cadonna <bruno@confluent.io>, Sophie Blee-Goldman <sophie@confluent.io>, Jason Gustafson <jason@confluent.io>, Richard Yu <yohan.richard.yu@gmail.com>, dengziming <dengziming1993@gmail.com>
Existing producer state expiration uses timestamps from data records only and not from transaction markers. This can cause premature producer expiration when the coordinator times out a transaction because we drop the state from existing batches. This in turn can allow the coordinator epoch to revert to a previous value, which can lead to validation failures during log recovery. This patch fixes the problem by also leveraging the timestamp from transaction markers.
We also change the validation logic so that coordinator epoch is verified only for new marker appends. When replicating from the leader and when recovering the log, we only log a warning if we notice that the coordinator epoch has gone backwards. This allows recovery from previous occurrences of this bug.
Finally, this patch fixes one minor issue when loading producer state from the snapshot file. When the only record for a given producer is a control record, the "last offset" field will be set to -1 in the snapshot. We should check for this case when loading to be sure we recover the state consistently.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
It is possible for the user to call `commitTransaction` before a pending `AddPartitionsToTxn` call has returned. If the `AddPartitionsToTxn` call returns an abortable error, we need to cancel any pending batches in the accumulator and we need to fail the pending commit. The problem in this case is that `Sender.runOnce`, after failing the batches, enters `NetworkClient.poll` before it has a chance to cancel the commit. Since there are no pending requests at this time, this will block unnecessarily and prevent completion of the doomed commit. This patch fixes the problem by returning from `runOnce` if any batches have been aborted, which allows the commit to also fail without the delay.
Note that this was the cause of the delay executing `AuthorizerIntegrationTest.testTransactionalProducerTopicAuthorizationExceptionInCommit` compared to other tests in this class. After fixing the bug, the delay is gone and the test time is similar to other test cases in the class.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
With the old SimpleAclAuthorizer, we were handling delete filters that matched a single resource by looking up that resource directly, even if it wasn't in the cache. AclAuthorizerTest.testHighConcurrencyDeletionOfResourceAcls relies on this behaviour and fails intermittently when the cache is not up-to-date. This PR includes the resource from non-matching filters even if it is not in the cache to retain the old behaviour.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Also document the parameters of `ReplicaAssignment` in `ControllerContext`.
Reviewers: Colin Patrick McCabe <cmccabe@apache.org>, Jason Gustafson <jason@confluent.io>
This change updates the CreatePartitions request and response api objects
to use the generated protocol classes.
Reviewers: Jason Gustafson <jason@confluent.io>
Since `retry` expects a by-name parameter, passing a nullary function
causes assertion failures to be ignored. I verified this by introducing
a failing assertion before/after the fix.
I checked other `retry` usages and they looked correct.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>
This PR fixes the regression introduced in 2.4 by two refactoring PRs:
#7249 and #7419
The bug was introduced by a logical path that could lead numPartitionsCandidate to be 0, which is assigned to numPartitions and later checked by setNumPartitions; that subsequent check throws an IllegalArgumentException if numPartitions is 0.
This bug impacts both new 2.4 applications and upgrades to 2.4 for certain types of topology. The example from the original JIRA was imported as a new integration test to guard against such regressions. We also verified, by running this integration test, that without the bug fix the application still fails.
Reviewers: Guozhang Wang <wangguoz@gmail.com>