This is one of the steps required for Kafka to compile with Java 21.
For each case, one of the following fixes was applied:
1. Suppress warning if fixing would potentially result in an incompatible change (for public classes)
2. Add final to one or more methods so that the escape is not possible
3. Replace method calls with direct field access.
We also fix a couple of compiler warnings related to deprecated references in the `core` module.
See the following for more details regarding the new lint warning:
https://www.oracle.com/java/technologies/javase/21-relnote-issues.html#JDK-8015831
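For illustration, here is a minimal sketch of the pattern the new `this-escape` lint flags (a hypothetical class, not from the Kafka codebase):

```java
class Cache {
    private final int capacity;

    Cache(int capacity) {
        this.capacity = capacity;
        // javac 21 with -Xlint:this-escape warns here: a subclass override of
        // warmUp() could observe a partially constructed object.
        warmUp();
    }

    protected void warmUp() {
        // uses capacity
    }
}
```

Fix 1 annotates the constructor with `@SuppressWarnings("this-escape")`; fix 2 declares `warmUp()` as `final`; fix 3 replaces the `warmUp()` call with the field accesses it performs.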
Reviewers: Divij Vaidya <diviv@amazon.com>, Satish Duggana <satishd@apache.org>, Chris Egerton <chrise@aiven.io>
This PR is part of #13247
It contains changes to rewrite a single test in Java.
The intention is to reduce the changes in the parent PR.
Reviewers: Luke Chen <showuon@gmail.com>, Taras Ledkov <tledkov@apache.org>
TopicMetadataRequestManager is responsible for sending topic metadata requests. The manager manages the API requests and builds each request accordingly. Requests for the same topic are chained to avoid sending duplicate requests.
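As a rough sketch of the chaining idea (the names below are illustrative, not the actual implementation), the manager can keep one in-flight future per topic and have duplicate callers share it:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

class TopicMetadataChainingSketch {
    // One in-flight request per topic; callers asking for the same topic share it.
    private final Map<String, CompletableFuture<List<Integer>>> inflight = new HashMap<>();

    CompletableFuture<List<Integer>> requestTopicMetadata(String topic) {
        return inflight.computeIfAbsent(topic, t ->
                sendMetadataRequest(t).whenComplete((partitions, error) -> inflight.remove(t)));
    }

    private CompletableFuture<List<Integer>> sendMetadataRequest(String topic) {
        return new CompletableFuture<>(); // network send elided in this sketch
    }
}
```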
Co-authored-by: Lianet Magrans <lmagrans@confluent.io>
Co-authored-by: Kirk True <kirk@kirktrue.pro>
Reviewers: Kirk True <kirk@kirktrue.pro>, Lianet Magrans <lianetmr@gmail.com>, Jun Rao <junrao@gmail.com>
Endpoint information provided by KafkaConfig can be incomplete in two ways. One is that endpoints
using ephemeral ports will show up as using port 0. Another is that endpoints binding to 0.0.0.0
will show up with a null or blank hostname. Because we were not accounting for this in controller
registration, it was leading to a null pointer dereference when we tried to register a controller
using an endpoint defined as PLAINTEXT://:9092.
This PR adds a ListenerInfo class which can fix both of the causes of incomplete endpoint
information. It also handles serialization to and from various RPC and record formats.
This allows us to remove a lot of boilerplate code and standardize the handling of listeners
between BrokerServer and ControllerServer.
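A hedged sketch of the hostname half of that normalization (illustrative, not the actual ListenerInfo code; `Endpoint` is the existing org.apache.kafka.common type):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import org.apache.kafka.common.Endpoint;

// A listener such as PLAINTEXT://:9092 yields a null or blank host; substitute
// the machine's canonical hostname so other nodes can reach the listener.
static Endpoint withResolvedHost(Endpoint endpoint) throws UnknownHostException {
    if (endpoint.host() != null && !endpoint.host().trim().isEmpty())
        return endpoint;
    return new Endpoint(endpoint.listenerName().orElse(null),
            endpoint.securityProtocol(),
            InetAddress.getLocalHost().getCanonicalHostName(),
            endpoint.port());
}
```

The port-0 (ephemeral port) case is handled analogously by substituting the port the socket actually bound to.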
Reviewers: David Arthur <mumrah@gmail.com>
We noticed failed tests caused by the verification of the log start offset and local log start offset: #14347 (comment)
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14347/13/#showFailuresLink
After investigation, I found that Scala 2.12 cannot recognize the overriding method of maybeWaitForAtLeastOneSegmentUpload because it uses varargs; there appears to be a gap between Java and Scala varargs handling that causes this issue. We can fix it by using a Seq instead of varargs.
This PR adds back the log start offset and local log start offset verification, and makes sure all tests pass.
Reviewers: Satish Duggana <satishd@apache.org>, Divij Vaidya <diviv@amazon.com>, Kamal Chandraprakash <kamal.chandraprakash@gmail.com>
kafka-acls cli prints the following error message on invalid operation:
$ kafka-acls --bootstrap-server xxx:9095 --remove --allow-principal "User:abc" --allow-host * --operation DESCRIBE_CONFIGS --topic xyz
ResourceType TOPIC only supports operations WRITE,DESCRIBE,ALL,READ,CREATE,ALTER,DELETE,DESCRIBE_CONFIGS,ALTER_CONFIGS
The following command actually works:
$ kafka-acls --bootstrap-server xxx:9095 --remove --allow-principal "User:abc" --allow-host * --operation DescribeConfigs --topic xyz
This PR fixes the invalid formatting of operations in the error message and adds a space after each comma.
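The shape of the fix is roughly the following (a sketch with a hypothetical helper; the real patch lives in the kafka-acls command code):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// Render an enum constant such as DESCRIBE_CONFIGS in the CamelCase form the
// CLI accepts (DescribeConfigs); operations are then joined with ", " rather
// than ",".
static String toCamelCase(String enumName) {
    return Arrays.stream(enumName.split("_"))
            .map(word -> word.charAt(0) + word.substring(1).toLowerCase())
            .collect(Collectors.joining());
}
```

For example, `toCamelCase("DESCRIBE_CONFIGS")` yields `"DescribeConfigs"`, matching the form the command actually parses.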
Resolves cache misses in checkstyle tasks due to absolute paths in configProperties.
Sets configDirectory extension property, which is made available by the checkstyle plugin as ${config_loc} in the checkstyle xml files, as shown in the Checkstyle Gradle docs. The absolute paths set in configProperties are then replaced by relative paths from configDirectory. Because the header and suppression config file names are static and only referenced once, these were removed from configProperties and the file names are given directly in checkstyle.xml
Reviewers: Divij Vaidya <diviv@amazon.com>
Support for using committed offsets to update fetch positions.
This PR includes:
* moving the refreshCommittedOffsets function out of the existing ConsumerCoordinator so it can be reused (no code changes)
* using the above refreshCommittedOffsets to update fetch positions if the consumer has enabled Kafka-based offset management by defining a groupId (a sketch of the idea follows below)
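A minimal sketch of the reuse, with hypothetical helpers standing in for the coordinator fetch and the subscription-state update:

```java
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

abstract class CommittedOffsetsSketch {
    // Hypothetical helpers, not the actual consumer internals.
    abstract Map<TopicPartition, OffsetAndMetadata> fetchCommittedOffsets(Set<TopicPartition> partitions);
    abstract void seekToCommitted(TopicPartition partition, long offset);

    // For each partition that still needs a fetch position, use the offset
    // committed for the groupId as the starting position.
    void updateFetchPositionsFromCommitted(Set<TopicPartition> needingPositions) {
        fetchCommittedOffsets(needingPositions).forEach((partition, committed) -> {
            if (committed != null)
                seekToCommitted(partition, committed.offset());
        });
    }
}
```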
Reviewers: Jun Rao <junrao@gmail.com>
All block cache metrics are being multiplied by the total number of
column families. In a `RocksDBTimestampedStore`, we have 2 column
families (the default, and the timestamped values), which causes all
block cache metrics in these stores to become doubled.
The cause is that our metrics recorder uses `getAggregatedLongProperty`
to fetch block cache metrics. `getAggregatedLongProperty` queries the
property on each column family in the database, and sums the results.
Since we always configure all column families to share the same block
cache, that causes the same block cache to be queried multiple times for
its metrics, with the results added together, effectively multiplying
the real value by the total number of column families.
To fix this, we should simply use `getLongProperty`, which queries a
single column family (the default one). Since all column families share
the same block cache, querying just one of them will give us the correct
metrics for that shared block cache.
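In RocksJava terms, the change boils down to something like this (the property name is one of RocksDB's published properties; `db` is an org.rocksdb.RocksDB handle):

```java
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

static long blockCacheUsage(RocksDB db) throws RocksDBException {
    // Before: summed the property over every column family, double-counting
    // the shared block cache in a RocksDBTimestampedStore (2 column families):
    //   db.getAggregatedLongProperty("rocksdb.block-cache-usage");

    // After: query only the default column family. All column families share
    // one block cache, so a single query reports the correct value.
    return db.getLongProperty("rocksdb.block-cache-usage");
}
```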
Note: the same block cache is shared among all column families of a store
irrespective of whether the user has configured a shared block cache
across multiple stores.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Bruno Cadonna <cadonna@apache.org>
This continues the work of providing the groundwork for the fetch
refactoring work by introducing some new classes and refactoring the
existing code to use the new classes where applicable.
Changes:
* Minor clean up of the events classes to make data immutable,
private, and implement toString().
* Added IdempotentCloser which prevents a resource from being closed
more than once. It's general enough that it could be used elsewhere
in the project, but it's limited to the consumer internals for now
(see the sketch after this list).
* Split core Fetcher code into classes to buffer raw results
(FetchBuffer) and to collect raw results into ConsumerRecords
(FetchCollector). These can be tested and changed in isolation from
the core fetcher logic.
* Added NodeStatusDetector which abstracts methods from
ConsumerNetworkClient so that it and NetworkClientDelegate can be
used in AbstractFetch via the interface instead of using
ConsumerNetworkClient directly.
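A minimal sketch of the idempotent-close idea (illustrative, not the exact class):

```java
import java.util.concurrent.atomic.AtomicBoolean;

class IdempotentCloserSketch {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    // The first caller to close() runs the cleanup; racing or repeated calls
    // are no-ops.
    void close(Runnable onInitialClose) {
        if (closed.compareAndSet(false, true))
            onInitialClose.run();
    }

    boolean isClosed() {
        return closed.get();
    }
}
```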
Reviewers: Jun Rao <junrao@gmail.com>
This contribution extends the TrustManager created by the DefaultSSLEngineFactory class with code that checks, for any invalid certificate, whether it is merely expired but otherwise valid. If that is the case, it extracts the common name and logs it. Apart from that, the trust manager behaves exactly like the default one.
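A sketch of the delegation idea (class name and logging details are illustrative, not the actual implementation):

```java
import java.security.cert.CertificateException;
import java.security.cert.CertificateExpiredException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

class ExpiredCertLoggingTrustManager implements X509TrustManager {
    private final X509TrustManager delegate;

    ExpiredCertLoggingTrustManager(X509TrustManager delegate) {
        this.delegate = delegate;
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        try {
            delegate.checkClientTrusted(chain, authType);
        } catch (CertificateException e) {
            logIfExpired(chain);
            throw e; // rethrow: behave exactly like the default trust manager
        }
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException {
        try {
            delegate.checkServerTrusted(chain, authType);
        } catch (CertificateException e) {
            logIfExpired(chain);
            throw e;
        }
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return delegate.getAcceptedIssuers();
    }

    private void logIfExpired(X509Certificate[] chain) {
        for (X509Certificate cert : chain) {
            try {
                cert.checkValidity();
            } catch (CertificateExpiredException expired) {
                System.err.println("Expired certificate: " + cert.getSubjectX500Principal().getName());
            } catch (Exception ignored) {
                // not-yet-valid or other problems are not this log's concern
            }
        }
    }
}
```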
Extensive unit tests are included in this pull request to ensure that the modified trust manager behaves exactly like the default one, except for logging the expired certificates' common names. The test code generates several certificate chains with valid, invalid and expired end certificates, root CAs and even intermediate CAs.
This contribution is my original work and I license the work to the project under the project's open source license.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
When using the Kafka consumer in Apache Pinot, we saw a couple of WARN messages when trying to create Kafka consumers with the same name. We currently have to add a suffix to create a new MBean, as each new Kafka consumer in the same process registers an MBean. This adds support for skipping the creation of an MBean if it already exists (sketched below).
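A sketch of the skip-if-present idea using the standard JMX API (the ObjectName shown is illustrative):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

static void registerIfAbsent(Object mbean, String name) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName objectName = new ObjectName(name);
    // Skip registration instead of warning when another consumer in the same
    // process already registered an MBean under this name.
    if (!server.isRegistered(objectName))
        server.registerMBean(mbean, objectName);
}
```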
Reviewers: Satish Duggana <satishd@apache.org>, Luke Chen <showuon@gmail.com>
This test extends the existing TransactionsTest. It configures the broker and topic with tiered storage and expects at least one log segment to be uploaded to the remote storage.
Reviewers: Luke Chen <showuon@gmail.com>, Satish Duggana <satishd@apache.org>, Divij Vaidya <diviv@amazon.com>
KIP-890 Part 1 tries to address hanging transactions on old clients. Thus, the produce version cannot be bumped and no new errors can be added. Previously, we used the Java client's notion of retriable and abortable errors -- retriable errors are defined as such by extending the retriable error class, fatal errors are defined explicitly, and abortable errors are the remainder. However, many other clients treat unspecified errors as fatal, which means many retriable errors kill the application.
Stuck between having specific errors for Java clients that are handled correctly (i.e., we retry) and specific fatal errors for cases that should not be fatal, we opted for a middle ground: a non-specific error, with a message in the response to clarify it.
This converts some of the coordinator error codes to NOT_ENOUGH_REPLICAS, which is a known produce response error (a sketch of the mapping follows).
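Roughly, the conversion looks like this (a sketch; the exact set of converted error codes is in the patch, and `Errors` is the existing org.apache.kafka.common.protocol enum):

```java
import org.apache.kafka.common.protocol.Errors;

// Old produce versions cannot carry the newer coordinator errors, so map them
// to a retriable error that every client already understands.
static Errors maybeConvertForOldClients(Errors error) {
    switch (error) {
        case COORDINATOR_NOT_AVAILABLE:
        case COORDINATOR_LOAD_IN_PROGRESS:
        case NOT_COORDINATOR:
            return Errors.NOT_ENOUGH_REPLICAS;
        default:
            return error;
    }
}
```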
It also correctly adds the old errors to the produce response (we were not doing this correctly before).
Added tests for the new errors and messages.
Reviewers: Jason Gustafson <jason@confluent.io>, David Jacot <djacot@confluent.io>
Implementation of the required functionality for resetting and validating positions in the new async consumer.
This PR includes:
1. New async application events ResetPositionsApplicationEvent and ValidatePositionsApplicationEvent, both handled by the same OffsetsRequestManager.
2. Integration of the reset/validate functionality in the new async consumer, to update fetch positions using the partition offsets.
3. Minor refactoring to extract functionality that is reused from both consumer implementations (moving logic without changes from OffsetFetcher into OffsetFetchUtils, and from OffsetsForLeaderEpochClient into OffsetsForLeaderEpochUtils)
Reviewers: Philip Nee <pnee@confluent.io>, Kirk True <kirk@mustardgrain.com>, Jun Rao<junrao@gmail.com>
Initial definition of the core components for maintaining group membership on the client following the new consumer group protocol.
This PR includes:
- Membership management for keeping member state and assignment, based on the heartbeat responses received.
- Assignor selection logic to support server side assignors.
This only includes the basic initial states and transitions, to be extended as we implement the protocol.
This is intended to be used by the heartbeat and assignment request managers that actually build and process the heartbeat and assignment-related requests.
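For illustration, the basic states might be sketched like this (the names are illustrative, not the actual state machine, which is defined by the new protocol work):

```java
// Member state driven by heartbeat responses: the member joins, receives a
// target assignment, reconciles it, and may eventually be fenced or fail.
enum MemberStateSketch {
    UNJOINED,     // not yet part of the group
    STABLE,       // member of the group with an up-to-date assignment
    RECONCILING,  // received a new target assignment, applying it
    FENCED,       // dropped by the coordinator; must rejoin
    FAILED        // unrecoverable error
}
```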
Reviewers: Philip Nee <pnee@confluent.io>, Kirk True <ktrue@confluent.io>, David Jacot <djacot@confluent.io>
Reduce default remote.log.metadata.initialization.retry.interval.ms value to 100ms.
Reviewers: Satish Duggana <satishd@apache.org>, Kamal Chandraprakash<kamal.chandraprakash@gmail.com>
In "ZooKeeper to KRaft Migration" documentation, we are still reporting 3.4 as metadata version. Reworking that phrase to make it more clear and avoid the need to update it in the future.
Signed-off-by: Federico Valeri <fedevaleri@gmail.com>
Reviewers: Luke Chen <showuon@gmail.com>
- Updated the contract for RSM's fetchIndex to throw a ResourceNotFoundException instead of returning an empty InputStream when it does not have a TransactionIndex (see the sketch after this list).
- Updated the LocalTieredStorage implementation to adhere to the new contract.
- Added Unit Tests for the change.
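A simplified sketch of the new contract from an implementation's point of view (the file lookup is hypothetical; ResourceNotFoundException is the existing remote-storage API exception):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.kafka.server.log.remote.storage.ResourceNotFoundException;

// indexFile is the transaction index location a hypothetical per-segment
// lookup resolved inside the RSM implementation.
static InputStream fetchTransactionIndex(File indexFile)
        throws IOException, ResourceNotFoundException {
    if (!indexFile.exists())
        // New contract: signal a missing TransactionIndex explicitly instead
        // of returning an empty InputStream.
        throw new ResourceNotFoundException("No transaction index at " + indexFile);
    return new FileInputStream(indexFile);
}
```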
Reviewers: Satish Duggana <satishd@apache.org>, Luke Chen <showuon@gmail.com>, Divij Vaidya <diviv@amazon.com>, Christo Lolov <lolovc@amazon.com>, Kamal Chandraprakash<kamal.chandraprakash@gmail.com>
This change has the current leader update the log-start-offset before the segments are deleted from remote storage. It provides a best-effort mechanism for followers to receive the log-start-offset from the leader, so they can update their own log-start-offset before becoming the leader.
Reviewers: Kamal Chandraprakash<kamal.chandraprakash@gmail.com>, Divij Vaidya <diviv@amazon.com>, Luke Chen <showuon@gmail.com>, Satish Duggana <satishd@apache.org>
This patch adds integration tests for the OffsetCommit API and the OffsetFetch API. The tests run against both the old and the new group coordinator, and with both the old and the new consumer rebalance protocol.
Reviewers: Ritika Reddy <rreddy@confluent.io>, Justine Olshan <jolshan@confluent.io>
Avoid busy waiting for processable tasks. We need to be careful here not to have the task executors sleep when work is available. We have to make sure to signal on the condition variable any time a task becomes "processable" (a sketch of the pattern follows the summary below). Here are some situations where a task becomes processable:
- Task is unassigned from another TaskExecutor.
- Task state is changed (should only happen while a task is locked inside the polling phase).
- When tasks are unlocked.
- When tasks are added.
- New records available.
- A task is resumed.
In summary:
- We should probably lock tasks when they are paused and unlock them when they are resumed. We should also wake the task executors after every polling phase. This belongs to the StreamThread integration work (separate PR). We add DefaultTaskManager.signalProcessableTasks for this.
- We need to wake the task executors in DefaultTaskManager.unassignTask, DefaultTaskManager.unlockTasks and DefaultTaskManager.add.
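A minimal sketch of the await/signal pattern described above (assuming a lock/condition pair; the names are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class ProcessableTaskSignalingSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition tasksAvailable = lock.newCondition();

    // Task executors block here instead of busy waiting.
    void awaitProcessableTask() throws InterruptedException {
        lock.lock();
        try {
            while (!hasProcessableTask())
                tasksAvailable.await();
        } finally {
            lock.unlock();
        }
    }

    // Called whenever a task becomes processable: unassigned, unlocked, added,
    // resumed, or when new records arrive.
    void signalProcessableTasks() {
        lock.lock();
        try {
            tasksAvailable.signalAll();
        } finally {
            lock.unlock();
        }
    }

    private boolean hasProcessableTask() {
        return false; // placeholder for the real processability check
    }
}
```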
Reviewers: Walker Carlson <wcarlson@confluent.io>, Bruno Cadonna <cadonna@apache.org>
This PR moves GetOffsetShell from core module to tools module with rewriting from Scala to Java.
Reviewers: Federico Valeri <fedevaleri@gmail.com>, Ziming Deng <dengziming1993@gmail.com>, Mickael Maison <mimaison@apache.org>
- The RSM and RLMM classpaths can be empty since they are optional, so the non-empty string validator was removed.
- Fix getting the `localTieredStorage` by brokerId after stopping a broker.
Reviewers: Christo Lolov <lolovc@amazon.com>, Luke Chen <showuon@gmail.com>, Satish Duggana <satishd@apache.org>
This patch allows broker heartbeat events to be completed while a metadata transaction is in-flight.
More generally, this patch allows any RUNS_IN_PREMIGRATION event to complete while the controller
is in pre-migration mode even if the migration transaction is in-flight.
We had a problem with broker heartbeats timing out because they could not be completed while a large
ZK migration transaction was in-flight. This resulted in the controller fencing all the ZK brokers which
has many undesirable downstream effects.
Reviewers: Akhilesh Chaganti <akhileshchg@users.noreply.github.com>, Colin Patrick McCabe <cmccabe@apache.org>
Log at warning level when remote fetch execution is rejected. This can be potentially useful to troubleshoot UNKNOWN_SERVER_ERROR responses for remote fetch requests.
Reviewers: Luke Chen <showuon@gmail.com>, Christo Lolov <lolovc@amazon.com>
On Windows, atomic moves are not allowed if the file has another open handle, e.g.
__cluster_metadata-0\quorum-state: The process cannot access the file because it is being used by another process
at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:92)
at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:403)
at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:293)
at java.base/java.nio.file.Files.move(Files.java:1430)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:949)
at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:932)
at org.apache.kafka.raft.FileBasedStateStore.writeElectionStateToFile(FileBasedStateStore.java:152)
This is fixed by first closing the temporary quorum-state file before attempting to move it.
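The gist of the fix, sketched (Utils.atomicMoveWithFallback is the existing Kafka utility from the stack trace; the paths and data are illustrative):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Path;
import org.apache.kafka.common.utils.Utils;

static void writeAndMove(byte[] data, File tempFile, Path target) throws IOException {
    // try-with-resources closes the handle before the move; Windows rejects an
    // atomic move while another open handle exists on the file.
    try (FileOutputStream out = new FileOutputStream(tempFile)) {
        out.write(data);
        out.flush();
        out.getFD().sync();
    }
    Utils.atomicMoveWithFallback(tempFile.toPath(), target);
}
```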
Reviewers: Colin Patrick McCabe <cmccabe@apache.org>
Co-Authored-By: Renaldo Baur Filho <renaldobf@gmail.com>