Tree: 7ea636c661
src-kafka/clients
Latest commit: ae0c6e58e5 by Jason Gustafson, 5 years ago
The client caches metadata fetched from Metadata requests. Previously, each metadata response overwrote all of the metadata from the previous one, so we could rely on the expectation that the broker only returned the leaderId for a partition if it had connection information available. This behavior changed with KIP-320, since having the leader epoch allows the client to filter out partition metadata which is known to be stale. However, because of this, we can no longer rely on the request-level guarantee of leader availability. There is no mechanism similar to the leader epoch to track the staleness of broker metadata, so we still overwrite all of the broker metadata on each response, which means that the partition metadata can get out of sync with the broker metadata in the client's cache. Hence it is no longer safe to validate inside the `Cluster` constructor that each leader has an associated `Node`.

Fixing this issue was unfortunately not straightforward, because the cache was built to maintain references to broker metadata through the `Node` object at the partition level. To keep the state consistent, each `Node` reference would need to be updated based on the new broker metadata. Instead of doing that, this patch changes the cache so that it is structured more closely after the Metadata response schema: broker node information is maintained at the top level in a single collection, and cached partition metadata references only the id of the broker. To accommodate this, we have removed `PartitionInfoAndEpoch` and altered `MetadataResponse.PartitionMetadata` to eliminate its `Node` references.

One side benefit of the refactor is that it virtually eliminates a hotspot in Metadata request handling, `MetadataCache.getEndpoints` (renamed to `maybeFilterAliveReplicas`). That method was expensive only because it had to build a new collection of `Node` representations for each replica list, information that was doomed to be discarded on serialization, so the whole effort was wasteful. Now we work with the lower-level id lists, and no copy of the replicas is needed (at least for all versions other than 0).

Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>, Ismael Juma <ismael@juma.me.uk>
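To make the restructuring concrete, here is a minimal sketch in plain Java of the layout the commit message describes, using hypothetical names (`IdBasedMetadataCache`, `PartitionMeta`) rather than the actual Kafka client classes. It shows the three ideas the message relies on: broker nodes kept in a single top-level map that is replaced wholesale on every response, partition metadata that carries only the leader's id plus its epoch, and a leader lookup that can legitimately come back empty because partition and broker metadata may arrive in different responses.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/**
 * Minimal sketch of the id-based cache layout described above.
 * Names are illustrative, not the actual Kafka client classes.
 */
public class IdBasedMetadataCache {

    /** Partition metadata keeps only the leader's id, never a Node reference. */
    static final class PartitionMeta {
        final int leaderId;
        final int leaderEpoch;

        PartitionMeta(int leaderId, int leaderEpoch) {
            this.leaderId = leaderId;
            this.leaderEpoch = leaderEpoch;
        }
    }

    /** Broker node, mirroring the connection info carried in a Metadata response. */
    record Node(int id, String host, int port) {}

    // Broker metadata lives at the top level and is overwritten wholesale on
    // each response, since there is no epoch-like staleness marker for brokers.
    private final Map<Integer, Node> nodesById = new HashMap<>();
    private final Map<String, PartitionMeta> partitions = new HashMap<>();

    /** Apply one Metadata response: replace broker info, keep newer partition epochs. */
    void update(Map<Integer, Node> brokers, Map<String, PartitionMeta> partitionUpdates) {
        nodesById.clear();
        nodesById.putAll(brokers);
        partitionUpdates.forEach((tp, incoming) -> partitions.merge(tp, incoming,
                // KIP-320-style staleness check: drop metadata with an older leader epoch.
                (cached, fresh) -> fresh.leaderEpoch >= cached.leaderEpoch ? fresh : cached));
    }

    /**
     * The leader id may not resolve to a Node, because partition and broker
     * metadata can come from different responses; callers must handle an empty
     * result instead of assuming every leader is connectable.
     */
    Optional<Node> currentLeader(String topicPartition) {
        PartitionMeta meta = partitions.get(topicPartition);
        return meta == null ? Optional.empty() : Optional.ofNullable(nodesById.get(meta.leaderId));
    }
}
```

The same id-based principle is what removes the `maybeFilterAliveReplicas` hotspot the message mentions: working directly with replica id lists avoids materializing `Node` collections that would only be thrown away during serialization.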
| Name | Last commit | Age |
|---|---|---|
| src | KAFKA-9261; Client should handle unavailable leader metadata (#7770) | 5 years ago |
| .gitignore | KAFKA-4848: Fix retryWithBackoff deadlock issue | 8 years ago |