<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
"-//Puppy Crawl//DTD Suppressions 1.1//EN"
"http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
    <!-- Note that [/\\] must be used as the path separator for cross-platform support -->
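    <!-- Each "checks" value names (as a regex) the Checkstyle module being silenced, and
         each "files" value is a regular expression matched against the file path.
         Alternation groups suppress one check for several files at once; a purely
         hypothetical entry illustrating the pattern used throughout this file:
         <suppress checks="CyclomaticComplexity"
                   files="(Foo|Bar|Baz).java"/>
         The unescaped dot before "java" technically matches any character; the
         existing entries below follow this loose but harmless convention. -->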
    <!-- Clients -->
    <suppress checks="ClassFanOutComplexity"
              files="(Fetcher|Sender|SenderTest|ConsumerCoordinator|KafkaConsumer|KafkaProducer|Utils|TransactionManagerTest|KafkaAdminClient|NetworkClient).java"/>
    <suppress checks="ClassFanOutComplexity"
              files="(SaslServerAuthenticator|SaslAuthenticatorTest).java"/>
    <suppress checks="ClassFanOutComplexity"
              files="Errors.java"/>
    <suppress checks="ClassFanOutComplexity"
              files="Utils.java"/>
    <suppress checks="ClassFanOutComplexity"
              files="AbstractRequest.java"/>
    <suppress checks="ClassFanOutComplexity"
              files="AbstractResponse.java"/>
    <suppress checks="MethodLength"
              files="KerberosLogin.java|RequestResponseTest.java|ConnectMetricsRegistry.java"/>
    <suppress checks="ParameterNumber"
              files="NetworkClient.java"/>
    <suppress checks="ParameterNumber"
              files="KafkaConsumer.java"/>
    <suppress checks="ParameterNumber"
              files="ConsumerCoordinator.java"/>
    <suppress checks="ParameterNumber"
              files="Fetcher.java"/>
    <suppress checks="ParameterNumber"
              files="Sender.java"/>
    <suppress checks="ParameterNumber"
              files="ConfigDef.java"/>
    <suppress checks="ParameterNumber"
              files="DefaultRecordBatch.java"/>
<suppress checks="ClassDataAbstractionCoupling"
files="(KafkaConsumer|ConsumerCoordinator|Fetcher|KafkaProducer|AbstractRequest|AbstractResponse|TransactionManager|KafkaAdminClient).java"/>
<suppress checks="ClassDataAbstractionCoupling"
files="(Errors|SaslAuthenticatorTest|AgentTest).java"/>
<suppress checks="BooleanExpressionComplexity"
files="(Utils|Topic|KafkaLZ4BlockOutputStream|AclData).java"/>
<suppress checks="CyclomaticComplexity"
files="(ConsumerCoordinator|Fetcher|Sender|KafkaProducer|BufferPool|ConfigDef|RecordAccumulator|KerberosLogin|AbstractRequest|AbstractResponse|Selector|SslFactory|SslTransportLayer|SaslClientAuthenticator|SaslClientCallbackHandler).java"/>
<suppress checks="JavaNCSS"
files="AbstractRequest.java|KerberosLogin.java|WorkerSinkTaskTest.java"/>
<suppress checks="NPathComplexity"
files="(BufferPool|MetricName|Node|ConfigDef|SslTransportLayer|MetadataResponse|KerberosLogin|Selector|Sender|Serdes|Agent|Values|PluginUtils).java"/>
    <!-- clients tests -->
    <suppress checks="ClassDataAbstractionCoupling"
              files="(Sender|Fetcher|KafkaConsumer|Metrics|ConsumerCoordinator|RequestResponse|TransactionManager|KafkaAdminClient)Test.java"/>
    <suppress checks="ClassFanOutComplexity"
              files="(ConsumerCoordinator|KafkaConsumer|RequestResponse|Fetcher|KafkaAdminClient)Test.java"/>
    <suppress checks="ClassFanOutComplexity"
              files="MockAdminClient.java"/>
    <suppress checks="JavaNCSS"
              files="RequestResponseTest.java"/>
    <!-- Connect -->
    <suppress checks="ClassFanOutComplexity"
              files="DistributedHerder(|Test).java"/>
    <suppress checks="MethodLength"
              files="(KafkaConfigBackingStore|RequestResponseTest|WorkerSinkTaskTest).java"/>
    <suppress checks="ParameterNumber"
              files="WorkerSourceTask.java"/>
    <suppress checks="ParameterNumber"
              files="WorkerCoordinator.java"/>
    <suppress checks="ParameterNumber"
              files="ConfigKeyInfo.java"/>
    <suppress checks="ClassDataAbstractionCoupling"
              files="(RestServer|AbstractHerder|DistributedHerder).java"/>
    <suppress checks="BooleanExpressionComplexity"
              files="JsonConverter.java"/>
    <suppress checks="CyclomaticComplexity"
              files="ConnectRecord.java"/>
    <suppress checks="CyclomaticComplexity"
              files="JsonConverter.java"/>
    <suppress checks="CyclomaticComplexity"
              files="FileStreamSourceTask.java"/>
    <suppress checks="CyclomaticComplexity"
              files="DistributedHerder.java"/>
    <suppress checks="CyclomaticComplexity"
              files="KafkaConfigBackingStore.java"/>
<suppress checks="CyclomaticComplexity"
files="(Values|ConnectHeader|ConnectHeaders).java"/>
<suppress checks="JavaNCSS"
files="KafkaConfigBackingStore.java"/>
<suppress checks="JavaNCSS"
files="Values.java"/>
<suppress checks="NPathComplexity"
files="ConnectRecord.java"/>
<suppress checks="NPathComplexity"
files="ConnectSchema.java"/>
<suppress checks="NPathComplexity"
files="FileStreamSourceTask.java"/>
<suppress checks="NPathComplexity"
files="JsonConverter.java"/>
<suppress checks="NPathComplexity"
files="DistributedHerder.java"/>
<suppress checks="NPathComplexity"
files="ConnectHeaders.java"/>
<suppress checks="MethodLength"
files="Values.java"/>
<!-- connect tests-->
<suppress checks="ClassDataAbstractionCoupling"
files="(DistributedHerder|KafkaBasedLog)Test.java"/>
    <!-- Streams -->
    <suppress checks="ClassFanOutComplexity"
              files="(TopologyBuilder|KafkaStreams|KStreamImpl|KTableImpl|StreamThread|StreamTask).java"/>
    <suppress checks="MethodLength"
              files="StreamPartitionAssignor.java"/>
    <suppress checks="ParameterNumber"
              files="StreamTask.java"/>
    <suppress checks="ParameterNumber"
              files="RocksDBWindowStoreSupplier.java"/>
    <suppress checks="ClassDataAbstractionCoupling"
              files="(TopologyBuilder|KStreamImpl|StreamPartitionAssignor|KafkaStreams|KTableImpl).java"/>
    <suppress checks="CyclomaticComplexity"
              files="TopologyBuilder.java"/>
    <suppress checks="CyclomaticComplexity"
              files="StreamPartitionAssignor.java"/>
    <suppress checks="CyclomaticComplexity"
              files="StreamThread.java"/>
    <suppress checks="JavaNCSS"
              files="StreamPartitionAssignor.java"/>
    <suppress checks="NPathComplexity"
              files="ProcessorStateManager.java"/>
    <suppress checks="NPathComplexity"
              files="StreamPartitionAssignor.java"/>
    <suppress checks="NPathComplexity"
              files="StreamThread.java"/>
    <!-- Streams tests -->
    <suppress checks="ClassFanOutComplexity"
              files="(StreamThreadTest|StreamTaskTest|ProcessorTopologyTestDriver).java"/>
    <suppress checks="MethodLength"
              files="KStreamKTableJoinIntegrationTest.java"/>
    <suppress checks="MethodLength"
              files="KStreamKStreamJoinTest.java"/>
    <suppress checks="MethodLength"
              files="KStreamWindowAggregateTest.java"/>
    <suppress checks="ClassDataAbstractionCoupling"
              files=".*[/\\]streams[/\\].*test[/\\].*.java"/>
<suppress checks="BooleanExpressionComplexity"
files="SmokeTestDriver.java"/>
<suppress checks="CyclomaticComplexity"
files="KStreamKStreamJoinTest.java"/>
<suppress checks="CyclomaticComplexity"
files="SmokeTestDriver.java"/>
<suppress checks="JavaNCSS"
files="KStreamKStreamJoinTest.java"/>
<suppress checks="JavaNCSS"
files="SmokeTestDriver.java"/>
<suppress checks="NPathComplexity"
files="KStreamKStreamJoinTest.java"/>
<suppress checks="NPathComplexity"
files="KStreamKStreamLeftJoinTest.java"/>
    <!-- Streams Test-Utils -->
    <suppress checks="ClassFanOutComplexity"
              files="TopologyTestDriver.java"/>
    <suppress checks="ClassDataAbstractionCoupling"
              files="TopologyTestDriver.java"/>
    <!-- Tools -->
    <suppress checks="ClassDataAbstractionCoupling"
              files="VerifiableConsumer.java"/>
    <suppress checks="CyclomaticComplexity"
              files="(StreamsResetter|ProducerPerformance|Agent).java"/>
    <suppress checks="NPathComplexity"
              files="(ProducerPerformance|StreamsResetter|Agent).java"/>
    <suppress checks="ImportControl"
              files="SignalLogger.java"/>
    <suppress checks="IllegalImport"
              files="SignalLogger.java"/>
    <!-- Log4J-Appender -->
    <suppress checks="CyclomaticComplexity"
              files="KafkaLog4jAppender.java"/>
    <suppress checks="NPathComplexity"
              files="KafkaLog4jAppender.java"/>
<suppress checks="JavaNCSS"
files="RequestResponseTest.java"/>
</suppressions>