Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and message.format.version = 0.8
Second rolling bounce: use the latest (default) inter.broker.protocol.version and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use the latest (default) inter.broker.protocol.version and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and message.format.version = 0.9
Plus a couple of variations of these tests using the old/new consumer and no compression / snappy compression (a minimal sketch of the bounce sequence is shown below).
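A minimal sketch of the two-phase bounce described above, with broker configs modeled as plain dicts and `bounce_broker` standing in for a one-node-at-a-time stop/start; none of these names come from the actual test code.

```python
# Illustrative sketch only; not the actual system-test code.
def first_bounce_overrides(old_version):
    """Pin protocol and message format to the old version while the
    upgraded binaries are rolled out."""
    return {
        "inter.broker.protocol.version": old_version,
        "message.format.version": old_version,
    }

def rolling_upgrade(brokers, old_version, bounce_broker):
    # Phase 1: upgrade binaries, but keep the old protocol/message format
    # so not-yet-upgraded brokers can still talk to the upgraded ones.
    for broker in brokers:
        bounce_broker(broker, overrides=first_bounce_overrides(old_version))
    # Phase 2: lift the overrides one broker at a time so everything moves
    # to the latest (default) protocol and message format versions.
    for broker in brokers:
        bounce_broker(broker, overrides={})
```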
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #980 from apovzner/kafka-3201-02
Patch by fpj and benstopford.
Author: flavio junqueira <fpj@apache.org>
Author: Flavio Junqueira <fpj@apache.org>
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Ben Stopford <benstopford@gmail.com>, Geoff Anderson <geoff@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #683 from fpj/KAFKA-2979
The core of this PR is to test enabling security in a running cluster where brokers and clients use different protocols.
Also in this PR are some improvements to the validation process in produce_consume_validate.py which make it easier to work out where missing messages were lost (a rough sketch of the check follows the list):
- Fail fast if producer or consumer stop running.
- If messages go missing, check in the data files to see if the cause was data loss or the consumer missing messages.
- Make it possible for the ConsoleConsumer to log both what it consumed and when it consumed it (and enable this feature in produce_consume_validate tests)
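A rough sketch of the check in the second bullet, assuming the acked, consumed, and on-disk message sets are already available; these names are illustrative, not the produce_consume_validate.py API.

```python
def diagnose_missing(acked, consumed, in_broker_logs):
    """Classify acked-but-unconsumed messages: if a message is absent from
    the broker data files it was lost; if present, the consumer missed it."""
    missing = set(acked) - set(consumed)
    lost = sorted(m for m in missing if m not in in_broker_logs)
    missed_by_consumer = sorted(m for m in missing if m in in_broker_logs)
    return lost, missed_by_consumer

# Example: 3 was acked but never made it to the log (data loss), while 5 is
# in the log but the consumer never reported it (consumer missed it).
lost, missed = diagnose_missing(acked={1, 2, 3, 4, 5},
                                consumed={1, 2, 4},
                                in_broker_logs={1, 2, 4, 5})
assert lost == [3] and missed == [5]
```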
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Gwen Shapira, Geoff Anderson
Closes #667 from benstopford/security-rolling_upgrade-additions
Split Kafka logging into two levels, DEBUG and INFO, and do not collect DEBUG by default.
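A minimal sketch of how a ducktape service can keep the two levels in separate files and gather only the INFO file by default; the paths and log names here are assumptions, not the exact KafkaService definition.

```python
from ducktape.services.service import Service

class KafkaLikeService(Service):
    # ducktape collects the files listed in `logs` after each run; setting
    # collect_default=False leaves the (large) DEBUG log behind unless a
    # test explicitly asks for it.
    logs = {
        "kafka_info_log": {"path": "/mnt/kafka/info.log",
                           "collect_default": True},
        "kafka_debug_log": {"path": "/mnt/kafka/debug.log",
                            "collect_default": False},
    }
```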
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #657 from granders/KAFKA-2927-reduce-log-footprint
Partition re-assignment tests with and without broker failure.
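A condensed sketch of the test flow, with the helpers passed in as parameters since the real test's API is not shown here.

```python
def reassignment_test(start_producer, start_reassignment, await_completion,
                      kill_random_broker, validate, with_broker_failure=False):
    """Produce continuously, move partitions to a new replica assignment,
    optionally hard-kill a broker mid-move, then verify no acked messages
    were lost."""
    producer = start_producer()               # background producer
    reassignment = start_reassignment()       # e.g. via the reassignment tool
    if with_broker_failure:
        kill_random_broker()                  # failure during the move
    await_completion(reassignment)
    validate(producer)                        # every acked message consumed
```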
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>, Geoff Anderson <geoff@confluent.io>
Closes #655 from apovzner/kafka_2896
Tests rolling upgrade from PLAINTEXT to SSL
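Roughly, the upgrade proceeds in phases like the sketch below; `bounce_all` stands in for a rolling restart of every broker with the given settings, and the listener ports are examples, not the test's literal code.

```python
def upgrade_plaintext_to_ssl(bounce_all, switch_clients_to_ssl):
    # 1. Open an SSL port alongside the existing PLAINTEXT listener.
    bounce_all({"listeners": "PLAINTEXT://:9092,SSL://:9093"})
    # 2. Move the clients over to the SSL port.
    switch_clients_to_ssl()
    # 3. Switch inter-broker traffic to SSL as well.
    bounce_all({"listeners": "PLAINTEXT://:9092,SSL://:9093",
                "security.inter.broker.protocol": "SSL"})
    # 4. Finally drop the PLAINTEXT listener entirely.
    bounce_all({"listeners": "SSL://:9093"})
```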
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Geoff Anderson, Ismael Juma
Closes #496 from benstopford/security-upgrade-test
Run tests with SSL, SASL_PLAINTEXT and SASL_SSL. The same security protocol is used for the source and target Kafka clusters.
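A minimal sketch of how such a protocol matrix is typically expressed with ducktape's `@matrix` annotation; the class and parameter names are illustrative, not the actual test.

```python
from ducktape.mark import matrix

class MirrorMakerSecurityTest(object):   # sketch, not the actual test class
    @matrix(security_protocol=["SSL", "SASL_PLAINTEXT", "SASL_SSL"])
    def test_end_to_end(self, security_protocol):
        # The same protocol is applied to both the source and target cluster.
        source_security = target_security = security_protocol
        ...
```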
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson, Ben Stopford
Closes #559 from rajinisivaram/KAFKA-2643
Restores control over log level in system test service class KafkaService.
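For illustration, a sketch of the idea under the assumption that the log level is a constructor parameter rendered into the broker's log4j configuration; the class, parameter, and method names are assumptions, not the real service code.

```python
from ducktape.services.service import Service

class ConfigurableLogLevelService(Service):
    def __init__(self, context, num_nodes, log_level="INFO"):
        super(ConfigurableLogLevelService, self).__init__(context, num_nodes)
        self.log_level = log_level    # the test, not the service, decides

    def render_log4j(self):
        # Substituted into the log4j.properties used by the broker nodes.
        return "log4j.rootLogger=%s, kafkaAppender\n" % self.log_level
```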
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ismael Juma, Ewen Cheslack-Postava
Closes #538 from granders/KAFKA-2820-systest-log-level
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Ben Stopford, Geoff Anderson, Guozhang Wang
Closes #432 from ewencp/kafka-2752-copycat-clean-bounce-test
This PR adds failover to the simple end-to-end mirror maker test (a rough sketch of the failover flow follows the notes below).
Marked as WIP for 2 reasons:
- We may want to add a couple more test cases where Kafka is being used to store offsets
- There appears to be a test failure in the hard failover case
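A rough sketch of the failover flow, with all helpers passed in as placeholders rather than the real test's API.

```python
def mirror_maker_failover_test(start_producer, start_mirror_maker,
                               stop_mirror_maker, restart_mirror_maker,
                               consume_target, validate, clean_shutdown=True):
    """Produce to the source cluster, bounce the mirror maker mid-stream
    (cleanly or with a hard kill), and check that the target cluster still
    receives every acked message at least once."""
    producer = start_producer()
    mm = start_mirror_maker()
    stop_mirror_maker(mm, clean=clean_shutdown)   # the failover point
    restart_mirror_maker(mm)
    validate(producer, consume_target())
```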
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava
Closes #427 from granders/KAFKA-2258-mirrormaker-test
Run sanity check, replication tests and benchmarks with SASL/Kerberos using MiniKdc.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson <geoff@confluent.io>, Jun Rao <junrao@gmail.com>
Closes #358 from rajinisivaram/KAFKA-2644
ewencp gwenshap
This needs some refactoring to avoid the duplicated code between the replication test and the upgrade test, but it is in shape for initial feedback.
I'm interested in feedback on the added `KafkaConfig` class and `kafka_props` file (a sketch of the idea follows the notes below). This addition makes it:
- easier to attach different configs to different nodes (e.g. during broker upgrade process)
- easier to reason about the configuration of a particular node
Notes:
- in the default values of the KafkaConfig class, I removed many properties which were previously in kafka.properties, since most of them were simply set to values that are already the defaults.
- when running a non-trunk VerifiableProducer, I append the trunk tools jar to the classpath and run it with the non-trunk kafka-run-class.sh script
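A rough sketch of the per-node config idea described above; the class below is a stand-in for illustration, not the KafkaConfig added in the patch.

```python
class KafkaConfigSketch(dict):
    """Defaults plus per-node overrides, rendered as server.properties text."""
    DEFAULTS = {
        "port": 9092,
        "socket.send.buffer.bytes": 102400,
    }

    def __init__(self, overrides=None):
        super(KafkaConfigSketch, self).__init__(self.DEFAULTS)
        self.update(overrides or {})

    def render(self):
        return "\n".join("%s=%s" % (k, v) for k, v in sorted(self.items()))

# Different nodes can carry different configs, e.g. during a broker upgrade
# only some nodes pin the old inter-broker protocol version.
node1 = KafkaConfigSketch({"inter.broker.protocol.version": "0.8.2"})
node2 = KafkaConfigSketch()
```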
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Dong Lin, Ewen Cheslack-Postava
Closes #229 from granders/KAFKA-1888-upgrade-test