Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and message.format.version = 0.8
Second rolling bounce: use the latest (default) inter.broker.protocol.version and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use the latest (default) inter.broker.protocol.version and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and message.format.version = 0.9
Plus a couple of variations of these tests using the old/new consumer and no compression / snappy compression (a sketch of the broker-side bounce settings follows below).
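A minimal sketch of the two bounce phases in test 2 above, assuming hypothetical broker handles and a rolling_bounce() helper (this is not the actual kafkatest code). Note that the broker-level property is log.message.format.version; message.format.version is the per-topic form.

    # Illustrative only: per-phase broker overrides for the two rolling bounces.
    FIRST_BOUNCE_OVERRIDES = {
        "inter.broker.protocol.version": "0.9.0",   # pin brokers to the old protocol
        "log.message.format.version": "0.9.0",      # keep writing the old message format
    }
    SECOND_BOUNCE_OVERRIDES = {}  # empty: fall back to the latest (default) values

    def rolling_bounce(brokers, overrides):
        """Restart brokers one at a time, applying the given config overrides."""
        for broker in brokers:
            broker.stop()
            broker.config.update(overrides)
            broker.start()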
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#980 from apovzner/kafka-3201-02
Added CompressionTest that tests 4 producers, each using a different compression type, one of them using no compression.
Enabled VerifiableProducer to run producers with different compression types (passed in the constructor). This includes making each producer output unique values, so that the verification process in ProduceConsumeValidateTest is correct (it counts acks from all producers).
Also a fix for the console consumer to raise an exception if it sees incorrect consumer output (previously the exception was swallowed, which made the issue hard to debug). A sketch of the per-producer setup follows below.
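A minimal sketch of the per-producer setup described above, with illustrative names and counts (this is not the actual CompressionTest code): each producer gets its own compression type and a disjoint value range, so acks can be attributed to the right producer during validation.

    # Illustrative only: one verifiable producer per compression type, each with a
    # unique, non-overlapping value range.
    COMPRESSION_TYPES = ["snappy", "gzip", "lz4", "none"]  # assumed set of types
    MESSAGES_PER_PRODUCER = 1000

    def value_range(producer_index):
        start = producer_index * MESSAGES_PER_PRODUCER
        return range(start, start + MESSAGES_PER_PRODUCER)

    producers = [
        {"compression.type": ctype, "values": list(value_range(i))}
        for i, ctype in enumerate(COMPRESSION_TYPES)
    ]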
Author: Anna Povzner <anna@confluent.io>
Reviewers: Geoff Anderson, Jason Gustafson
Closes#958 from apovzner/kafka-3214
The MessageFormatter being used was only introduced as of 0.9.0.0. The Kafka
version in some tests is changed dynamically, sometimes from trunk back to an
earlier version, so this option must be set based on the version used when the
service is started, not when it is created.
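A minimal sketch of the idea behind the fix, with an assumed version check and formatter class name (not the actual ConsoleConsumer service code): the formatter option is decided from the version the node will actually run when it is started.

    # Illustrative only: pass the formatter flag only when the node's Kafka
    # version actually ships the formatter (0.9.0.0 or later).
    def formatter_args(node_version):
        if node_version >= (0, 9, 0, 0):
            return ["--formatter", "kafka.tools.LoggingMessageFormatter"]
        return []  # older versions: fall back to the default formatter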
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Geoff Anderson, Ismael Juma, Grant Henke
Closes#770 from ewencp/kafka-3080-system-test-console-consumer-version-failure
Note that KAFKA-3077 will be required to run these tests.
Author: Ashish Singh <asingh@cloudera.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#747 from SinghAsDev/KAFKA-3078
Patch by fpj and benstopford.
Author: flavio junqueira <fpj@apache.org>
Author: Flavio Junqueira <fpj@apache.org>
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Ben Stopford <benstopford@gmail.com>, Geoff Anderson <geoff@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#683 from fpj/KAFKA-2979
The core of this PR is to ensure we test enabling security in a running cluster where brokers and clients use different protocols.
Also in this PR are some improvements to the validation process in produce_consume_validate.py which make it easier to work out where missing messages have been lost:
- Fail fast if producer or consumer stop running.
- If messages go missing, check in the data files to see if the cause was data loss or the consumer missing messages.
- Make it possible for the ConsoleConsumer to log both what it consumed and when it consumed it (and enable this feature in produce_consume_validate tests)
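A minimal sketch of the second improvement above, assuming sets of message values (not the actual produce_consume_validate.py code): missing messages are split into those the brokers lost and those the consumer skipped.

    # Illustrative validation step: classify missing messages by checking the
    # broker data files, so a failure report says whether data was lost or the
    # consumer simply missed it.
    def classify_missing(acked, consumed, in_broker_data_files):
        missing = set(acked) - set(consumed)
        lost = missing - set(in_broker_data_files)        # never reached the log
        skipped = missing & set(in_broker_data_files)     # on disk, but not consumed
        return lost, skipped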
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Gwen Shapira, Geoff Anderson
Closes#667 from benstopford/security-rolling_upgrade-additions
Split Kafka logging into two levels, DEBUG and INFO, and do not collect DEBUG by default.
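A minimal sketch of the resulting log registration in the ducktape style of a service "logs" dict with collect_default flags; the log names and paths shown are assumptions.

    # Illustrative only: INFO logs are collected on every run, DEBUG logs only
    # when a test explicitly requests them.
    logs = {
        "kafka_operational_logs_info": {
            "path": "/mnt/kafka_operational_logs_info",
            "collect_default": True,
        },
        "kafka_operational_logs_debug": {
            "path": "/mnt/kafka_operational_logs_debug",
            "collect_default": False,
        },
    }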
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#657 from granders/KAFKA-2927-reduce-log-footprint
Partition re-assignment tests with and without broker failure.
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>, Geoff Anderson <geoff@confluent.io>
Closes#655 from apovzner/kafka_2896
I originally tried to solve the problem by using tempfile, creating and using an scp() utility method that created a random local temp file every time it was called. However, that required passing the MiniKdc object to SecurityConfig's setup_node, which looked very invasive, since many tests use this method. Here is the PR for that approach, which I think we will close: https://github.com/apache/kafka/pull/609
This change is the least invasive way to solve conflicts between multiple test jobs.
Author: Anna Povzner <anna@confluent.io>
Reviewers: Geoff Anderson
Closes#610 from apovzner/kafka_2851_01
Tests rolling upgrade from PLAINTEXT to SSL
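A minimal sketch of the two-phase upgrade this test exercises; the property names are standard broker configs, but the exact listener values and bounce mechanics here are assumptions.

    # Phase 1: open an SSL listener while brokers keep talking PLAINTEXT to each
    # other; Phase 2: switch inter-broker traffic to SSL.
    PHASE_ONE = {
        "listeners": "PLAINTEXT://:9092,SSL://:9093",
    }
    PHASE_TWO = {
        "listeners": "PLAINTEXT://:9092,SSL://:9093",
        "security.inter.broker.protocol": "SSL",
    }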
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Geoff Anderson, Ismael Juma
Closes#496 from benstopford/security-upgrade-test
Run tests with SSL, SASL_PLAINTEXT and SASL_SSL. The same security protocol is used for the source and target Kafka clusters.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson, Ben Stopford
Closes#559 from rajinisivaram/KAFKA-2643
Removed a config in tools-log4j.properties which prevented certain service classes from logging at TRACE level.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava, Gwen Shapira
Closes#556 from granders/KAFKA-2820-systest-tool-loglevel
This is a hack which works. Is there a better way?
Build (v2) of the replication_test.py running here: http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/185/
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Geoff Anderson, Gwen Shapira
Closes#520 from benstopford/fix-for-sasl-virtual-box
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ismael Juma, Guozhang Wang
Closes#537 from granders/KAFKA-2845-new-client-old-broker-compatibility
Restores control over log level in system test service class KafkaService.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ismael Juma, Ewen Cheslack-Postava
Closes#538 from granders/KAFKA-2820-systest-log-level
In system tests zookeeper service, it is overkill and space-intensive to collect zookeeper data logs by default. This minor patch turns off default collection.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Jun Rao <junrao@gmail.com>
Closes#504 from granders/minor-zk-change-log-collect
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Ben Stopford, Geoff Anderson, Guozhang Wang
Closes#432 from ewencp/kafka-2752-copycat-clean-bounce-test
This PR adds failover to the simple end-to-end mirror maker test.
Marked as WIP for two reasons:
- We may want to add a couple more test cases where Kafka is being used to store offsets
- There appears to be a test failure in the hard failover case
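A minimal sketch of the clean vs. hard failover distinction mentioned above; the kill callback is a stand-in for however the test actually stops the mirror maker process.

    import signal

    def bounce_mirror_maker(kill_process, clean=True):
        # A clean failover lets the mirror maker shut down and commit offsets;
        # a hard failover kills it outright.
        kill_process(signal.SIGTERM if clean else signal.SIGKILL)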
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava
Closes#427 from granders/KAFKA-2258-mirrormaker-test
Run sanity check, replication tests and benchmarks with SASL/Kerberos using MiniKdc.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson <geoff@confluent.io>, Jun Rao <junrao@gmail.com>
Closes#358 from rajinisivaram/KAFKA-2644
Updated kafka-producer-perf-test.sh to use org.apache.kafka.clients.tools.ProducerPerformance.
Updated build.gradle to add kafka-tools-0.9.0.0-SNAPSHOT.jar to kafka/libs folder.
Author: Manikumar reddy O <manikumar.reddy@gmail.com>
Reviewers: Gwen Shapira, Ismael Juma
Closes#242 from omkreddy/KAFKA-2562
ewencp gwenshap
This needs some refactoring to avoid the duplicated code between the replication test and the upgrade test, but it is in shape for initial feedback.
I'm interested in feedback on the added `KafkaConfig` class and `kafka_props` file. This addition makes it:
- easier to attach different configs to different nodes (e.g. during broker upgrade process)
- easier to reason about the configuration of a particular node
Notes:
- in the default values in the KafkaConfig class, I removed many properties which were previously in kafka.properties, because most of them were simply set to what is already the default value.
- when running non-trunk VerifiableProducer, I append the trunk tools jar to the classpath, and run it with the non-trunk kafka-run-class.sh script
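A minimal sketch of the kind of per-node config object described above, with assumed defaults and a render() helper (not necessarily the API that was merged).

    # Illustrative only: a dict-backed config with sparse defaults, so per-node
    # overrides (e.g. during a broker upgrade) stay small and easy to read.
    class KafkaConfig(dict):
        DEFAULTS = {
            "port": 9092,
            "zookeeper.connection.timeout.ms": 2000,
        }

        def __init__(self, overrides=None):
            super(KafkaConfig, self).__init__(self.DEFAULTS)
            self.update(overrides or {})

        def render(self):
            """Render as a server.properties-style string."""
            return "\n".join("%s=%s" % (k, v) for k, v in sorted(self.items()))

    # e.g. a per-node override during an upgrade:
    upgraded_node_config = KafkaConfig({"inter.broker.protocol.version": "0.8.2"})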
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Dong Lin, Ewen Cheslack-Postava
Closes#229 from granders/KAFKA-1888-upgrade-test
This adds coordination between DistributedHerders using the generalized consumer
support, allowing automatic balancing of connectors and tasks across workers. A
few pieces that require interaction between workers (resolving config
inconsistencies, forwarding of configuration changes to the leader worker) are
incomplete because they require REST API support to implement properly.
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Jason Gustafson, Gwen Shapira
Closes#321 from ewencp/kafka-2371-distributed-herder
Added a --timeout-ms argument to ConsoleConsumer that works with both the old and new consumer. Also modified the ducktape ConsoleConsumer service to use this argument instead of the consumer.timeout.ms config, which works only with the old consumer.
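A minimal sketch of why --timeout-ms helps here, with an illustrative command builder and placeholder host/port values (not the actual ConsoleConsumer service code): the flag can be passed regardless of which consumer is launched, while consumer.timeout.ms is a config only the old consumer honors.

    # Illustrative command assembly only.
    def console_consumer_cmd(topic, new_consumer, timeout_ms=10000):
        cmd = ["kafka-console-consumer.sh", "--topic", topic,
               "--timeout-ms", str(timeout_ms)]   # works with old and new consumer
        if new_consumer:
            cmd += ["--new-consumer", "--bootstrap-server", "localhost:9092"]
        else:
            cmd += ["--zookeeper", "localhost:2181"]
        return cmd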
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Aditya Auradkar, Ismael Juma, Guozhang Wang
Closes#274 from rajinisivaram/KAFKA-2603
granders Can you take a look at this quota system test?
Author: Dong Lin <lindong28@gmail.com>
Reviewers: Geoff Anderson, Ewen Cheslack-Postava
Closes#275 from lindong28/KAFKA-2527
This also adds some other needed infrastructure for distributed Copycat, most
importantly the DistributedHerder, and refactors some code for handling
Kafka-backed logs into KafkaBasedLog since this is shared between offset and
config storage.
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Gwen Shapira, James Cheng
Closes#241 from ewencp/kafka-2372-copycat-distributed-config
Parametrize the console consumer sanity test, replication tests and benchmark tests to run with both PLAINTEXT and SSL.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson, Ewen Cheslack-Postava, Guozhang Wang
Closes#271 from rajinisivaram/KAFKA-2581
ewencp
The changes here are smaller than they look - mostly refactoring/cleanup.
- ConsumerPerformanceService: added new_consumer flag, and exposed more command-line settings
- benchmark.py: refactored to use `parametrize` and `matrix` - this reduced some amount of repeated code
- benchmark.py: added consumer performance tests with new consumer (using `parametrize`)
- benchmark.py: added more detailed test descriptions
- performance.py: broke into separate files
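A minimal sketch of the parametrize/matrix style referred to above, using the ducktape decorators; the class, test names, arguments and values are illustrative, not the actual benchmark.py code.

    from ducktape.mark import matrix, parametrize

    class BenchmarkSketch(object):
        @parametrize(new_consumer=True)
        @parametrize(new_consumer=False)
        def test_consumer_throughput(self, new_consumer=False):
            pass  # one run per @parametrize decorator

        @matrix(security_protocol=["PLAINTEXT", "SSL"])
        def test_producer_throughput(self, security_protocol):
            pass  # one run per value in the matrix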
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava, Jason Gustafson, Gwen Shapira
Closes#179 from granders/KAFKA-2489-benchmark-new-consumer
This patch makes it possible to publish kafkatest (system test package) to pypi and use it as a library in other projects by:
- including necessary static resources with the package
- renaming the version to conform w/PEP 440, since python packaging tools reject the current version name
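A minimal sketch of what the packaging side of this looks like, with assumed file contents (not the actual kafkatest setup.py): static resources are shipped with the package and the version string is made PEP 440-compliant.

    from setuptools import setup, find_packages

    setup(
        name="kafkatest",
        version="0.9.0.dev0",        # PEP 440-compliant; a Java-style "-SNAPSHOT" suffix is rejected
        packages=find_packages(),
        include_package_data=True,   # pull in static resources listed in MANIFEST.in
    )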
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava, Gwen Shapira
Closes#173 from granders/minor-kafkatest-add-manifest