The phase_two security upgrade test verifies upgrading the inter-broker and client protocols, both to the same value and to different values. The second case currently switches the inter-broker protocol without first enabling it on the brokers, disrupting produce/consume until the whole cluster has been updated. This commit turns the test into a non-disruptive upgrade test that enables the new protocols first (simulating phase one of the upgrade).
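A minimal, dependency-free sketch of the phase ordering the reworked test relies on; the broker dicts and helper name below are illustrative assumptions, not the actual kafkatest code.

```python
def non_disruptive_security_upgrade(brokers, client_protocol, broker_protocol):
    """Illustrative two-phase upgrade: enable the new protocols everywhere
    before any broker switches its inter-broker traffic over."""
    # Phase one: every broker opens listeners for the new protocols while
    # keeping the old ones, so running clients and peers stay connected.
    for broker in brokers:  # each broker's config, modelled as a dict
        listeners = set(broker.setdefault("listeners", ["PLAINTEXT"]))
        listeners.update([client_protocol, broker_protocol])
        broker["listeners"] = sorted(listeners)
        # (a rolling restart of this broker happens here)

    # Phase two: only once every broker accepts the new protocol is
    # inter-broker traffic switched, so produce/consume is never disrupted.
    for broker in brokers:
        broker["security.inter.broker.protocol"] = broker_protocol
        # (second rolling restart here)

# e.g. non_disruptive_security_upgrade([{}, {}, {}], "SASL_SSL", "SSL")
```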
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Apurva Mehta <apurva.1618@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #2589 from rajinisivaram/KAFKA-4779
Kafka brokers have a config called "offsets.topic.replication.factor" that specifies the replication factor for the "__consumer_offsets" topic. The problem is that this config isn't being enforced. If an attempt to create the internal topic is made when there are fewer brokers than "offsets.topic.replication.factor", the topic ends up getting created anyway with the current number of live brokers. The current behavior is pretty surprising when you have clients or tooling running as the cluster is getting set up. Even if your cluster ends up being huge, you'll find out much later that __consumer_offsets was set up with no replication.
The cluster not meeting the "offsets.topic.replication.factor" requirement on the internal topic is another way of saying the cluster isn't fully set up yet.
The right behavior should be for "offsets.topic.replication.factor" to be enforced. Topic creation of the internal topic should fail with GROUP_COORDINATOR_NOT_AVAILABLE until the "offsets.topic.replication.factor" requirement is met. This closely resembles the behavior of regular topic creation when the requested replication factor exceeds the current size of the cluster, as the request fails with error INVALID_REPLICATION_FACTOR.
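An illustrative sketch of the enforcement rule described above (the real check lives in the broker's group coordinator code; the Python names here are placeholders):

```python
def maybe_create_offsets_topic(live_brokers, configured_rf):
    """Refuse to create __consumer_offsets until the configured
    offsets.topic.replication.factor can actually be honoured."""
    if len(live_brokers) < configured_rf:
        # Previously the topic was silently created with a replication
        # factor equal to the number of live brokers; now the request is
        # rejected and clients see GROUP_COORDINATOR_NOT_AVAILABLE.
        raise RuntimeError("GROUP_COORDINATOR_NOT_AVAILABLE: %d live brokers "
                           "< offsets.topic.replication.factor=%d"
                           % (len(live_brokers), configured_rf))
    return {"topic": "__consumer_offsets", "replication_factor": configured_rf}
```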
Author: Onur Karaman <okaraman@linkedin.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #2177 from onurkaraman/KAFKA-3959
Runs sanity test and one replication test using SASL/SCRAM.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes #2355 from rajinisivaram/KAFKA-4590
In reality, we’ll only test older brokers after KAFKA-4462 is fully implemented.
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Apurva Mehta <apurva.1618@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #2263 from cmccabe/KAFKA-4508
Updates to take advantage of soon-to-be-released ducktape features.
Author: Geoff Anderson <geoff@confluent.io>
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #1834 from granders/systest-parallel-friendly
Update system test method signatures and method calls to use the new consumer by default.
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes #2060 from vahidhashemian/KAFKA-4211
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes #2034 from benstopford/throttling-system-test-kafka-changes
In this patch, we test `kafka-reassign-partitions` when throttling is active.
This patch also fixes the following:
1. KafkaService.verify_reassign_partitions did not check whether partition reassignment actually completed successfully (KAFKA-4204). This patch works around those shortcomings so that we get the right signal from this method (see the sketch after this list).
2. ProduceConsumeValidateTest.annotate_missing_messages would call `pop` on the list of missing messages, causing downstream methods to get incomplete data. We fix that in this patch as well.
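To make the first point concrete, a completion check along these lines is what the fixed method needs; the `--verify` output strings assumed below are recalled from the tool and should be treated as an approximation.

```python
import time

def reassignment_complete(verify_output):
    """True only if every partition in the --verify output reports success."""
    lines = [l for l in verify_output.splitlines()
             if "Reassignment of partition" in l]
    return bool(lines) and all("completed successfully" in l for l in lines)

def wait_for_reassignment(run_verify, timeout_sec=300, poll_sec=5):
    """Poll the verify command until the reassignment finishes or times out."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if reassignment_complete(run_verify()):
            return True
        time.sleep(poll_sec)
    raise AssertionError("Partition reassignment did not complete in time")
```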
Author: Apurva Mehta <apurva.1618@gmail.com>
Reviewers: Geoff Anderson <geoff@confluent.io>, Ben Stopford <benstopford@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #1904 from apurvam/throttling-tests
Fix the existing client-id quota test, which currently doesn't configure quota overrides correctly. Add new tests for user and (user, client-id) quota overrides and for default quotas.
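For reference, the overrides exercised here boil down to kafka-configs.sh invocations like the ones built below; the flags and quota property names are recalled from the tool, so treat them as assumptions rather than a spec.

```python
def quota_override_cmd(zookeeper, entities, producer_rate, consumer_rate):
    """Build a kafka-configs.sh command setting quota overrides for the given
    entities, e.g. [("users", "user1")] for a user override,
    [("users", "user1"), ("clients", "clientA")] for a (user, client-id)
    override, or [("users", None)] for the default user quota."""
    cmd = ["kafka-configs.sh", "--zookeeper", zookeeper, "--alter",
           "--add-config",
           "producer_byte_rate=%d,consumer_byte_rate=%d"
           % (producer_rate, consumer_rate)]
    for entity_type, name in entities:
        cmd += ["--entity-type", entity_type]
        cmd += ["--entity-name", name] if name is not None else ["--entity-default"]
    return " ".join(cmd)

# quota_override_cmd("localhost:2181",
#                    [("users", "user1"), ("clients", "clientA")], 1024, 2048)
```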
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Jun Rao <junrao@gmail.com>
Closes #1860 from rajinisivaram/KAFKA-4055
Add an optional configuration for the SecureRandom PRNG implementation, with the default behavior being the same (use the default implementation in the JDK/JRE).
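If memory serves, the broker setting involved is `ssl.secure.random.implementation`; a minimal, hedged example of overriding it (leaving the key out keeps the JDK default):

```python
# Illustrative broker property override; the config name is assumed and the
# value can be any JCA SecureRandom algorithm name.
server_prop_overrides = {
    "ssl.secure.random.implementation": "SHA1PRNG",
}
```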
Author: Todd Palino <Todd Palino>
Reviewers: Grant Henke <granthenke@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy@gmail.com>, Jiangjie Qin <becket.qin@gmail.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
Closes #1747 from toddpalino/trunk
As discussed in https://github.com/apache/kafka/pull/1645, this patch removes an extraneous line from several __init__.py files, as well as from a few other files.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes #1659 from granders/minor-cleanup-init-files
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Geoff Anderson <geoff@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes #1365 from hachikuji/KAFKA-3694
This patch adds logic for the following:
- remove hard-coded paths to various scripts and jars in kafkatest service classes
- provide a mechanism for overriding path resolution logic with a "pluggable" path resolver class (a minimal sketch follows below)
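A minimal sketch of what such a pluggable resolver could look like; the class and method names here are hypothetical, not the interface the patch actually adds.

```python
class DefaultPathResolver(object):
    """Resolves script and jar locations from a Kafka installation root."""
    def __init__(self, install_root="/opt/kafka"):
        self.install_root = install_root

    def script(self, name):
        return "%s/bin/%s" % (self.install_root, name)

    def jar(self, name):
        return "%s/libs/%s" % (self.install_root, name)

# A service class would look the resolver up by name (e.g. from a test
# context property) instead of hard-coding paths:
resolver = DefaultPathResolver("/opt/kafka-0.10.0.0")
reassign_script = resolver.script("kafka-reassign-partitions.sh")
```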
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #1245 from granders/configurable-install-path
Run a sanity test with SASL/PLAIN and a couple of replication tests with SASL/PLAIN and multiple mechanisms.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #1282 from rajinisivaram/KAFKA-2693
becketqin apovzner please have a look. becketqin the test fails when the producer and consumer are 0.9.x and the message format changes on the fly.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Ewen Cheslack-Postava, Ismael Juma, Gwen Shapira
Closes #1070 from enothereska/kafka-3202-format-change-fly
apovzner becketqin please have a look if you can. Thanks.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Anna Povzner, Gwen Shapira
Closes #1059 from enothereska/kafka-3188-compatibility
Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and message.format.version = 0.8
Second rolling bounce: use the latest (default) inter.broker.protocol.version and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use the latest (default) inter.broker.protocol.version and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and message.format.version = 0.9
Plus a couple of variations of these tests using the old/new consumer and no compression / snappy compression.
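Each scenario's pair of rolling bounces amounts to something like the sketch below, written in plain Python with the config names as given above; the broker config dicts and the restart step are placeholders.

```python
def rolling_bounce(brokers, inter_broker_protocol=None, message_format=None):
    """Pin the given versions (None means 'use the broker default') and
    bounce brokers one at a time so the cluster stays available throughout."""
    for config in brokers:  # each broker's property overrides, as a dict
        config.pop("inter.broker.protocol.version", None)
        config.pop("message.format.version", None)
        if inter_broker_protocol is not None:
            config["inter.broker.protocol.version"] = inter_broker_protocol
        if message_format is not None:
            config["message.format.version"] = message_format
        # (restart just this broker here, then move on to the next one)

# Scenario 3 above, for instance:
#   rolling_bounce(brokers, "0.9", "0.9")   # first bounce: pin both to 0.9
#   rolling_bounce(brokers, "0.10", "0.9")  # second bounce: new protocol, old format
```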
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #980 from apovzner/kafka-3201-02
Patch by fpj and benstopford.
Author: flavio junqueira <fpj@apache.org>
Author: Flavio Junqueira <fpj@apache.org>
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Ben Stopford <benstopford@gmail.com>, Geoff Anderson <geoff@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #683 from fpj/KAFKA-2979
The core of this PR is to ensure we evaluate enabling security in a running cluster where the broker and client protocols differ.
Also in this PR are some improvements to the validation process in produce_consume_validate.py which make it easier to work out where missing messages have been lost (a rough sketch of the resulting flow follows the list below):
- Fail fast if producer or consumer stop running.
- If messages go missing, check in the data files to see if the cause was data loss or the consumer missing messages.
- Make it possible for the ConsoleConsumer to log both what it consumed and when it consumed it (and enable this feature in produce_consume_validate tests).
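In outline, the sharpened validation flow reads roughly like this sketch; all names here are illustrative, not the actual produce_consume_validate.py code.

```python
def validate(producer_alive, consumer_alive, acked, consumed, found_in_log):
    """Fail fast on dead processes, then classify any missing messages.
    `found_in_log(value)` reports whether a value is still present in the
    broker data files; the other arguments are message collections."""
    if not producer_alive() or not consumer_alive():
        raise AssertionError("Producer or consumer stopped unexpectedly")

    missing = set(acked) - set(consumed)
    if not missing:
        return

    # Distinguish broker-side data loss from a lagging or skipping consumer
    # by checking whether the values are still present on the brokers.
    lost = [m for m in missing if not found_in_log(m)]
    if lost:
        raise AssertionError("Data loss: %d acked messages absent from the logs"
                             % len(lost))
    raise AssertionError("Consumer missed %d messages that are still on the brokers"
                         % len(missing))
```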
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Gwen Shapira, Geoff Anderson
Closes #667 from benstopford/security-rolling_upgrade-additions
Split Kafka logging into two levels, DEBUG and INFO, and do not collect DEBUG by default.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #657 from granders/KAFKA-2927-reduce-log-footprint
Partition re-assignment tests with and without broker failure.
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>, Geoff Anderson <geoff@confluent.io>
Closes #655 from apovzner/kafka_2896
Tests rolling upgrade from PLAINTEXT to SSL.
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Geoff Anderson, Ismael Juma
Closes #496 from benstopford/security-upgrade-test
Run tests with SSL, SASL_PLAINTEXT and SASL_SSL. Same security protocol is used for source and target Kafka.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson, Ben Stopford
Closes #559 from rajinisivaram/KAFKA-2643
Restores control over log level in system test service class KafkaService.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ismael Juma, Ewen Cheslack-Postava
Closes #538 from granders/KAFKA-2820-systest-log-level
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Ben Stopford, Geoff Anderson, Guozhang Wang
Closes #432 from ewencp/kafka-2752-copycat-clean-bounce-test
This PR adds failover to the simple end-to-end mirror maker test.
Marked as WIP for 2 reasons:
- We may want to add a couple more test cases where Kafka is being used to store offsets
- There appears to be a test failure in the hard failover case
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava
Closes #427 from granders/KAFKA-2258-mirrormaker-test
Run sanity check, replication tests and benchmarks with SASL/Kerberos using MiniKdc.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson <geoff@confluent.io>, Jun Rao <junrao@gmail.com>
Closes #358 from rajinisivaram/KAFKA-2644
This needs some refactoring to avoid the duplicated code between the replication test and the upgrade test, but it is in shape for initial feedback.
I'm interested in feedback on the added `KafkaConfig` class and `kafka_props` file. This addition makes it:
- easier to attach different configs to different nodes (e.g. during broker upgrade process)
- easier to reason about the configuration of a particular node
Notes:
- in the default values in the KafkaConfig class, I removed many properties which were in kafka.properties before, because most of them were already set to the default value (a rough sketch of the class shape follows below)
- when running non-trunk VerifiableProducer, I append the trunk tools jar to the classpath, and run it with the non-trunk kafka-run-class.sh script
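A rough sketch of the per-node config idea described in the notes above; the real KafkaConfig class lives in the kafkatest service code, and this standalone version only illustrates the shape, with made-up defaults.

```python
class KafkaConfig(dict):
    """Per-node broker properties: only non-default values are stored, and
    each node carries its own copy (handy during rolling upgrades)."""
    DEFAULTS = {
        "port": 9092,
        "log.dirs": "/mnt/kafka-logs",  # illustrative default
    }

    def render(self):
        """Render a server.properties-style string for this node."""
        props = dict(self.DEFAULTS)
        props.update(self)
        return "\n".join("%s=%s" % (k, v) for k, v in sorted(props.items()))

# e.g. give one node an older inter-broker protocol during an upgrade:
node_config = KafkaConfig({"inter.broker.protocol.version": "0.8.2"})
print(node_config.render())
```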
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Dong Lin, Ewen Cheslack-Postava
Closes #229 from granders/KAFKA-1888-upgrade-test