* Use a fixed `Random` seed in `EndToEndLatency.scala` for determinism
* Add `compression_type` to `end_to_end_latency.py` and remove `consumer_fetch_max_wait`, which was never used.
* Tweak logging of `end_to_end_latency.py` to be similar to `consumer_performance.py`.
* Add `compression_type` to `benchmark_test.py` methods and add `snappy` to the `matrix` annotation (see the sketch after this list)
* Use randomly generated bytes from a restricted range for `ProducerPerformance` payload. This is a simple fix for now. It can be improved in the PR for KAFKA-3554.
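As a rough sketch of the `matrix` change (the test class, method, and parameter set here are illustrative assumptions, not the actual `benchmark_test.py` code):

```python
# Illustrative only: the class, method, and parameter choices are assumptions,
# not the actual benchmark_test.py code.
from ducktape.mark import matrix
from ducktape.tests.test import Test


class ExampleBenchmarkTest(Test):

    @matrix(compression_type=["none", "snappy"], security_protocol=["PLAINTEXT"])
    def test_producer_throughput(self, compression_type, security_protocol):
        # Each (compression_type, security_protocol) combination becomes its own
        # test run; the value would be forwarded to the producer performance
        # service's configuration.
        pass
```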
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1225 from ijuma/kafka-3558-add-compression_type-benchmark_test.py
Author: Ismael Juma <ismael@juma.me.uk>
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Geoff Anderson <geoff@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1173 from ijuma/kafka-3490-multiple-version-support-perf-tests
This PR (https://github.com/apache/kafka/pull/958) fixed the use of prop_file in the situation where we have multiple producers (before, every producer would append to the config). However, it assumes that self.prop_file is initially "". This is correct for all existing tests, but it precludes us from extending verifiable producer and adding more properties to the producer config (same as console consumer).
This is a small PR that restores the original behavior but also makes verifiable producer use a prop_file method, to be consistent with console consumer (a sketch of this pattern follows after the list below).
A few more fixes to verifiable producer also came up during the review:
-- fixed each_produced_at_least() method
-- more straightforward use of compression types
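As a minimal sketch of the prop_file-as-a-method pattern mentioned above (class and attribute names are hypothetical; the real service has more moving parts):

```python
# Hypothetical names throughout; this only illustrates rendering the producer
# config on demand instead of appending to a shared string.
class ExampleProducerService(object):
    def __init__(self, compression_type="none"):
        self.compression_type = compression_type
        self.extra_props = {}  # e.g. security settings layered in by tests

    def prop_file(self):
        """Render this producer's config file contents from scratch each call."""
        props = {"compression.type": self.compression_type}
        props.update(self.extra_props)
        return "\n".join("%s=%s" % (k, v) for k, v in sorted(props.items()))
```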
granders please review.
Author: Anna Povzner <anna@confluent.io>
Reviewers: Geoff Anderson <geoff@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1192 from apovzner/fix_verifiable_producer
This also fixes KAFKA-3453 and KAFKA-2866.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Gwen Shapira
Closes#1155 from ijuma/kafka-3475-introduce-our-minikdc
Note: This goes only to trunk. 0.10.0 branch will need a separate PR with different versions.
Author: Gwen Shapira <cshapi@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1109 from gwenshap/minor-fix-version-trunk
ewencp gwenshap granders could you have a look please? Thanks.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1096 from enothereska/systest-hotfix-name
becketqin apovzner please have a look. becketqin the test fails when the producer and consumer are 0.9.x and the message format changes on the fly.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Ewen Cheslack-Postava, Ismael Juma, Gwen Shapira
Closes#1070 from enothereska/kafka-3202-format-change-fly
apovzner becketqin please have a look if you can. Thanks.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Anna Povzner, Gwen Shapira
Closes#1059 from enothereska/kafka-3188-compatibility
becketqin have a look if this looks reasonable to you. Thanks.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Geoff Anderson <geoff@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1051 from enothereska/kafka-3371
Per discussion with guozhangwang, `ignore` failing streams system tests until fix for KAFKA-3354 is checked in.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Guozhang Wang
Closes#1031 from granders/ignore-streams-systest
Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and message.format.version = 0.8
Second rolling bounce: use latest (default) inter.broker.protocol.version and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use latest (default) inter.broker.protocol.version and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and message.format.version = 0.9
Plus a couple of variations of these tests using the old/new consumer and no compression / snappy compression. The broker overrides for case 3's two bounces are sketched below.
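As a sketch, the broker overrides for the two bounces of case 3 would look roughly like this (the keys are the broker-level config names; the exact version strings used by the test may differ):

```python
# Case 3, first rolling bounce: pin both versions to the old release.
FIRST_BOUNCE_CONFIG = {
    "inter.broker.protocol.version": "0.9.0",
    "log.message.format.version": "0.9.0",
}

# Case 3, second rolling bounce: lift the inter-broker protocol to 0.10 while
# deliberately holding the message format back at 0.9.
SECOND_BOUNCE_CONFIG = {
    "inter.broker.protocol.version": "0.10.0",
    "log.message.format.version": "0.9.0",
}
```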
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#980 from apovzner/kafka-3201-02
Added CompressionTest that tests four producers, each using a different compression type, with one of them using no compression.
Enabled VerifiableProducer to run producers with different compression types (passed in the constructor). This includes enabling each producer to output unique values, so that the verification process in ProduceConsumeValidateTest is correct (counts acks from all producers).
Also a fix for console consumer to raise an exception if it sees incorrect consumer output (before, we swallowed the exception, so it was hard to debug the issue).
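A hedged sketch of the "unique values per producer" idea (hypothetical helpers, not the actual VerifiableProducer code): each producer stamps its values with its own index so acks can be counted per producer during validation.

```python
def value_for(producer_idx, seqno):
    """Value emitted by producer `producer_idx` for sequence number `seqno`."""
    return "%d.%d" % (producer_idx, seqno)


def acked_counts(acked_values):
    """Count acked messages per producer from values of the form '<idx>.<seq>'."""
    counts = {}
    for value in acked_values:
        idx = int(value.split(".", 1)[0])
        counts[idx] = counts.get(idx, 0) + 1
    return counts
```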
Author: Anna Povzner <anna@confluent.io>
Reviewers: Geoff Anderson, Jason Gustafson
Closes#958 from apovzner/kafka-3214
guozhangwang
* add table aggregate to the system test
* actually create change log partition replica
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes#966 from ymatsuda/enh_systest
Also update `kafka-merge-pr.py` and `tests/kafkatest/__init__.py`.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#963 from ijuma/update-trunk-0.10.0.0-SNAPSHOT
The MessageFormatter being used was only introduced as of 0.9.0.0. The Kafka
version in some tests is changed dynamically, sometimes from trunk back to an
earlier version, so this option must be set based on the version used when the
service is started, not when it is created.
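A sketch of the pattern the fix describes, with illustrative names (not the real ConsoleConsumer service code): the version-dependent option is chosen when the node is started, since node.version may have been changed after the service was created.

```python
def console_consumer_args(node, base_args, formatter_class, v_0_9_0_0):
    """Build one node's command-line args at start time (illustrative helper)."""
    args = list(base_args)
    # The custom formatter only exists from 0.9.0.0 onwards, so gate on the
    # version the node will actually run with, not the one it was created with.
    if node.version >= v_0_9_0_0:
        args += ["--formatter", formatter_class]
    return args
```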
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Geoff Anderson, Ismael Juma, Grant Henke
Closes#770 from ewencp/kafka-3080-system-test-console-consumer-version-failure
Note that KAFKA-3077 will be required to run these tests.
Author: Ashish Singh <asingh@cloudera.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#747 from SinghAsDev/KAFKA-3078
Patch by fpj and benstopford.
Author: flavio junqueira <fpj@apache.org>
Author: Flavio Junqueira <fpj@apache.org>
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Ben Stopford <benstopford@gmail.com>, Geoff Anderson <geoff@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#683 from fpj/KAFKA-2979
The core of this PR is to ensure we evaluate enabling security in a running cluster where we have different broker and client protocols.
Also in this PR are some improvements to the validation process in produce_consume_validate.py which make it easier to work out where missing messages have been lost:
- Fail fast if producer or consumer stop running.
- If messages go missing, check in the data files to see if the cause was data loss or the consumer missing messages (see the sketch after this list).
- Make it possible for the ConsoleConsumer to log both what it consumed and when it consumed it (and enable this feature in produce_consume_validate tests)
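A hedged sketch (hypothetical helper) of the triage step in the second bullet: given the producer-acked values the consumer never reported, split them into broker-side data loss versus consumer misses by checking whether they appear in the brokers' on-disk data files.

```python
def classify_missing(missing_values, values_in_broker_logs):
    """Split missing values into (lost_by_broker, missed_by_consumer)."""
    on_disk = set(values_in_broker_logs)
    lost_by_broker = [v for v in missing_values if v not in on_disk]
    missed_by_consumer = [v for v in missing_values if v in on_disk]
    return lost_by_broker, missed_by_consumer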
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Gwen Shapira, Geoff Anderson
Closes#667 from benstopford/security-rolling_upgrade-additions
Split kafka logging into two levels - DEBUG and INFO, and do not collect DEBUG by default.
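Roughly, this relies on ducktape's per-service `logs` registry and its `collect_default` flag; the class and paths below are illustrative, not the actual KafkaService definition.

```python
from ducktape.services.service import Service


class ExampleKafkaService(Service):
    # Entries with collect_default=True are gathered after every test run;
    # DEBUG output is registered but only collected when explicitly requested.
    logs = {
        "kafka_operational_logs_info": {
            "path": "/mnt/kafka/info",   # assumed path
            "collect_default": True,
        },
        "kafka_operational_logs_debug": {
            "path": "/mnt/kafka/debug",  # assumed path
            "collect_default": False,
        },
    }
```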
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#657 from granders/KAFKA-2927-reduce-log-footprint
Partition re-assignment tests with and without broker failure.
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>, Geoff Anderson <geoff@confluent.io>
Closes#655 from apovzner/kafka_2896
Fixed version sanity checks by updating the kafkatest version to match the kafka version
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#656 from granders/KAFKA-2928-fix-version-sanity-checks
I originally tried to solve the problem by using tempfile, creating and using an scp() utility method that created a random local temp file every time it was called. However, it required passing the miniKdc object to SecurityConfig's setup_node, which looked very invasive, since many tests use this method. Here is the PR for that, which I think we will close: https://github.com/apache/kafka/pull/609
This is the least invasive change that solves conflicts between multiple test jobs.
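For context, a minimal sketch of the tempfile-based scp() helper described above (the approach that was ultimately not merged); the node account copy methods are assumed here.

```python
import tempfile


def scp(src_node, src_path, dst_node, dst_path):
    # Route the file through a uniquely named local temp file so concurrent
    # test jobs on the same machine never clobber each other's copies.
    with tempfile.NamedTemporaryFile() as local_copy:
        src_node.account.copy_from(src_path, local_copy.name)  # assumed API
        dst_node.account.copy_to(local_copy.name, dst_path)    # assumed API
```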
Author: Anna Povzner <anna@confluent.io>
Reviewers: Geoff Anderson
Closes#610 from apovzner/kafka_2851_01
For SSL and SASL replication tests, set security protocol for clients as well.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ben Stopford <benstopford@gmail.com>, Geoff Anderson <geoff@confluent.io>, Jun Rao <junrao@gmail.com>
Closes#563 from rajinisivaram/KAFKA-2642
Tests rolling upgrade from PLAINTEXT to SSL
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Geoff Anderson, Ismael Juma
Closes#496 from benstopford/security-upgrade-test
Run tests with SSL, SASL_PLAINTEXT and SASL_SSL. Same security protocol is used for source and target Kafka.
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Geoff Anderson, Ben Stopford
Closes#559 from rajinisivaram/KAFKA-2643