Multiple inflights means that during rolling bounces or other cluster instability there is an increased likelihood of a previously tried batch expiring in the accumulator. This is a fatal error
for a transactional producer and causes the `TransactionalMessageCopier` to exit. To work around this, we bump the request timeout. We can get rid of this once KIP-91 is merged.
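As a point of reference, here is a minimal sketch of the kind of override applied, assuming illustrative values for the bootstrap servers, transactional id, and timeout (the copier's actual settings may differ):
```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class CopierProducerSketch {
    public static KafkaProducer<byte[], byte[]> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "message-copier-0");  // placeholder id
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        // Bump the request timeout so that batches retried during a rolling bounce
        // do not expire in the accumulator, which is fatal for a transactional producer.
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "60000");            // illustrative value
        return new KafkaProducer<>(props);
    }
}
```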
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#4039 from apurvam/MINOR-bump-request-timeout-in-transactional-message-copier
With these changes, we ensure that the partitions being reassigned start from non-zero offsets. We also ensure that every message in the log has a producerId and sequence number.
This means that it successfully reproduces https://issues.apache.org/jira/browse/KAFKA-6003.
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#4029 from apurvam/KAFKA-6016-add-idempotent-producer-to-reassign-partitions
Since we removed the unused `TRACE` option from `SecurityProtocol`, it now seems safer to expose it from `AuthenticationContext`. Additionally, this patch exposes the javadocs for the classes under `security.auth` and relocates `Login` and `AuthCallbackHandler` to a non-public package.
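For illustration, a hypothetical custom principal builder that reads the now-exposed security protocol from the authentication context (a sketch assuming `securityProtocol()` and `clientAddress()` are available on `AuthenticationContext`):
```java
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;
import org.apache.kafka.common.security.auth.SecurityProtocol;

// Hypothetical principal builder that inspects the connection's security protocol.
public class ProtocolAwarePrincipalBuilder implements KafkaPrincipalBuilder {
    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        SecurityProtocol protocol = context.securityProtocol();
        String name = protocol == SecurityProtocol.PLAINTEXT
                ? "anonymous"
                : context.clientAddress().getHostAddress();
        return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, name);
    }
}
```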
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>, Manikumar Reddy <manikumar.reddy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes#3863 from hachikuji/use-security-protocol-in-auth-context
Adds new metrics to support health checks:
1. Error rates for each request type, per error code
2. Request size and temporary memory size
3. Message conversion rate and time
4. Successful and failed authentication rates
5. ZooKeeper latency and status
6. Client version
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes#3705 from rajinisivaram/KAFKA-5746-new-metrics
Here we introduce client and broker changes to support multiple inflight requests while still guaranteeing idempotence. Two major problems had to be solved (a client-side configuration sketch follows this list):
1. Sequence number management on the client when there are request failures. When a batch fails, future inflight batches will also fail with `OutOfOrderSequenceException`. This must be handled on the client with intelligent sequence reassignment. We must also deal with the fatal failure of some batch: the future batches must get different sequence numbers when they come back.
2. On the broker, when we have multiple inflights, we can get duplicates of multiple old batches. With this patch, we retain the record metadata for the 5 most recent batches.
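A minimal sketch of the client-side configuration that this change makes safe, with illustrative values:
```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class IdempotentProducerConfigSketch {
    public static Properties idempotentProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // With this patch, up to 5 inflight requests per connection are allowed
        // without breaking the idempotence guarantee.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
        return props;
    }
}
```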
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#3743 from apurvam/KAFKA-5494-increase-max-in-flight-for-idempotent-producer
In the coordinator, we should check that 'shutdown' is not true before going to sleep waiting for the condition.
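This is not the coordinator code itself, but a sketch of the general pattern the fix follows: re-check the shutdown flag under the lock before (and after) waiting on the condition.
```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ShutdownAwareWaiter {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition workAvailable = lock.newCondition();
    private boolean shutdown = false;
    private boolean hasWork = false;

    public void awaitWork() throws InterruptedException {
        lock.lock();
        try {
            // Check 'shutdown' before every wait so a shutdown signalled just
            // before we block does not leave us sleeping forever.
            while (!shutdown && !hasWork) {
                workAvailable.await();
            }
        } finally {
            lock.unlock();
        }
    }

    public void beginShutdown() {
        lock.lock();
        try {
            shutdown = true;
            workAvailable.signalAll();
        } finally {
            lock.unlock();
        }
    }
}
```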
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Apurva Mehta <apurva@confluent.io>, Jason Gustafson <jason@confluent.io>
Closes#3755 from cmccabe/KAFKA-5806
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Rajini Sivaram <rajinisivaram@googlemail.com>
Closes#3699 from cmccabe/trogdor-review
Clean up includes:
- Switching try-catch-finally blocks to try-with-resources when possible (illustrated after this list)
- Removing some seemingly unnecessary `SuppressWarnings` annotations
- Resolving some Java warnings
- Closing unclosed Closeable objects
- Removing unused code
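As an example of the try-with-resources conversion mentioned above (a hypothetical file-copy helper, not code taken from this patch):
```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class CopyExample {
    // Before: explicit try-catch-finally with manual close calls.
    // After: try-with-resources closes both streams automatically, even on error.
    public static void copy(String src, String dst) throws IOException {
        try (FileInputStream in = new FileInputStream(src);
             FileOutputStream out = new FileOutputStream(dst)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
```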
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Balint Molnar <balintmolnar91@gmail.com>, Guozhang Wang <wangguoz@gmail.com>, Matthias J. Sax <matthias@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
Closes#3222 from vahidhashemian/minor/code_cleanup_1706
With this patch, the `ProducerPerformance` tool can create transactions of differing durations.
This patch was used to collect the initial set of benchmarks for transaction performance, documented here: https://docs.google.com/spreadsheets/d/1dHY6M7qCiX-NFvsgvaE0YoVdNq26uA8608XIh_DUpI4/edit#gid=282787170
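This is not the tool's actual implementation, but a rough sketch of the pattern being benchmarked: records are grouped into transactions whose length is controlled by a duration option (the parameter name below is a hypothetical stand-in):
```java
import java.util.List;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TransactionalSendSketch {
    // 'transactionDurationMs' is an illustrative stand-in for the new option,
    // not necessarily its exact name in ProducerPerformance.
    public static void sendInTransactions(KafkaProducer<byte[], byte[]> producer,
                                          List<ProducerRecord<byte[], byte[]>> records,
                                          long transactionDurationMs) {
        producer.initTransactions();
        long txnStart = System.currentTimeMillis();
        producer.beginTransaction();
        for (ProducerRecord<byte[], byte[]> record : records) {
            producer.send(record);
            // Commit and start a new transaction once the configured duration elapses.
            if (System.currentTimeMillis() - txnStart >= transactionDurationMs) {
                producer.commitTransaction();
                producer.beginTransaction();
                txnStart = System.currentTimeMillis();
            }
        }
        producer.commitTransaction();
    }
}
```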
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Jun Rao <junrao@gmail.com>
Closes#3400 from apurvam/MINOR-add-transaction-size-to-producre-perf
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Colin P. Mccabe <cmccabe@confluent.io>, Jason Gustafson <jason@confluent.io>
Closes#3339 from ijuma/kafka-5275-admin-client-api-consistency
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Apurva Mehta <apurva@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes#3340 from hachikuji/add-random-aborts-to-system-test
Before this patch, we would call `producerBatch.done` directly from the accumulator when expiring batches. This meant that we would not transition to the `ABORTABLE_ERROR` state in the transaction manager, allowing other transactional requests (including Commits!) to go through, even though the produce failed.
This patch modifies the logic so that we call `Sender.failBatch` on every expired batch, thus ensuring that the transaction state is accurate.
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>, Jason Gustafson <jason@confluent.io>
Closes#3252 from apurvam/KAFKA-5385-fail-transaction-if-batches-expire
This currently fails in multiple ways, one of which is most likely KAFKA-5355, where the concurrent consumer reads duplicates.
During broker bounces, the concurrent consumer misses messages completely. This is another bug.
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#3217 from apurvam/KAFKA-5366-add-concurrent-reads-to-transactions-system-test
It avoids the need to handle protocol downgrades and it's safe (i.e. it will never cause the auto creation of topics).
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes#3220 from ijuma/kafka-5374-admin-client-metadata
junrao: this adds a config `--print.metrics` to control whether ProducerPerformance prints out metrics at the end of the test. If it's okay, the code counterpart for the consumer will be added as well.
Author: huxi <huxi@zhenrongbao.com>
Reviewers: Jun Rao <junrao@gmail.com>
Closes#2860 from amethystic/kafka-5068_print_metrics_in_perf_tests
This is a minor change, but it improves log readability.
Author: Kamal C <kamal.chandraprakash@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes#2709 from Kamal15/util
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Apurva Mehta <apurva@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes#2684 from cmccabe/KAFKA-4895
Author: Matthias J. Sax <matthias@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2303 from mjsax/licenseHeader
Author: Maysam Yabandeh <myabandeh@dropbox.com>
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>
Closes#2474 from ijuma/kafka-4039-deadlock-during-shutdown
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes#2414 from cmccabe/KAFKA-4635
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes#2390 from cmccabe/KAFKA-4630
The current implementation of ProducerPerformance creates a static payload. This is not very useful for testing compression or for testing with production/custom payloads. So we decided to add support for providing a payload file as input to the producer perf test script.
We made the following changes:
1. Added support to provide a payload file which can have the list of payloads that you actually want to send.
2. Moved payload generation inside the send loop for cases when payload file is provided.
Following are the changes to how producer-performance is invoked:
1. You must provide "--record-size" or "--payload-file", but not both, because the record size cannot be guaranteed when you are using custom payloads.
e.g. ./kafka-producer-perf-test.sh --topic test_topic --num-records 100000 --producer-props bootstrap.servers=127.0.0.1:9092 acks=0 buffer.memory=33554432 compression.type=gzip batch.size=10240 linger.ms=10 --throughput -1 --payload-file ./test_payloads --payload-delimiter ,
2. Earlier "--record-size" was a required config, now you must provide exactly one of "--record-size" or "--payload-file". Providing both will result in an error.
3. Support for an additional parameter "--payload-delimiter" has been added which defaults to "\n"
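A sketch of how the payload file might be read and used inside the send loop, under the assumption that the file is split on the delimiter and one payload is chosen at random per record (illustrative only, not the tool's exact code):
```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.regex.Pattern;

public class PayloadFileSketch {
    // Load all payloads from the file, splitting on the configured delimiter.
    public static List<byte[]> loadPayloads(String path, String delimiter) throws IOException {
        String content = new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
        List<byte[]> payloads = new ArrayList<>();
        for (String payload : content.split(Pattern.quote(delimiter))) {
            payloads.add(payload.getBytes(StandardCharsets.UTF_8));
        }
        return payloads;
    }

    // Pick a payload inside the send loop so each record can differ.
    public static byte[] nextPayload(List<byte[]> payloads, Random random) {
        return payloads.get(random.nextInt(payloads.size()));
    }
}
```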
Author: Sandesh K <sandesh.karkera@flipkart.com>
Reviewers: dan norwood <norwood@confluent.io>, Jun Rao <junrao@gmail.com>
Closes#2158 from SandeshKarkera/PerfProducerChanges
Moved the Streams application reset tool from tools to core.
Author: Matthias J. Sax <matthias@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Damian Guy <damian.guy@gmail.com>, Guozhang Wang <wangguoz@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1685 from mjsax/moveResetTool
(cherry picked from commit f2405a73ea2dd4b636832b7f8729fb06a04de1d5)
Signed-off-by: Ewen Cheslack-Postava <me@ewencp.org>
Hi all, this is my first commit to Kafka.
"msec / 1000" yields seconds, doesn't it? I have just fixed a variable name accordingly.
Author: kota-uchida <kota-uchida@cybozu.co.jp>
Reviewers: Guozhang Wang <wangguoz@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1482 from uchan-nos/elapsed-ms
This actually removes joins altogether, as well as references to self.worker_threads, which is best left as an implementation detail in BackgroundThreadService.
This makes use of hachikuji's recent ducktape patch, and updates the ducktape dependency to 0.5.0.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1297 from granders/KAFKA-3581-systest-add-join-timeout
Even if a test calls stop() on console_consumer or verifiable_producer, it is still possible that the producer/consumer will not shut down cleanly and will be killed forcefully after a timeout. It is useful for some tests to know whether a clean shutdown happened or not. This PR adds methods to console_consumer and verifiable_producer to query whether a clean shutdown happened.
Author: Anna Povzner <anna@confluent.io>
Reviewers: Jason Gustafson, Geoff Anderson, Gwen Shapira
Closes#1278 from apovzner/kafka-3597
* Use a fixed `Random` seed in `EndToEndLatency.scala` for determinism
* Add `compression_type` to and remove `consumer_fetch_max_wait` from `end_to_end_latency.py`. The latter was never used.
* Tweak logging of `end_to_end_latency.py` to be similar to `consumer_performance.py`.
* Add `compression_type` to `benchmark_test.py` methods and add `snappy` to `matrix` annotation
* Use randomly generated bytes from a restricted range for `ProducerPerformance` payload. This is a simple fix for now. It can be improved in the PR for KAFKA-3554.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1225 from ijuma/kafka-3558-add-compression_type-benchmark_test.py
Added CompressionTest, which tests four producers, each using a different compression type, with one of them using no compression.
Enabled VerifiableProducer to run producers with different compression types (passed in the constructor). This includes enabling each producer to output unique values, so that the verification process in ProduceConsumeValidateTest is correct (it counts acks from all producers).
Also fixed the console consumer to raise an exception if it sees incorrect consumer output (previously we swallowed the exception, which made the issue hard to debug).
Author: Anna Povzner <anna@confluent.io>
Reviewers: Geoff Anderson, Jason Gustafson
Closes#958 from apovzner/kafka-3214
Note that KAFKA-3077 will be required to run these tests.
Author: Ashish Singh <asingh@cloudera.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#747 from SinghAsDev/KAFKA-3078
Summary of code changes
------------------------------------
1) Added a new Checkstyle rule to flag any code using star imports
2) Fixed all existing code violations that used star imports (illustrated below)
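For illustration, the kind of change involved (a hypothetical class, not taken from the patch):
```java
// Before (now flagged by the Checkstyle rule): import java.util.*;
// After: explicit imports only.
import java.util.HashMap;
import java.util.Map;

public class StarImportExample {
    private final Map<String, Integer> counts = new HashMap<>();

    public void record(String key) {
        counts.merge(key, 1, Integer::sum);
    }
}
```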
Testing
-----------
The local build was successful.
All JUnit tests passed locally.
I state that the contribution is my original work and I license the work to the project under the project's open source license.
Author: manasvigupta <manasvigupta@yahoo.co.in>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#700 from manasvigupta/KAFKA-3009
Gradle does not handle subprojects with the same name (top-level tools vs
connect/tools) properly, making the dependency impossible to express correctly
since we need to move the ThroughputThrottler class into the top level tools
project. Moving the current set of tools into the runtime jar works fine since
they are only used for system tests at the moment.
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Gwen Shapira
Closes#512 from ewencp/kafka-2807-redux
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Ben Stopford, Geoff Anderson, Guozhang Wang
Closes#432 from ewencp/kafka-2752-copycat-clean-bounce-test