- Moves all generated docs under /docs/generated
- Generates docs for Protocol, Errors, and ApiKeys
- Adds new protocol.html page
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Gwen Shapira
Closes #970 from granthenke/protocol-doc-wip
By using `getHostString` (introduced in Java 7) instead of `getHostName`, which may trigger a reverse DNS look-up.
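The difference in a nutshell (a minimal, self-contained sketch; the address is illustrative):

```java
import java.net.InetSocketAddress;

public class HostStringDemo {
    public static void main(String[] args) {
        InetSocketAddress addr = new InetSocketAddress("93.184.216.34", 9092);
        // getHostString returns the literal host string with no look-up:
        System.out.println(addr.getHostString());
        // getHostName may perform a reverse DNS look-up to resolve a name:
        System.out.println(addr.getHostName());
    }
}
```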
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jason Gustafson, Grant Henke
Closes #1030 from ijuma/kafka-3352-avoid-dns-reverse-look-ups
Invoking a recent version of `gradle` updates `gradlew.bat` to fix a typo. It's an annoyance at development time as it causes a diff on whatever branch one is working on.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #1034 from ijuma/update-gradlew.bat
JAAS configuration may be set using methods other than the `java.security.auth.login.config` system property, and hence checking the system property doesn't always match where the actual configuration used by Kafka is loaded from.
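For illustration, checking the JAAS `Configuration` actually in effect (rather than the system property) covers all configuration methods; a minimal sketch, assuming a login context named "KafkaServer":

```java
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class JaasCheckDemo {
    public static void main(String[] args) {
        try {
            // Query the JAAS configuration in effect, however it was set:
            AppConfigurationEntry[] entries =
                    Configuration.getConfiguration().getAppConfigurationEntry("KafkaServer");
            System.out.println("KafkaServer context configured: " + (entries != null));
        } catch (SecurityException e) {
            System.out.println("No JAAS configuration could be located: " + e.getMessage());
        }
    }
}
```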
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Sriharsha Chintalapani <harsha@hortonworks.com>, Flavio Junqueira <fpj@apache.org>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #967 from rajinisivaram/KAFKA-3279
Remove test cases testInvalidDefaultRange() and testInvalidDefaultString(). Defaults, if not overridden, are checked on parse, so testing the defaults is unnecessary. This allows you to mark a parameter as required while also setting a validator for that parameter. Added a test case, testNullDefaultWithValidator, that allows a null default with a validator for certain strings.
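A minimal sketch of the pattern this enables (the parameter name and valid values are made up for illustration):

```java
import java.util.Collections;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.config.ConfigDef.ValidString;

public class ConfigDefDemo {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
                // Required (no default) yet still validated once a value is supplied:
                .define("write.mode", Type.STRING, ConfigDef.NO_DEFAULT_VALUE,
                        ValidString.in("append", "overwrite"),
                        Importance.HIGH, "How records are written.");
        // parse() rejects a missing value and any value outside the valid set:
        System.out.println(def.parse(Collections.singletonMap("write.mode", "append")));
    }
}
```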
Author: Jeremy Custenborder <jcustenborder@gmail.com>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #936 from jcustenborder/KAFKA-3237
Per discussion with guozhangwang, `ignore` the failing streams system tests until the fix for KAFKA-3354 is checked in.
Author: Geoff Anderson <geoff@confluent.io>
Reviewers: Guozhang Wang
Closes #1031 from granders/ignore-streams-systest
This patch reuses `max.in.flight.requests.per.connection`. When it equals one, we take it to mean the user wants ordering protection. The current approach is to make sure there is only one batch per partition in flight.
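For example, a producer that needs strict ordering would be configured along these lines (a sketch; serializers and bootstrap address are illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class OrderedProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // A value of 1 is taken to mean the user wants ordering, so at most
        // one batch per partition is in flight at any time:
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }
}
```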
Author: Jiangjie Qin <becket.qin@gmail.com>
Reviewers: Aditya Auradkar <aauradkar@linkedin.com>, Jason Gustafson <jason@confluent.io>, Grant Henke <granthenke@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy.w@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #857 from becketqin/KAFKA-3197
Three main tests:
1. Setup: Producer (0.8) → Kafka Cluster → Consumer (0.8)
First rolling bounce: Set inter.broker.protocol.version = 0.8 and message.format.version = 0.8
Second rolling bounce: use latest (default) inter.broker.protocol.version and message.format.version
2. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use latest (default) inter.broker.protocol.version and message.format.version
3. Setup: Producer (0.9) → Kafka Cluster → Consumer (0.9)
First rolling bounce: Set inter.broker.protocol.version = 0.9 and message.format.version = 0.9
Second rolling bounce: use inter.broker.protocol.version = 0.10 and message.format.version = 0.9
Plus a couple of variations of these tests using the old/new consumer and no compression / snappy compression; a sketch of the broker overrides follows this list.
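For reference, the first rolling bounce in scenario 3 amounts to restarting each broker with overrides along these lines (a sketch using the property names as written above; the system tests set them programmatically):

```
# server.properties overrides for the first rolling bounce (sketch)
inter.broker.protocol.version=0.9
message.format.version=0.9
```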
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #980 from apovzner/kafka-3201-02
…Response with o.a.k.c.requests equivalent
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Ismael Juma
Closes #927 from granthenke/offset-refactor
https://issues.apache.org/jira/browse/KAFKA-1476
Let me know if these kinds of contributions should have their own requisite JIRA opened in advance.
Cheers.
Author: Christian Posta <christian.posta@gmail.com>
Reviewers: Gwen Shapira
Closes #945 from christian-posta/ceposta-tidy-up-consumer-groups-describe
* Include request id when parsing of request header fails
* Don't mute selector on a connection that was closed due to an error (otherwise a second exception is thrown)
* Throw an appropriate exception from `ApiKeys.fromId` if an invalid id is passed
* Fail fast in `AbstractRequest.getRequest` if we fail to handle an instance of `ApiKeys` (if this happens, it's a programmer error and the code in `getRequest` needs to be updated)
I ran into the top two issues while trying to figure out why a connection from a producer to a broker was failing (and it made things harder than necessary). While fixing them, I noticed the third and fourth issues.
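The fail-fast point boils down to throwing on an unhandled enum constant instead of silently returning null; a minimal sketch with a hypothetical enum (not Kafka's actual `ApiKeys`):

```java
public class FailFastDemo {
    enum ApiKey { PRODUCE, FETCH }

    static String getRequest(ApiKey key) {
        switch (key) {
            case PRODUCE: return "ProduceRequest";
            case FETCH:   return "FetchRequest";
            default:
                // Reaching here means a constant was added without updating
                // this method -- a programmer error, so fail fast:
                throw new AssertionError("Unhandled ApiKey: " + key);
        }
    }

    public static void main(String[] args) {
        System.out.println(getRequest(ApiKey.FETCH));
    }
}
```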
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Gwen Shapira
Closes #1017 from ijuma/kafka-3341-improve-error-handling-invalid-requests
In version 0.8.2.1, the old consumer provides the metrics reporter with per-topic consumer metrics under the group 'ConsumerTopicMetrics'. For example:
*.ConsumerTopicMetrics.clientId.[topic name].BytesPerSec.count
*.ConsumerTopicMetrics.clientId.[topic name].MessagesPerSec.count
These consumer metrics are useful since they help us monitor the consume rate for each topic. But the new consumer (0.9.0.0) doesn't expose per-topic metrics anymore, even though I did find sensor objects in the consumer metrics object collecting per-topic metrics.
After investigation, I found that these sensors do not register any KafkaMetrics.
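Once registered, the per-topic sensors would surface through `KafkaConsumer#metrics()` and any configured metrics reporter; a sketch (the "topic" tag name is an assumption for illustration):

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class PerTopicMetricsDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "metrics-demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Print every metric that is tagged with a topic name:
            for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
                if (e.getKey().tags().containsKey("topic"))
                    System.out.println(e.getKey().name() + " " + e.getKey().tags()
                            + " = " + e.getValue().value());
            }
        }
    }
}
```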
Author: Yifan Ying <yying@fitbit.com>
Reviewers: Grant Henke, Jason Gustafson, Guozhang Wang
Closes #939 from happymap/KAFKA-3233
Stop scripts such as kafka-server-stop.sh log the kill command's error message when the processes aren't running.
This PR changes this message to "No kafka server to stop".
Author: Sasaki Toru <sasakitoa@nttdata.co.jp>
Reviewers: Gwen Shapira
Closes #971 from sasakitoa/stop_scripts_says_not_good_message
The fix basically ensures that the throttleTimeSensor is non-null before handing off to record the metric value. We also record a throttle time of 0 so that we don't keep recreating the sensor.
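The guard itself is simple; a sketch with hypothetical names (not the exact Kafka code):

```java
import org.apache.kafka.common.metrics.Sensor;

public final class ThrottleTimeRecorder {
    private ThrottleTimeRecorder() {}

    // Record only if the sensor exists; callers always pass a value
    // (0 when unthrottled) so the sensor is not recreated each time.
    public static void record(Sensor throttleTimeSensor, double throttleTimeMs, long nowMs) {
        if (throttleTimeSensor != null)
            throttleTimeSensor.record(throttleTimeMs, nowMs);
    }
}
```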
Author: Aditya Auradkar <aauradkar@linkedin.com>
Reviewers: Jiangjie Qin <becket.qin@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #989 from auradkar/KAFKA-3310
Adds a gradle task to generate a report of outdated release dependencies:
`gradle dependencyUpdates`
Updates a few minor versions.
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Ismael Juma, Gwen Shapira
Closes #973 from granthenke/outdated-deps
Without this change `./gradlew releaseTarGz` (and its variants) will not include the RocksDB jar, which is required for Kafka Streams, in Kafka's `libs/` folder. The impact is that any Streams job will fail when it runs against a broker that was installed via a release tarball.
guozhangwang junrao: please review.
Author: Michael G. Noll <michael@confluent.io>
Reviewers: Jun Rao <junrao@gmail.com>
Closes #1007 from miguno/trunk-rocksdb-fixes
Adjust the listeners property rather than the port. Following the original instructions would result in all of the brokers being started with the same listeners setting, and so they would fail to work.
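For example, the per-broker override files vary the listener rather than the port (illustrative values in the style of the quickstart):

```
# config/server-1.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dir=/tmp/kafka-logs-1
```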
Author: Duncan Sands <baldrick@free.fr>
Reviewers: Ismael Juma, Gwen Shapira
Closes #1002 from CunningBaldrick/quickstart
Added an offsetBackingStore config to StandaloneConfig and DistributedConfig;
added configs for offset.storage.topic and config.storage.topic to DistributedConfig.
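A sketch of a distributed worker configuration using these settings (topic names are illustrative):

```
# Distributed Connect worker config (sketch)
offset.storage.topic=connect-offsets
config.storage.topic=connect-configs
```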
Author: jinxing <jinxing@fenbi.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #734 from ZoneMayor/trunk-KAFKA-2934
…ssage to assist with figuring this out.
Author: Gwen Shapira <cshapi@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #993 from gwenshap/KAFKA-2944
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Yasuhiro Matsuda <yasuhiro.matsuda@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #990 from guozhangwang/K3311
becketqin, when you get a chance, could you take a look at the patch?
Author: zhuchen1018 <amandazhu19620701@gmail.com>
Reviewers: Grant Henke <granthenke@gmail.com>, Jiangjie Qin <becket.qin@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #969 from zhuchen1018/KAFKA-3257
Also remove some unused imports.
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #992 from guozhangwang/KSExamples
* Change `MessageFormatter.writeTo` to take a `ConsumerRecord`
* Change `MessageReader.readMessage()` to use `ProducerRecord`
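A minimal sketch of a formatter under the new signature (the interface below is a simplified, hypothetical stand-in for Kafka's actual trait):

```java
import java.io.PrintStream;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class KeyValueFormatterDemo {
    // Hypothetical, simplified version of the formatter contract:
    interface MessageFormatter {
        void writeTo(ConsumerRecord<byte[], byte[]> record, PrintStream output);
    }

    static class KeyValueFormatter implements MessageFormatter {
        @Override
        public void writeTo(ConsumerRecord<byte[], byte[]> record, PrintStream output) {
            String key = record.key() == null ? "null" : new String(record.key());
            String value = record.value() == null ? "null" : new String(record.value());
            output.println(key + "\t" + value);
        }
    }
}
```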
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>
Closes #972 from ijuma/kafka-3273-message-formatter-and-reader-resilient
The ability to specify a deserializer for keys and values was added in a recent commit (845c6eae1f), but it contained a few issues.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #987 from ijuma/console-consumer-cleanups
and remove TOTAL_RECORDS_TO_PROCESS
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #985 from ymatsuda/config_params
Observation: when doing `gradlew releaseTarGz` the streams jar was not included in the tarball. Adding a line to include it. ijuma guozhangwang could you please review. Thanks.
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes #984 from enothereska/trunk