Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Grant Henke, Gwen Shapira
Closes#1100 from ijuma/kafka-3426-invalid-protocol-type-errors-invalid-sizes
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1108 from hachikuji/KAFKA-3412
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jun Rao <junrao@gmail.com>
Closes#1091 from granthenke/fetch-error
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Larkin Lowrey <llowrey@gmail.com>, Jun Rao <junrao@gmail.com>
Closes#1103 from ijuma/kafka-3378-follow-up
This is a different implementation from the one in #1085 by Larkin Lowrey (llowrey). The hard part here was actually finding the problem, and all credit goes to llowrey.
This PR also fixes our handling of `finishConnect` (we now check the return value).
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao
Closes#1094 from ijuma/KAFKA-3378-instantly-connecting-socket-channels
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <me@ewencp.org>, Jun Rao <junrao@gmail.com>
Closes#1064 from hachikuji/KAFKA-3394
A new consumer config option 'exclude.internal.topics' was added to allow excluding internal topics when wildcards are used to specify the topics to consume.
The new option takes a boolean value and defaults to 'false' (i.e. no exclusion).
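For illustration only (not part of this patch), a minimal sketch of how a wildcard subscriber might enable the new option; the broker address, group id, deserializers and catch-all pattern below are made-up values:

```java
import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class WildcardConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "wildcard-group");
        // The new option: skip internal topics such as __consumer_offsets when matching wildcards.
        props.put("exclude.internal.topics", "true");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Subscribe to every topic; with the option set, internal topics are filtered out.
        consumer.subscribe(Pattern.compile(".*"), new ConsumerRebalanceListener() {
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
        });
        consumer.close();
    }
}
```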
This patch was co-authored with rajinisivaram, edoardocomar and mimaison.
Author: edoardo <ecomar@uk.ibm.com>
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Ismael Juma, Jun Rao, Gwen Shapira
Closes#1082 from edoardocomar/KAFKA-2832
This contribution is my original work, and I license it under the project's open source license.
CC jkreps
Author: Drausin Wulsin <daedalus2718@gmail.com>
Author: John Doe <daedalus2718@gmail.com>
Reviewers: Jason Gustafson
Closes#1055 from drausin/bugfix/consumer-records-iterator
This is a KIP-42 followup.
Currently, if sending the record fails before it gets to the server, ProducerInterceptor.onAcknowledgement() is called with metadata == null and a non-null exception. However, it is useful to pass the topic and partition, if known, to ProducerInterceptor.onAcknowledgement() as well. This patch ensures that ProducerInterceptor.onAcknowledgement() gets record metadata with the topic and, where possible, the partition. If the partition is not set in 'record' and KafkaProducer.send() fails before the partition gets assigned, then ProducerInterceptor.onAcknowledgement() gets RecordMetadata with partition == -1. The only time ProducerInterceptor.onAcknowledgement() gets null record metadata is when the client passes a null record to KafkaProducer.send().
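A hypothetical interceptor illustrating the behaviour described above; the class name and the logging are made up, while the interface and callback signatures are the public ProducerInterceptor API:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class LoggingProducerInterceptor implements ProducerInterceptor<String, String> {
    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return record; // pass the record through unchanged
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (metadata == null) {
            // Only expected when a null record was passed to KafkaProducer.send()
            System.err.println("Send failed before a record was created: " + exception);
        } else if (exception != null) {
            // The topic is always available; the partition is -1 if it was never assigned
            System.err.println("Send to " + metadata.topic() + "-" + metadata.partition()
                    + " failed: " + exception);
        }
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}
```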
Author: Anna Povzner <anna@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ashish Singh <asingh@cloudera.com>, Jun Rao <junrao@gmail.com>
Closes#1015 from apovzner/kip42-3
Please see https://cwiki.apache.org/confluence/display/KAFKA/KIP-36+Rack+aware+replica+assignment for the overall design.
The update to TopicMetadataRequest/TopicMetadataResponse will be done in a different PR.
Author: Allen Wang <awang@netflix.com>
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>, Grant Henke <granthenke@gmail.com>, Jun Rao <junrao@gmail.com>
Closes#132 from allenxwang/KAFKA-1215
Added topic-partition information to the exception message on batch expiry in RecordAccumulator
Author: MayureshGharat <gharatmayuresh15@gmail.com>
Reviewers: Gwen Shapira, Lin Dong, Ismael Juma
Closes#695 from MayureshGharat/kafka-3013
* Fix and suppress number of unchecked warnings (except for Kafka Streams)
* Add `SafeVarargs` annotation to fix warnings
* Suppress unfixable deprecation warnings
* Replace deprecated by non-deprecated usage where possible
* Avoid reflective calls via structural types in Scala
* Tweak compiler settings for scalac and javac
Once we drop Java 7 and Scala 2.10, we can tweak the compiler settings further so that they warn us about more things.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Grant Henke, Gwen Shapira, Guozhang Wang
Closes#1042 from ijuma/kafka-3375-suppress-depreccated-tweak-compiler
- Moves all generated docs under /docs/generated
- Generates docs for Protocol, Errors, and ApiKeys
- Adds new protocol.html page
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Gwen Shapira
Closes#970 from granthenke/protocol-doc-wip
By using `getHostString` (introduced in Java 7) instead of `getHostName`.
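A standalone illustration (not Kafka code) of the difference between the two methods; the IP address and port are arbitrary:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class HostStringExample {
    public static void main(String[] args) throws Exception {
        InetSocketAddress addr =
                new InetSocketAddress(InetAddress.getByName("192.0.2.1"), 9092);
        // getHostString() returns the literal IP (or the original hostname) without any lookup.
        System.out.println(addr.getHostString());
        // getHostName() may trigger a reverse DNS lookup to resolve the IP to a name.
        System.out.println(addr.getHostName());
    }
}
```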
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jason Gustafson, Grant Henke
Closes#1030 from ijuma/kafka-3352-avoid-dns-reverse-look-ups
JAAS configuration may be set using other methods, so checking the system property doesn't always reflect where the configuration actually used by Kafka is loaded from.
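An illustrative comparison only (not code from this patch): the system property can be unset even when a JAAS configuration has been installed programmatically. The 'KafkaClient' login context name is the one Kafka clients conventionally use:

```java
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class JaasCheckExample {
    public static void main(String[] args) {
        // May be null even when a JAAS configuration is in effect...
        String propertyPath = System.getProperty("java.security.auth.login.config");
        System.out.println("java.security.auth.login.config = " + propertyPath);

        // ...whereas the installed Configuration reflects what will actually be used.
        AppConfigurationEntry[] entries =
                Configuration.getConfiguration().getAppConfigurationEntry("KafkaClient");
        System.out.println("KafkaClient login context present: " + (entries != null));
    }
}
```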
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Sriharsha Chintalapani <harsha@hortonworks.com>, Flavio Junqueira <fpj@apache.org>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#967 from rajinisivaram/KAFKA-3279
Removes the test cases testInvalidDefaultRange() and testInvalidDefaultString(). Defaults, if not overridden, are checked on parse, so testing the defaults up front is unnecessary. This allows a parameter to be marked as required while also setting a validator for that parameter. Adds a test case, testNullDefaultWithValidator, that allows a null default with a validator for certain strings.
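A sketch (not a test from this patch) of the pattern this enables with ConfigDef: a required parameter that also carries a validator. The parameter name and allowed values are made up:

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.config.ConfigDef.ValidString;

public class RequiredWithValidatorExample {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef().define(
                "mode",                       // hypothetical parameter name
                Type.STRING,
                ConfigDef.NO_DEFAULT_VALUE,   // no default, i.e. the parameter is required
                ValidString.in("fast", "safe"),
                Importance.HIGH,
                "Hypothetical mode setting.");

        // Parsing validates the supplied value; omitting "mode" entirely would fail instead.
        Map<String, Object> parsed = def.parse(Collections.singletonMap("mode", "fast"));
        System.out.println(parsed.get("mode"));
    }
}
```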
Author: Jeremy Custenborder <jcustenborder@gmail.com>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#936 from jcustenborder/KAFKA-3237
This patch reuses max.in.flight.requests.per.connection. When it equals one, we take it to mean the user wants ordering guarantees. The current approach is to make sure there is only one batch per partition in flight.
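An illustrative producer configuration (broker address, topic and serializers are made-up values) showing the existing option that now doubles as the ordering switch:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("retries", "3");
        // With a single in-flight request, retries cannot reorder batches for a partition.
        props.put("max.in.flight.requests.per.connection", "1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
```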
Author: Jiangjie Qin <becket.qin@gmail.com>
Reviewers: Aditya Auradkar <aauradkar@linkedin.com>, Jason Gustafson <jason@confluent.io>, Grant Henke <granthenke@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy.w@gmail.com>, Jun Rao <junrao@gmail.com>
Closes#857 from becketqin/KAFKA-3197
* Include request id when parsing of request header fails
* Don't mute selector on a connection that was closed due to an error (otherwise a second exception is thrown)
* Throw appropriate exception from `ApiKeys.fromId` if invalid id is passed
* Fail fast in `AbstractRequest.getRequest` if we fail to handle an instance of `ApiKeys` (if this happens, it's a programmer error and the code in `getRequest` needs to be updated)
I ran into the first two issues while trying to figure out why a connection from a producer to a broker was failing (and they made things harder than necessary). While fixing them, I noticed the third and fourth issues.
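Not the actual Kafka code, but a generic sketch of the fail-fast lookup pattern behind the third and fourth points; the enum and its ids are invented for illustration:

```java
public enum DemoApiKeys {
    PRODUCE(0), FETCH(1), METADATA(3);

    public final int id;

    DemoApiKeys(int id) {
        this.id = id;
    }

    public static DemoApiKeys fromId(int id) {
        for (DemoApiKeys key : values())
            if (key.id == id)
                return key;
        // Throw a descriptive exception rather than letting an array lookup blow up later.
        throw new IllegalArgumentException("Unexpected ApiKeys id `" + id + "`");
    }
}
```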
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Gwen Shapira
Closes#1017 from ijuma/kafka-3341-improve-error-handling-invalid-requests
In version 0.8.2.1, the old consumer provides the metrics reporter with per-topic consumer metrics under the group 'ConsumerTopicMetrics'. For example:
*.ConsumerTopicMetrics.clientId.[topic name].BytesPerSec.count
*.ConsumerTopicMetrics.clientId.[topic name].MessagesPerSec.count
These consumer metrics are useful since they help us monitor the consumption rate for each topic. But the new consumer (0.9.0.0) doesn't expose per-topic metrics anymore, even though I did find sensor objects in the consumer metrics object collecting per-topic metrics.
After investigation, I found that these sensors were not registering any KafkaMetrics.
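A hypothetical check, not from this patch, of how one might list the per-topic metrics once the new consumer registers them; it assumes, as is common for the Java client metrics, that per-topic metrics carry a 'topic' tag:

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.MetricName;

public class PerTopicMetricsExample {
    static void printPerTopicMetrics(KafkaConsumer<?, ?> consumer) {
        for (MetricName name : consumer.metrics().keySet()) {
            // Per-topic metrics are distinguished by a "topic" tag on the metric name.
            if (name.tags().containsKey("topic"))
                System.out.println(name.group() + " / " + name.name() + " " + name.tags());
        }
    }
}
```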
Author: Yifan Ying <yying@fitbit.com>
Reviewers: Grant Henke, Jason Gustafson, Guozhang Wang
Closes#939 from happymap/KAFKA-3233
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Yasuhiro Matsuda <yasuhiro.matsuda@gmail.com>, Jun Rao <junrao@gmail.com>
Closes#990 from guozhangwang/K3311
and remove TOTAL_RECORDS_TO_PROCESS
guozhangwang
Author: Yasuhiro Matsuda <yasuhiro@confluent.io>
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Closes#985 from ymatsuda/config_params
This is my original work and I license the work to the project under the project's open source license.
Author: Richard Whaling <rwhaling@spantree.net>
Reviewers: Jason Gustafson, Gwen Shapira
Closes#968 from rwhaling/docs/kafkaconsumer-heartbeat-doc-improvement
Author: Tom Lee <github@tomlee.co>
Reviewers: Onur Karaman <okaraman@linkedin.com>, Jiangjie Qin <jiangjie@linkedin.com>, Grant Henke <ghenke@cloudera.com>, Jason Gustafson <jason@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Closes#962 from hachikuji/KAFKA-2698
This PR includes a number of clean-ups:
* Code style
* Documentation wording improvements
* Efficiency improvements
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>
Closes#943 from ijuma/kafka-3259-kip-31-32-clean-ups
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#931 from hachikuji/KAFKA-3007
Author: Frank Scholten <frank@frankscholten.nl>
Reviewers: Eno Thereska <eno.thereska@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#941 from frankscholten/tests/cluster-connection-states
See KIP-31 and KIP-32 for details.
A few notes on the patch:
1. This patch implements KIP-31 and KIP-32. It includes the features from KAFKA-3025, KAFKA-3026 and KAFKA-3036.
2. All unit tests passed.
3. The unit tests were run with both the new and old message formats.
4. When message format conversion occurs during consumption, the consumer will not be able to detect the message-size-too-large situation. I did not try to fix this because the situation seems rare and only happens during the migration phase.
Author: Jiangjie Qin <becket.qin@gmail.com>
Author: Ismael Juma <ismael@juma.me.uk>
Author: Jiangjie (Becket) Qin <becket.qin@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Anna Povzner <anna@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>, Jun Rao <junrao@gmail.com>
Closes#764 from becketqin/KAFKA-3025
Author: zhuchen1018 <amandazhu19620701@gmail.com>
Reviewers: Dong Lin <lindong28@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
Closes#911 from zhuchen1018/KAFKA-2757
…ent ID
- Adds NULLABLE_STRING Type to the protocol
- Changes client_id in the REQUEST_HEADER to NULLABLE_STRING with a default of ""
- Fixes server handling of invalid ApiKey requests and other invalid requests
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy.w@gmail.com>
Closes#866 from granthenke/null-clientid
Remove the batch from the RecordAccumulator once it is closed while aborting batches. Make sure we don't accept new batch appends to the RecordAccumulator while the producer is being closed.
Author: Mayuresh Gharat <mgharat@mgharat-ld1.linkedin.biz>
Reviewers: Jiangjie Qin, Ismael Juma, Guozhang Wang
Closes#825 from MayureshGharat/KAFKA-3147
This is most of KIP-42: producer and consumer interceptors (except exposing CRC and record sizes to the interceptors, which is coming as a separate PR, tracked by KAFKA-3196). A configuration sketch for plugging in the interceptors follows the list below.
This PR includes:
1. Add ProducerInterceptor interface and call its callbacks from appropriate places in Kafka Producer.
2. Add ConsumerInterceptor interface and call its callbacks from appropriate places in Kafka Consumer.
3. Add unit tests for interceptor changes.
4. Add an integration test for both mutable consumer and producer interceptors.
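As mentioned above, a sketch of how interceptors are wired in through configuration; the broker address, serializers and the interceptor class name below are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class InterceptorConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Comma-separated list of ProducerInterceptor implementations, invoked in order.
        props.put("interceptor.classes", "com.example.LoggingProducerInterceptor"); // hypothetical class
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
```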
Author: Anna Povzner <anna@confluent.io>
Reviewers: Jason Gustafson, Ismael Juma, Gwen Shapira
Closes#854 from apovzner/kip42
Added an example clarifying the correct way to use explicit offsets with commitSync().
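A sketch along the lines of the example being added; the topic handling and printing here are illustrative, and the key point is committing lastOffset + 1:

```java
import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ManualCommitExample {
    static void processOnce(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (TopicPartition partition : records.partitions()) {
            List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
            for (ConsumerRecord<String, String> record : partitionRecords)
                System.out.println(record.offset() + ": " + record.value());
            long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
            // Commit lastOffset + 1, i.e. the position of the next record to consume.
            consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(lastOffset + 1)));
        }
    }
}
```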
Author: Adam Kunicki <adam@streamsets.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#850 from kunickiaj/KAFKA-3191
The fix itself is simple.
Some explanation on the unit tests: currently, the vast majority of unit tests run with uncompressed messages. I was initially thinking about running all the tests with compressed messages, but it seems uncompressed messages are necessary in many test cases because we need the bytes sent and appended to the log to be predictable. In most other cases it does not matter whether the message is compressed or not, and compression would slow down the unit tests. So I just added one method to BaseConsumerTest to send compressed messages whenever we need it.
Author: Jiangjie Qin <becket.qin@gmail.com>
Reviewers: Aditya Auradkar <aauradkar@linkedin.com>, Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy.w@gmail.com>
Closes#842 from becketqin/KAFKA-3179
KAFKA-3198: Ticket Renewal Thread exits prematurely due to inverted comparison
The `>=` should be `<` since we are actually able to renew if the renewTill time is later than the current ticket expiration.
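A paraphrase of the corrected check; the class, field and method names are illustrative rather than the actual ones from the login code:

```java
public class RenewalCheckExample {
    static boolean canRenew(long ticketEndTimeMillis, long renewTillMillis) {
        // Renewal is possible only while renewTill is later than the ticket's current expiry.
        // Using >= here inverted the logic and made the renewal thread give up too early.
        return ticketEndTimeMillis < renewTillMillis;
    }

    public static void main(String[] args) {
        System.out.println(canRenew(1000L, 2000L)); // true: renewal window still open
        System.out.println(canRenew(2000L, 2000L)); // false: renewTill reached
    }
}
```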
Author: Adam Kunicki <adam@streamsets.com>
Reviewers: Ismael Juma, Gwen Shapira
Closes#858 from kunickiaj/KAFKA-3198