…list is not supported for new consumers
Author: Ashish Singh <asingh@cloudera.com>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #923 from SinghAsDev/KAFKA-3243
This PR includes a number of clean-ups:
* Code style
* Documentation wording improvements
* Efficiency improvements
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>
Closes #943 from ijuma/kafka-3259-kip-31-32-clean-ups
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #931 from hachikuji/KAFKA-3007
KAFKA-3242: minor rename / logging change so that references to 'adding partitions' indicate 'modifying partitions'
Author: Ben Stopford <benstopford@gmail.com>
Reviewers: Grant Henke
Closes #924 from benstopford/small_changes
Author: zhuchen1018 <amandazhu19620701@gmail.com>
Reviewers: Grant Henke <granthenke@gmail.com>, Joel Koshy <jjkoshy.w@gmail.com>, Dong Lin <lindong28@gmail.com>
Closes #935 from zhuchen1018/minor-remove-unused-imports
See KIP-31 and KIP-32 for details.
A few notes on the patch:
1. This patch implements KIP-31 and KIP-32. It includes the features in KAFKA-3025, KAFKA-3026 and KAFKA-3036.
2. All unit tests passed.
3. The unit tests were run with both the new and the old message format.
4. When message format conversion occurs during consumption, the consumer will not be able to detect a message-size-too-large situation. I did not try to fix this because the situation seems rare and only happens during the migration phase.
Author: Jiangjie Qin <becket.qin@gmail.com>
Author: Ismael Juma <ismael@juma.me.uk>
Author: Jiangjie (Becket) Qin <becket.qin@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Anna Povzner <anna@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #764 from becketqin/KAFKA-3025
Author: zhuchen1018 <amandazhu19620701@gmail.com>
Reviewers: Dong Lin <lindong28@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
Closes #911 from zhuchen1018/KAFKA-2757
KAFKA-2547: Make DynamicConfigManager use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211
Author: Parth Brahmbhatt <brahmbhatt.parth@gmail.com>
Reviewers: Flavio Junqueira <fpj@apache.org>, Ismael Juma <ismael@juma.me.uk>, Sriharsha Chintalapani <mail@harsha.io>
Closes #679 from Parth-Brahmbhatt/KAFKA-2547 and squashes the following commits:
1722c76 [Parth Brahmbhatt] Addressing review comments.
376f77d [Parth Brahmbhatt] Merge branch 'trunk' of http://git-wip-us.apache.org/repos/asf/kafka into KAFKA-2547
a13b963 [Parth Brahmbhatt] Addressing comments from Reviewers.
1007137 [Parth Brahmbhatt] KAFKA-2547: Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211
KAFKA-2508: Replace UpdateMetadata{Request,Response} with o.a.k.c.requests equivalent
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Sriharsha Chintalapani <mail@harsha.io>
Closes #896 from granthenke/update-metadata and squashes the following commits:
2eb5d59 [Grant Henke] Address reviews
497258d [Grant Henke] KAFKA-2508: Replace UpdateMetadata{Request,Response} with o.a.k.c.requests equivalent
…ent ID
- Adds NULLABLE_STRING Type to the protocol
- Changes client_id in the REQUEST_HEADER to NULLABLE_STRING with a default of "" (see the wire-format sketch below)
- Fixes server handling of invalid ApiKey request and other invalid requests
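For reference, the wire convention a nullable string follows is an INT16 length prefix and UTF-8 bytes, with a length of -1 denoting null. A minimal sketch of that convention (the class and method names here are illustrative, not the actual protocol code):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the NULLABLE_STRING wire convention: a regular string is
// an INT16 length followed by that many UTF-8 bytes; a length of -1 encodes null.
public final class NullableString {

    public static void write(ByteBuffer buffer, String value) {
        if (value == null) {
            buffer.putShort((short) -1); // -1 length marks a null string
        } else {
            byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
            buffer.putShort((short) bytes.length);
            buffer.put(bytes);
        }
    }

    public static String read(ByteBuffer buffer) {
        short length = buffer.getShort();
        if (length < 0)
            return null; // e.g. a null client_id on the wire
        byte[] bytes = new byte[length];
        buffer.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```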
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy.w@gmail.com>
Closes #866 from granthenke/null-clientid
This is most of KIP-42: producer and consumer interceptors. (Except exposing CRC and record sizes to the interceptors, which is coming as a separate PR, tracked by KAFKA-3196.)
This PR includes:
1. Add ProducerInterceptor interface and call its callbacks from appropriate places in Kafka Producer.
2. Add ConsumerInterceptor interface and call its callbacks from appropriate places in Kafka Consumer.
3. Add unit tests for the interceptor changes.
4. Add an integration test for both mutable consumer and producer interceptors. (A minimal producer-side interceptor sketch follows below.)
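As an illustration of the KIP-42 surface, a producer-side interceptor might look like the sketch below; the interface and callbacks are the ones named above, while the class name and tagging logic are invented for the example:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Illustrative KIP-42 producer interceptor that tags every record value.
// Enabled via the producer's `interceptor.classes` configuration.
public class TaggingProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // Called before serialization/partitioning; may return a mutated record.
        return new ProducerRecord<>(record.topic(), record.partition(), record.key(),
                "tagged-" + record.value());
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Called when the send is acknowledged or fails; keep this fast,
        // since it runs on the producer's I/O thread.
        if (exception != null)
            System.err.println("Send failed: " + exception.getMessage());
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```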
Author: Anna Povzner <anna@confluent.io>
Reviewers: Jason Gustafson, Ismael Juma, Gwen Shapira
Closes #854 from apovzner/kip42
Producers that are not closed auto-create topics in subsequent tests when the Kafka server port is reused. Added the missing close().
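A minimal sketch of the pattern, assuming a test that creates its own producer: since KafkaProducer implements Closeable, try-with-resources guarantees the close() even when the test fails:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Illustrative pattern: scoping the producer in a try-with-resources block
// guarantees close() runs even when a test fails, preventing a lingering
// producer from auto-creating topics in later tests that reuse the port.
public class ProducerCleanupExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        } // close() flushes and releases the connection here
    }
}
```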
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #882 from rajinisivaram/KAFKA-3217
Inference sometimes fails for this case.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Eno Thereska <eno.thereska@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #885 from ijuma/use-explicit-type-in-acl-command
Provides a more actionable and descriptive error message.
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Ashish Singh <asingh@cloudera.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #847 from granthenke/broker-id-error
MirrorMaker doesn't commit offsets with the new consumer enabled when the data volume is low. This is caused by an infinite loop in `receive()`, which never exits if no data is coming.
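A hypothetical sketch of the shape of the fix (names invented): bound the receive loop with a deadline so it returns control to the caller, and hence to the offset-commit logic, even when no records arrive:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical sketch of a bounded receive: rather than looping until a record
// arrives (which starves commits when traffic is low), give up after a deadline
// so the caller gets a chance to commit offsets.
public class BoundedReceive {
    public static List<ConsumerRecord<byte[], byte[]>> receive(
            KafkaConsumer<byte[], byte[]> consumer, long timeoutMs) {
        List<ConsumerRecord<byte[], byte[]>> records = new ArrayList<>();
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (records.isEmpty() && System.currentTimeMillis() < deadline) {
            ConsumerRecords<byte[], byte[]> polled = consumer.poll(100);
            polled.forEach(records::add);
        }
        return records; // possibly empty; the caller can now commit
    }
}
```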
Author: Tao Xiao <xiaotao183@gmail.com>
Reviewers: Ismael Juma, Jason Gustafson
Closes #821 from xiaotao183/KAFKA-3157
The fix itself is simple.
Some explanation on the unit tests: currently the vast majority of unit tests run with uncompressed messages. I initially thought about running all the tests with compressed messages, but uncompressed messages are necessary in many test cases because we need the bytes sent and appended to the log to be predictable. In most other cases it does not matter whether the messages are compressed, and compression slows down the unit tests. So I just added a method in BaseConsumerTest to send compressed messages whenever we need it.
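A sketch of what such a helper could look like, assuming the usual test producer setup; only `compression.type` differs from the uncompressed path:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Illustrative helper in the spirit of the BaseConsumerTest change: the only
// difference from the default test producer is compression.type, so tests that
// don't depend on exact on-disk byte counts can exercise the compressed path.
public class CompressedTestProducer {
    public static void sendCompressedRecords(String brokers, String topic, int count) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        props.put("compression.type", "gzip"); // or snappy/lz4
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < count; i++)
                producer.send(new ProducerRecord<>(topic, "key-" + i, "value-" + i));
        }
    }
}
```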
Author: Jiangjie Qin <becket.qin@gmail.com>
Reviewers: Aditya Auradkar <aauradkar@linkedin.com>, Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy.w@gmail.com>
Closes #842 from becketqin/KAFKA-3179
When the disk (RAID with the caches dir) dies on a Kafka broker, the filesystem typically gets remounted read-only, and when Kafka then tries to read from the disk it gets a FileNotFoundException with the read-only errno set (EROFS). However, as long as no produce request is received, and hence no writes are attempted on the disk, Kafka will not exit on this fatal error and keeps throwing java.io.FileNotFoundException.
In this case, the JVM should stop if the underlying filesystem goes into read-only mode.
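A hypothetical sketch of the fail-fast idea (the wrapper and its names are invented for illustration):

```java
import java.io.IOException;

// Hypothetical sketch of "fail fast on fatal I/O errors": once the log
// directory can no longer be read or written (for example because the
// filesystem was remounted read-only), it is safer to halt the JVM than to
// keep serving requests and throwing FileNotFoundException.
public class FatalIoHandler {
    public static <T> T readOrHalt(IoAction<T> action) {
        try {
            return action.run();
        } catch (IOException e) {
            System.err.println("Halting Kafka broker: log directory is unusable: " + e);
            Runtime.getRuntime().halt(1); // skip shutdown hooks; the state is unrecoverable
            throw new IllegalStateException("unreachable");
        }
    }

    @FunctionalInterface
    public interface IoAction<T> {
        T run() throws IOException;
    }
}
```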
Author: MayureshGharat <gharatmayuresh15@gmail.com>
Reviewers: Dong Lin, Gwen Shapira, Ismael Juma, Guozhang Wang
Closes #698 from MayureshGharat/KAFKA-1860
Also fixed a couple of other tests with the same issue.
This is my original work and I license the work to the project under the project's open source license
Author: Kim Christensen <kich@mvno.dk>
Reviewers: Ismael Juma
Closes #828 from kichristensen/KAFKA-2676
I noticed them while looking at the recent commit:
87eccb9a3b
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Grant Henke, Guozhang Wang
Closes #829 from ijuma/fix-comments-in-replica-fetcher-thread
Also:
* Fixed a bug in `createSslConfig` where we were always generating a keystore even if `useClientCert` was false and `mode` was `Mode.CLIENT` (see the sketch below).
* Pass `numRecords` to `consumerRecords` and other clean-ups (formatting and scaladoc).
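A hypothetical sketch of the fix's shape; the names mirror the description above, not the actual test utility code:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch of the `createSslConfig` fix: a client only needs a
// keystore when client certificates are in use, so generation is gated on the
// mode and the useClientCert flag.
public class SslConfigSketch {
    enum Mode { CLIENT, SERVER }

    static File maybeCreateKeystore(Mode mode, boolean useClientCert) throws IOException {
        if (mode == Mode.SERVER || useClientCert) {
            File keystore = File.createTempFile("keystore", ".jks");
            // ... populate the keystore with a generated key pair ...
            return keystore;
        }
        return null; // plain SSL client: trust store only, no keystore
    }
}
```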
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Sriharsha Chintalapani <harsha@hortonworks.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
Closes #827 from ijuma/kafka-3166-disable-ssl-auth-sasl-ssl and squashes the following commits:
8265221 [Ismael Juma] Pass `numRecords` to `consumerRecords` and clean-ups.
a73db89 [Ismael Juma] SSL client authentication should be disabled for SASL_SSL security protocol
The commit here improves the logging in SimpleConsumer to log the real reason why a reconnect was attempted. Relates to https://issues.apache.org/jira/browse/KAFKA-2221.
The same patch was submitted a while back but wasn't merged because SimpleConsumer was considered deprecated and users aren't expected to use it. However, more and more users on the mailing list are running into this log message and have no way to understand the root cause. So IMO this change still adds value to users who are using SimpleConsumer.
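A hypothetical sketch of the logging improvement, with invented connection/request types; the point is simply to log the caught exception rather than a bare reconnect notice:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch: include the actual exception when noting the
// reconnect, instead of a bare "Reconnect due to socket error" message.
public class ReconnectLogging {
    private static final Logger log = Logger.getLogger(ReconnectLogging.class.getName());

    void sendWithReconnect(Connection connection, Request request) throws IOException {
        try {
            connection.send(request);
        } catch (IOException e) {
            // Log the root cause so users can diagnose why the reconnect happened.
            log.log(Level.INFO, "Reconnect due to error:", e);
            connection.reconnect();
            connection.send(request);
        }
    }

    interface Connection {
        void send(Request request) throws IOException;
        void reconnect() throws IOException;
    }

    interface Request {}
}
```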
Author: Jaikiran Pai <jaikiran.pai@gmail.com>
Reviewers: Jiangjie Qin, Ismael Juma, Guozhang Wang
Closes #138 from jaikiran/kafka-2221
All three defects have the same root cause: ClientUtils.fetchTopicMetadata returns the BrokerEndPoints in a non-deterministic order, so we need to sort both the expected and the received endpoints before comparing them.
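A minimal illustration of the test pattern, assuming string-rendered endpoints for brevity:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative test pattern for collections returned in non-deterministic
// order: sort both the expected and the actual side with the same comparator
// before comparing, so ordering differences don't fail the assertion.
public class SortedComparison {
    public static void main(String[] args) {
        List<String> expected = new ArrayList<>(List.of("broker-1:9092", "broker-0:9092"));
        List<String> actual = new ArrayList<>(List.of("broker-0:9092", "broker-1:9092"));

        expected.sort(Comparator.naturalOrder());
        actual.sort(Comparator.naturalOrder());

        // After sorting, equality reflects content, not arrival order.
        System.out.println(expected.equals(actual)); // true
    }
}
```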
Author: Denise Fernandez <dcbfernandez@gmail.com>
Reviewers: Ismael Juma, Grant Henke, Guozhang Wang
Closes #822 from rowdyrabbit/KAFKA-3103
* Add quotes to `$` in shell scripts
This is necessary for correct processing of quotes in the
user command.
* Minor improvements to AclCommand messages
* Use a principal with a space in `SslEndToEndAuthorizationTest`
This passed without any other changes, but it's good to avoid regressions.
* Clean-up `TestSslUtils`:
Remove unused methods, fix unnecessary verbosity and don't set security.protocol (it should be done at a higher level).
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Grant Henke <granthenke@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #818 from ijuma/kafka-3152-kafka-acl-space-in-principal
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Grant Henke <granthenke@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #773 from ijuma/kafka-3100-create-broker-version-check
Author: Konrad <konkalita@gmail.com>
Author: konradkalita <konkalita@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jun Rao <junrao@gmail.com>
Closes #749 from konradkalita/kafka-3076
Fix the PatternSyntaxException, and the hang it causes, in MirrorMaker when an invalid Java regex string is passed as the whitelist.
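A minimal sketch of the fix pattern, assuming the whitelist is compiled up front (the helper name is invented):

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

// Illustrative fix pattern: validate the user-supplied whitelist regex up
// front and fail with a clear message, instead of letting the
// PatternSyntaxException escape (or hang the tool) deep inside the consumer loop.
public class WhitelistValidation {
    static Pattern compileWhitelist(String whitelist) {
        try {
            return Pattern.compile(whitelist);
        } catch (PatternSyntaxException e) {
            throw new IllegalArgumentException(
                    "Invalid whitelist regex '" + whitelist + "': " + e.getDescription(), e);
        }
    }

    public static void main(String[] args) {
        compileWhitelist("topic-.*"); // fine
        compileWhitelist("topic-[");  // throws IllegalArgumentException with a clear message
    }
}
```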
Author: Ashish Singh <asingh@cloudera.com>
Reviewers: Grant Henke, Gwen Shapira
Closes #805 from SinghAsDev/KAFKA-3140
TopicCommand provides a tool to add partitions to existing topics. It tries to find the startIndex from the existing partitions. There is a minor flaw in this process: it uses the first partition fetched from ZooKeeper as the start partition, and the first replica id in that partition as the startIndex.
First, the partition fetched first from ZooKeeper is not necessarily the start partition; since partition ids begin at zero, we should use the partition with id zero as the start partition.
Second, broker ids do not necessarily begin at 0, so the startIndex is not necessarily the first replica id in the start partition.
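A hypothetical sketch of the described selection; the method shape is invented for illustration, not taken from TopicCommand:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the described fix: use partition 0 (not whichever
// partition ZooKeeper happens to return first) as the start partition, and
// derive startIndex from the *position* of its first replica in the broker
// list, since broker ids do not necessarily begin at 0.
public class StartIndexSelection {
    static int startIndex(Map<Integer, List<Integer>> assignment, List<Integer> brokerIds) {
        List<Integer> partitionZeroReplicas = assignment.get(0);
        if (partitionZeroReplicas == null || partitionZeroReplicas.isEmpty())
            throw new IllegalStateException("Partition 0 has no replica assignment");
        int firstReplica = partitionZeroReplicas.get(0);
        return brokerIds.indexOf(firstReplica); // index into the broker list, not the raw id
    }

    public static void main(String[] args) {
        // Broker ids start at 5 here, so the raw replica id would be a bad index.
        List<Integer> brokers = List.of(5, 6, 7);
        Map<Integer, List<Integer>> assignment = Map.of(0, List.of(6, 7), 1, List.of(7, 5));
        System.out.println(startIndex(assignment, brokers)); // 1
    }
}
```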
Author: chenshangan <chenshangan@meituan.com>
Reviewers: Guozhang Wang
Closes #329 from shangan/trunk-KAFKA-2146
This PR replaces all occurrences of kafka.api.ProducerRequest/ProducerResponse with their org.apache.kafka.common.requests equivalents.
Author: David Jacot <david.jacot@gmail.com>
Reviewers: Grant Henke <granthenke@gmail.com>, Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #110 from dajac/KAFKA-2071
Provides a configuration to opt out of broker id generation.
Author: Grant Henke <granthenke@gmail.com>
Reviewers: Gwen Shapira
Closes #762 from granthenke/id-generation
It behaves better on Windows and provides more useful error messages. A minimal sketch of the atomic-move-with-fallback approach follows the notes below.
Also:
* Minor inconsistency fix in `kafka.server.OffsetCheckpoint`.
* Remove delete from `streams.state.OffsetCheckpoint` constructor (similar to the change in `kafka.server.OffsetCheckpoint` in 836cb19633 (diff-2503b32f29cbbd61ed8316f127829455L29)).
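A minimal sketch of the approach, using only standard NIO calls:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// A minimal sketch of "atomic move with a fallback": prefer an atomic rename,
// and only if the filesystem can't do it atomically fall back to a plain
// replace. NIO also yields far better error messages than File.renameTo,
// which just returns false on failure.
public final class AtomicMove {
    public static void moveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            // e.g. AtomicMoveNotSupportedException on some Windows/filesystem combos
            Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```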
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #771 from ijuma/kafka-3105-use-atomic-move-with-fallback-instead-of-rename
Follow-up PR as per the comments in the ticket.
junrao It should be correct now, as `curBrokers` includes only live brokers and live/dead brokers are computed based on it. Could you take a look when you have time?
Author: David Jacot <david.jacot@gmail.com>
Reviewers: Jun Rao <junrao@gmail.com>
Closes #756 from dajac/KAFKA-3085
Remove the deletion of the tmp file in `OffsetCheckpoint`'s constructor. This delete causes unintuitive behaviour, like `LogRecoveryTest` triggering a `System.exit` because the test creates an instance of `OffsetCheckpoint` in order to call `read()` on it (while unexpectedly deleting a file being written by another instance of `OffsetCheckpoint`). A sketch of the bug shape follows the notes below.
Also:
* Improve error-handling in `OffsetCheckpoint`
* Minor performance improvements in `read()`
* Minor clean-ups to `ReplicaManager` and `LogRecoveryTest`
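A hypothetical sketch of the bug shape, with invented names: the constructor must not touch the filesystem, so a read-only instance cannot clobber a writer's temp file:

```java
import java.io.File;
import java.io.IOException;

// Hypothetical sketch: deleting the shared ".tmp" file in the constructor
// races with another instance that is mid-write. Deferring any cleanup to the
// write path keeps read-only instances side-effect free.
public class CheckpointSketch {
    private final File file;

    public CheckpointSketch(File file) {
        this.file = file; // no I/O here: constructing a reader must be harmless
    }

    public synchronized void write(String contents) throws IOException {
        File temp = new File(file.getAbsolutePath() + ".tmp");
        // Any stale temp file is replaced as part of this write, not eagerly.
        java.nio.file.Files.write(temp.toPath(), contents.getBytes());
        java.nio.file.Files.move(temp.toPath(), file.toPath(),
                java.nio.file.StandardCopyOption.REPLACE_EXISTING);
    }
}
```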
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #759 from ijuma/kafka-3063-log-recovery-test-exits-jvm
I'm also fixing a bug in the testChroot test case.
Author: Flavio Junqueira <fpj@apache.org>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #736 from fpj/KAFKA-3069