This KIP adds the following functionality related to SASL/OAUTHBEARER:
1) Allow clients (both brokers, when SASL/OAUTHBEARER is the inter-broker protocol, and non-broker clients) to flexibly retrieve an access token from an OAuth 2 authorization server based on the declaration of a custom login CallbackHandler implementation, and have that access token transparently and automatically transmitted to a broker for authentication.
2) Allow brokers to flexibly validate provided access tokens when a client establishes a connection based on the declaration of a custom SASL Server CallbackHandler implementation.
3) Provide implementations of the above retrieval and validation features based on an unsecured JSON Web Token that function out-of-the-box with minimal configuration required (i.e. implementations of the two types of callback handlers mentioned above will be used by default with no need to explicitly declare them).
4) Allow clients (both brokers, when SASL/OAUTHBEARER is the inter-broker protocol, and non-broker clients) to transparently retrieve a new access token in the background before the existing access token expires, in case the client has to open new connections.
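As a rough sketch of how a client would opt into the token-retrieval side of this, the configuration below declares a custom login callback handler. The handler class name is hypothetical, and the property keys should be read as illustrative rather than authoritative:

```scala
import java.util.Properties

// Minimal client configuration sketch for SASL/OAUTHBEARER.
// com.example.MyOAuthLoginCallbackHandler is a hypothetical class standing in for a
// custom token-retrieving login callback handler.
object OAuthBearerClientConfigSketch {
  def clientProps(): Properties = {
    val props = new Properties()
    props.put("security.protocol", "SASL_SSL")
    props.put("sasl.mechanism", "OAUTHBEARER")
    // Custom handler that retrieves an access token from the OAuth 2 authorization server:
    props.put("sasl.login.callback.handler.class", "com.example.MyOAuthLoginCallbackHandler")
    props.put("sasl.jaas.config",
      "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;")
    props
  }
}
```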
AdminClient should back off when retrying a Call. Fixed this and added a unit test.
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Dong Lin <lindong28@gmail.com>
Closes #5077 from hachikuji/admin-client-retry-backoff
The consumer offset path in ZooKeeper should be /consumers/${group}/offsets/${topic}/${partition} instead of /consumers/${group}/offset/${topic}/${partition}. Added `s` to the word `offset`.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Manikumar Reddy O <manikumar.reddy@gmail.com>, Jun Rao <junrao@gmail.com>
When any metric (e.g. per-partition metric) is created or deleted,
registerMBean() is called which in turn calls getMBeanInfo().getClassName().
However, KafkaMbean.getMBeanInfo() instantiates an array of all sensors even
though we only need the class name. This costs a lot of CPU to register
sensors when consumer with large partition assignment starts. For example, it
takes 5 minutes to start a consumer with 35k partitions. This patch reduces the
consumer startup time to seconds.
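A rough illustration of where the time goes (a sketch, not the actual KafkaMbean code): getMBeanInfo() builds one attribute descriptor per metric even when the caller only needs the class name.

```scala
import javax.management.{MBeanAttributeInfo, MBeanInfo}

object MBeanInfoCostSketch {
  // Constructing MBeanInfo is O(number of metrics), because every sensor becomes an
  // MBeanAttributeInfo -- wasteful when only getClassName() is needed.
  def buildInfo(className: String, metricNames: Seq[String]): MBeanInfo = {
    val attrs = metricNames.map { name =>
      new MBeanAttributeInfo(name, classOf[Double].getName, "metric", true, false, false)
    }.toArray
    new MBeanInfo(className, "", attrs, null, null, null)
  }
}
```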
Author: radai-rosenblatt <radai.rosenblatt@gmail.com>
Reviewers: Satish Duggana <satish.duggana@gmail.com>, Dong Lin <lindong28@gmail.com>
Closes #5011 from radai-rosenblatt/fun-with-jmx
The Signal classes are not available in the compile classpath
if --release is used so we use reflection as a workaround.
As part of that, moved the code to Java and added a simple
unit test.
Also disabled the signal handler if the IBM JDK is being used
due to KAFKA-6918.
Manually tested shutdown via ctrl+c and verified that
the message is printed.
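A sketch of the general reflection technique (illustrative code, not the exact helper added here): the sun.misc classes are looked up and invoked at runtime only, so nothing on the compile classpath references them.

```scala
import java.lang.reflect.{InvocationHandler, Method, Proxy}

object SignalReflectionSketch {
  // Register a handler for e.g. "TERM" without referencing sun.misc.Signal at compile time.
  def register(signalName: String)(action: () => Unit): Unit = {
    val signalClass  = Class.forName("sun.misc.Signal")
    val handlerClass = Class.forName("sun.misc.SignalHandler")
    val handler = Proxy.newProxyInstance(getClass.getClassLoader, Array(handlerClass),
      new InvocationHandler {
        override def invoke(proxy: AnyRef, method: Method, args: Array[AnyRef]): AnyRef =
          method.getName match {
            case "handle"   => action(); null
            case "hashCode" => Int.box(System.identityHashCode(proxy))
            case "equals"   => Boolean.box(proxy eq args(0))
            case _          => "signal handler proxy" // toString and anything else
          }
      })
    val signal = signalClass.getConstructor(classOf[String])
      .newInstance(signalName).asInstanceOf[AnyRef]
    signalClass.getMethod("handle", signalClass, handlerClass).invoke(null, signal, handler)
  }
}
```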
* Removed Scala producers, request classes, kafka.tools.ProducerPerformance, encoders,
and tests.
* Updated ConsoleProducer to remove Scala producer support (removed `BaseProducer`
and several options that are not used by the Java producer).
* Updated a few Scala consumer tests to use the new producer (including a minor
refactor of `produceMessages` methods in `TestUtils`).
* Updated `ClientUtils.fetchTopicMetadata` to use `SimpleConsumer` instead of
`SyncProducer`.
* Removed `TestKafkaAppender` as it looks useless and it defined an `Encoder`.
* Minor import clean-ups
No new tests added since behaviour should remain the same after these changes.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Manikumar Reddy O <manikumar.reddy@gmail.com>, Dong Lin <lindong28@gmail.com>
Closes #5045 from ijuma/kafka-6921-remove-old-producer
Fix the check, add unit test to verify the change, update `DynamicBrokerReconfigurationTest` to avoid dynamic keystore update in tests which are not expected to update keystores.
Avoid dependence on the internal __consumer_offsets topic to handle `listConsumerGroups()` since it unnecessarily requires users to have Describe access on an internal topic. Instead we query each broker independently. For most clusters, this amounts to the same thing since the default number of partitions for __consumer_offsets is 50. This also provides better encapsulation since it avoids exposing the use of __consumer_offsets, which gives us more flexibility in the future.
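From the caller's point of view nothing changes; a usage sketch (the bootstrap address is illustrative):

```scala
import java.util.Properties
import org.apache.kafka.clients.admin.AdminClient
import scala.collection.JavaConverters._

object ListGroupsSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // illustrative address
    val admin = AdminClient.create(props)
    try
      // No Describe access on __consumer_offsets is needed for the listing.
      admin.listConsumerGroups().all().get().asScala.foreach(g => println(g.groupId()))
    finally admin.close()
  }
}
```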
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Dong Lin <lindong28@gmail.com>
Closes #5007 from hachikuji/remove-admin-use-of-offsets-topic
A little back story on this: I was helping a user over email, and this would have been much easier to debug if we assume that the connector developer might not return valid configs. For example, IntelliJ will generate a stub that returns null. This was the case that inspired this JIRA.
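A hedged sketch of the kind of guard this implies (names are illustrative, not the exact Connect code):

```scala
import org.apache.kafka.common.config.ConfigDef

object ConnectorConfigGuardSketch {
  // Fail fast with a clear message instead of an NPE deep inside validation when a
  // connector's config() stub returns null.
  def requireConfigDef(connectorClass: String, configDef: ConfigDef): ConfigDef = {
    if (configDef == null)
      throw new IllegalArgumentException(
        s"$connectorClass.config() must return a ConfigDef that is not null")
    configDef
  }
}
```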
Author: Jeremy Custenborder <jcustenborder@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3762 from jcustenborder/KAFKA-5807
These constructors should be public to allow users to write test cases using them. We follow a similar pattern for the other domain objects that we expose in `AdminClient` (e.g. `TopicDescription`).
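For example, a test can build fixture objects directly, following the TopicDescription pattern cited above (a sketch; constructor arities reflect the public API at the time):

```scala
import java.util.Collections
import org.apache.kafka.clients.admin.TopicDescription
import org.apache.kafka.common.{Node, TopicPartitionInfo}

object AdminFixtureSketch {
  // Build a TopicDescription entirely in test code, with no cluster involved.
  def sampleDescription(): TopicDescription = {
    val node = new Node(0, "localhost", 9092)
    val partition0 = new TopicPartitionInfo(0, node,
      Collections.singletonList(node), Collections.singletonList(node))
    new TopicDescription("my-topic", false, Collections.singletonList(partition0))
  }
}
```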
Reviewers: Ismael Juma <ismael@juma.me.uk>
If the internal metadata request fails, we must reset the state inside `AdminClientMetadataManager` or we will be stuck indefinitely in the `UPDATE_PENDING` state and have no way to fetch new metadata.
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Dong Lin <lindong28@gmail.com>
Closes #5057 from hachikuji/fix-admin-client-metadata-update-failure
We no longer need them since we now require Java 8.
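For reference, the Java 8 standard-library calls that make the compatibility code unnecessary (a usage sketch, not the removed code):

```scala
import java.nio.charset.StandardCharsets
import java.util.Base64

object Base64Sketch {
  // java.util.Base64 is available once Java 8 is the minimum supported version.
  val encoded: String =
    Base64.getEncoder.encodeToString("hello".getBytes(StandardCharsets.UTF_8))
  val decoded: String =
    new String(Base64.getDecoder.decode(encoded), StandardCharsets.UTF_8)
}
```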
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Andras Beni <andrasbeni@cloudera.com>, Manikumar Reddy O <manikumar.reddy@gmail.com>, Dong Lin <lindong28@gmail.com>
Closes #5049 from ijuma/remove-base64
* Set --source, --target and --release to 1.8.
* Build Scala 2.12 by default.
* Remove some conditionals in the build file now that Java 8
is the minimum version.
* Bump the version of Jetty, Jersey and Checkstyle (the newer
versions require Java 8).
* Fixed issues uncovered by the new version of Checkstyle.
* A couple of minor updates to handle an incompatible source
change in the new version of Jetty.
* Add dependency to jersey-hk2 to fix failing tests caused
by Jersey upgrade.
* Update release script to use Java 8 and to take into account
that Scala 2.12 is now built by default.
* While we're at it, bump the version of Gradle, Gradle plugins,
ScalaLogging, JMH and the Apache Directory API.
* Minor documentation updates including the readme and upgrade
notes. A number of Streams Java 7 examples can be removed
subsequently.
https://issues.apache.org/jira/browse/KAFKA-6685
Added an exception message in `WorkerSinkTask.convertMessages` to distinguish the message key from the value when deserializing to Kafka Connect format.
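A sketch of the idea (not the actual `WorkerSinkTask` code): convert the key and the value in separately guarded steps so the error message names the side that failed.

```scala
import org.apache.kafka.connect.errors.DataException

object ConvertMessagesSketch {
  // `part` is "key" or "value"; the wrapped exception now says which one failed.
  def convertPart[T](part: String, topic: String)(doConvert: => T): T =
    try doConvert
    catch {
      case e: Exception =>
        throw new DataException(
          s"Error converting message $part on topic '$topic' to Kafka Connect format", e)
    }
}
```

The key conversion would then be wrapped as `convertPart("key", record.topic()) { ... }` and the value conversion likewise, so a deserialization failure immediately names the offending side.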
Author: Jagadesh Adireddi <adireddijagadesh@gmail.com>
Reviewers: Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #4765 from jadireddi/KAFKA-6685---log-message-should-distinguish-key-from-value
test_broker_type_bounce_at_start tries to validate that when the controller is down, the streams client will always fail trying to create the topic; with the current behavior of the admin client this is actually not always true: the actual behavior depends on the admin client internals as well as on when the controller becomes unavailable during the leader-assign-partitions phase. I'd suggest at least ignoring this test for now until the admin client is more stable (personally I'd even suggest removing this test, as its coverage benefit is smaller than the issues it introduces).
Also adding a few more log4j entries as a result of investigating this issue.
Reviewers: Matthias J. Sax <matthias@confluent.io>
Due to #4644 the consumer connector logs will be much cleaner, with fewer "broker may not be available" entries. We need to reduce the required frequency from 100 to a smaller number.
I've thought about reducing to just 1, but it may still be transient (i.e. even if broker is starting up you may see a few entries) so I reduced it to 10.
Reviewers: Bill Bejeck <bill@confluent.io>, Matthias J. Sax <matthias@confluent.io>
* The consumer groups API should expose group state and coordinator information. This information is needed by administrative tools and scripts that access consumer groups (see the usage sketch after this list).
* The partition assignment will be empty when the group is rebalancing. Fix an issue where the admin client attempted to deserialize this empty buffer.
* Remove nulls from the API and make all collections immutable.
* DescribeConsumerGroupsResult#all should return a result as expected, rather than Void
* Fix exception text for GroupIdNotFoundException, GroupNotEmptyException. It was being filled in as "The group id The group id does not exist was not found" and similar.
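A usage sketch of the expanded description, assuming an already-configured AdminClient:

```scala
import java.util.Collections
import org.apache.kafka.clients.admin.AdminClient

object DescribeGroupSketch {
  // Print the state and coordinator now exposed by the describe result.
  def printGroupInfo(admin: AdminClient, groupId: String): Unit = {
    val description = admin.describeConsumerGroups(Collections.singleton(groupId))
      .all().get().get(groupId)
    println(s"$groupId: state=${description.state()}, coordinator=${description.coordinator()}")
  }
}
```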
Reviewers: Attila Sasvari <asasvari@apache.org>, Andras Beni <andrasbeni@cloudera.com>, Dong Lin <lindong28@gmail.com>, Jason Gustafson <jason@confluent.io>
The type inference doesn't currently work for the join functions in Scala, as the compiler doesn't yet know the types of the given KStream[K, V] or KTable[K, V].
The fix here is to curry the joiner function. I personally prefer this notation, but it also means the API differs more from the Java API.
I believe the difference from the Java API is worth it in this case, as it not only solves the type inference but also better fits the Scala way of coding (e.g. fold).
Moreover, any Scala developer would otherwise stumble over these functions, spend time trying to understand why the type inference is not working, and get frustrated at being obliged to be explicit where inference would be harmless.
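A small self-contained illustration of the inference problem and why currying fixes it (StreamLike is a simplified stand-in type, not the actual kafka-streams-scala API):

```scala
// Simplified stand-in for a stream type, only to show the inference issue.
class StreamLike[K, V] {
  // Single parameter list: the compiler cannot infer the joiner's parameter types,
  // so callers must annotate them explicitly.
  def joinUncurried[VO, VR](other: StreamLike[K, VO], joiner: (V, VO) => VR): StreamLike[K, VR] =
    new StreamLike[K, VR]

  // Curried: VO is fixed by the first parameter list before the joiner is
  // type-checked, so (v, vo) => ... needs no annotations.
  def joinCurried[VO, VR](other: StreamLike[K, VO])(joiner: (V, VO) => VR): StreamLike[K, VR] =
    new StreamLike[K, VR]
}

object JoinInferenceSketch {
  val left  = new StreamLike[String, Long]
  val right = new StreamLike[String, Int]

  // Does not compile: "missing parameter type" for v and vo.
  // left.joinUncurried(right, (v, vo) => v + vo)

  // Compiles: v is inferred as Long and vo as Int.
  val joined: StreamLike[String, Long] = left.joinCurried(right)((v, vo) => v + vo)
}
```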
Reviewers: Debasish Ghosh <dghosh@acm.org>, Guozhang Wang <guozhang@confluent.io>, Ismael Juma <ismael@juma.me.uk>
This is a follow-up to #5022 which added documentation to the Processor
interface. This commit adds similar documentation to Transformer and
ValueTransformer.
Also, s/processor/transformer/ in the close() docs.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
This is a change to improve resource cleanup for sink tasks and source tasks. Now `Task.stop()` is called from both `WorkerSinkTask.close()` and `WorkerSourceTask.close()`.
It is called from `WorkerXXXTask.close()` since this method is called in the `finally` block of `WorkerTask.run()`, and Connect developers use `stop()` to clean up resources.
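A hedged sketch of the ordering guarantee this relies on (illustrative names, not the actual worker classes):

```scala
object WorkerTaskLifecycleSketch {
  trait Task { def stop(): Unit }

  abstract class WorkerTaskLike(task: Task) {
    protected def execute(): Unit

    // close() runs in the finally block of run(), so task.stop() is invoked for
    // resource cleanup even when execute() throws.
    final def run(): Unit =
      try execute()
      finally close()

    protected def close(): Unit = task.stop()
  }
}
```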
Author: Robert Yokota <rayokota@gmail.com>
Reviewers: Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5020 from rayokota/K6566-improve-connect-resource-cleanup
The wakeup-based strategy caused more problems than it
solved, so we'll instead focus on KIP-266.
Revert commit 2d8049b.
Keep the metrics addition and the new test util.
Also keep the tests for shutdown, although they must be ignored until
poll(Duration) is done in the scope of KIP-266.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
1. In InternalTopologyBuilder#topicGroups, which is used in StreamsPartitionAssignor, look up the bookkept storeToChangelogTopic map before creating a new internal changelog topic. This way, if the source KTable is created, the source topic stored in storeToChangelogTopic will be used (see the sketch after this list).
2. Added unit test (confirmed that without 1) it will fail).
3. MINOR: removed TODOs that are related to removed KStreamBuilder.
4. MINOR: removed TODOs in StreamsBuilderTest util functions and replaced with TopologyWrapper.
5. MINOR: removed StreamsBuilderTest#testFrom as it is already covered by TopologyTest#shouldNotAllowToAddSourcesWithSameName; plus, it requires KStreamImpl.SOURCE_NAME, which should be a package-private field of KStreamImpl.
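A sketch of the lookup order described in point 1 (method and parameter names are illustrative):

```scala
object ChangelogLookupSketch {
  // Prefer the bookkept mapping: for a source KTable it points at the source topic,
  // so no extra internal changelog topic is created. Fall back to the usual
  // <applicationId>-<storeName>-changelog name otherwise.
  def changelogTopicFor(applicationId: String,
                        storeName: String,
                        storeToChangelogTopic: Map[String, String]): String =
    storeToChangelogTopic.getOrElse(storeName, s"$applicationId-$storeName-changelog")
}
```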
Reviewers: John Roesler <john@confluent.io>, Bill Bejeck <bill@confluent.io>, Matthias
J. Sax <matthias@confluent.io>
We would like to also export the producer metrics from StreamThread, just like the consumer metrics, so that we can gain more visibility into the streams application. The approach is to pass the threadProducer into the StreamThread so that we can export its metrics dynamically.
Note that this is a pure internal change that doesn't require a KIP, and in the future we also want to export admin client metrics. A followup KIP for admin client will be created once this is merged.
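A hedged sketch of what exposing the producer metrics amounts to (not the actual StreamThread code):

```scala
import java.util.Collections
import org.apache.kafka.clients.producer.Producer
import org.apache.kafka.common.{Metric, MetricName}

object ThreadProducerMetricsSketch {
  // Once the thread holds a reference to its producer, its metrics registry can be
  // surfaced alongside the consumer metrics; an absent producer yields an empty map.
  def producerMetrics(threadProducer: Producer[_, _]): java.util.Map[MetricName, _ <: Metric] =
    if (threadProducer == null) Collections.emptyMap[MetricName, Metric]()
    else threadProducer.metrics()
}
```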
Reviewers: Guozhang Wang <wangguoz@gmail.com>
This patch adds a few metrics that are useful for monitoring controller health. See KIP-237 for more detail.
Author: Dong Lin <lindong28@gmail.com>
Reviewers: Jun Rao <junrao@gmail.com>
Closes #4392 from lindong28/KAFKA-3473
1. Remove TopologyBuilder, TopologyBuilderException, and KStreamBuilder.
2. Completed the leftover work of https://issues.apache.org/jira/browse/KAFKA-5660 now that TopologyBuilderException is removed.
3. Added MockStoreBuilder to replace MockStateStoreSupplier, remove all XXStoreSupplier except StateStoreSupplier as it is still referenced in the logical streams graph.
4. Minor: rename KStreamsFineGrainedAutoResetIntegrationTest.java to FineGrainedAutoResetIntegrationTest.java.
Reviewers: Matthias J. Sax <matthias@confluent.io>