The initial PR for KIP-290, #5117, added a `ResourceNameType` field to the Java and Scala `Resource` classes to introduce the concept of prefixed ACLs. This does not make much sense, as these classes are meant to represent cluster resources, which have no concept of a 'name type'. This work has not been released yet, so we have time to change it.
This PR refactors the code to remove the name type field from the Java `Resource` class. (The Scala one will age out once KIP-290 is done, and removing it would involve changes to the `Authorizer` interface, so that class was not touched.)
This is achieved by replacing the use of `Resource` with `ResourcePattern`, and `ResourceFilter` with `ResourcePatternFilter`. A `ResourcePattern` is a combination of resource type, name, and name type, where each field needs to be defined. A `ResourcePatternFilter` is used to select patterns during describe and delete operations.
The AdminClient uses `AclBinding` and `AclBindingFilter`. These types have been switched over to use the new pattern types.
The AclCommand class, used by kafka-acls.sh, has been converted to use the new pattern types.
The result is that the original `Resource` and `ResourceFilter` classes are not really used anywhere except in deprecated methods. However, the `Resource` class will be used if/when KIP-50 is done.
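For illustration, a rough sketch of building an ACL binding with the new pattern types (class and constructor names follow the Java client as eventually released; the 'name type' enum is shown here as `PatternType`, and the topic prefix and principal are made up, so treat the details as indicative):
```java
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class PrefixedAclSketch {
    static AclBinding prefixedWriteAcl() {
        // A prefixed pattern matching every topic whose name starts with "payments-".
        ResourcePattern pattern =
                new ResourcePattern(ResourceType.TOPIC, "payments-", PatternType.PREFIXED);
        AccessControlEntry entry =
                new AccessControlEntry("User:alice", "*", AclOperation.WRITE, AclPermissionType.ALLOW);
        return new AclBinding(pattern, entry);
    }
    // With an AdminClient instance: admin.createAcls(Collections.singleton(prefixedWriteAcl()));
}
```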
Reviewers: Colin Patrick McCabe <colin@cmccabe.xyz>, Jun Rao <junrao@gmail.com>
Currently, the replication factor is hardcoded to a value of 3. This means that we cannot use a DLQ in any cluster setup with fewer than three brokers. It is better to have the user specify this value if the default does not meet their requirements.
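As a hedged sketch (the property names follow KIP-298's dead letter queue settings; the topic name is made up), a sink connector on a small cluster could then override the replication factor like this:
```java
import java.util.HashMap;
import java.util.Map;

class DlqConfigSketch {
    static Map<String, String> sinkOverrides() {
        Map<String, String> config = new HashMap<>();
        config.put("errors.deadletterqueue.topic.name", "orders-dlq");   // hypothetical DLQ topic
        // Previously hard-coded to 3; now user-specified, e.g. 1 on a single-broker test cluster.
        config.put("errors.deadletterqueue.topic.replication.factor", "1");
        return config;
    }
}
```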
Testing: A unit test is added.
Signed-off-by: Arjun Satish <arjun@confluent.io>
Author: Arjun Satish <arjun@confluent.io>
Reviewers: Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5145 from wicknicks/KAFKA-7002
In addition to Gradle, updated snappy, owasp-dependency-check, and the Apache Directory Service API.
Gradle 4.8 fixes a fatal issue when building with Java 11, but
full support is coming in 4.9 or later.
Fix ServiceLoader issue with PluginClassLoader and add basic-auth-extension packaging & classpath
Author: Magesh Nandakumar <magesh.n.kumar@gmail.com>
Reviewers: Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5135 from mageshn/KAFKA-6991
Adding a configuration to StreamsConfig to make topology optimization optional.
The added unit tests verify the default value, setting a valid value, and failure on invalid values.
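A hedged sketch of opting in (constant names as in the released `StreamsConfig`; treat them as indicative):
```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

class OptimizationConfigSketch {
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Optimization stays off (NO_OPTIMIZATION) unless explicitly enabled.
        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
        return props;
    }
}
```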
Reviewers: John Roesler <john@confluent.io>, Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
* KAFKA-6993: Fix defective documentation for KStream/KTable methods
1. Fix the documentation of the following methods, e.g., by giving more detailed descriptions for the overloaded methods:
- KStream#join
- KStream#leftJoin
- KStream#outerJoin
- KTable#filter
- KTable#filterNot
- KTable#mapValues
- KTable#transformValues
- KTable#join
- KTable#leftJoin
- KTable#outerJoin
2. (trivial) with possible new type -> with possibly new type.
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <guozhang@confluent.io>, Bill Bejeck <bill@confluent.io>
This is a small change to use the Java ServiceLoader to load ConfigProvider plugins. It uses code added by mageshn for Connect REST extensions.
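Conceptually, discovery follows the standard ServiceLoader pattern sketched below (the released `ConfigProvider` lives under `org.apache.kafka.common.config.provider`, and each plugin JAR would ship a `META-INF/services` entry naming its implementations; treat the details as indicative):
```java
import java.util.ServiceLoader;
import org.apache.kafka.common.config.provider.ConfigProvider;

class ConfigProviderDiscoverySketch {
    static void listProviders(ClassLoader pluginLoader) {
        // Scans META-INF/services/org.apache.kafka.common.config.provider.ConfigProvider
        // entries visible to the given (plugin) class loader.
        for (ConfigProvider provider : ServiceLoader.load(ConfigProvider.class, pluginLoader)) {
            System.out.println("Found ConfigProvider: " + provider.getClass().getName());
        }
    }
}
```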
Author: Robert Yokota <rayokota@gmail.com>
Reviewers: Magesh Nandakumar <magesh.n.kumar@gmail.com>, Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5141 from rayokota/service-loader-for-config-plugins
Reviewers: Colin Patrick McCabe <colin@cmccabe.xyz>, Jun Rao <junrao@gmail.com>
Co-authored-by: Piyush Vijay <pvijay@apple.com>
Co-authored-by: Andy Coates <big-andy-coates@users.noreply.github.com>
- CreateTopicsRequest now requires Create authorization on the Topic resource or Create on the Cluster resource.
- The AclCommand --producer option was adjusted accordingly.
- Existing unit and integration tests were adjusted accordingly and new tests added.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Co-authored-by: Edoardo Comar <ecomar@uk.ibm.com>
Co-authored-by: Mickael Maison <mickael.maison@gmail.com>
Reviewers: Viktor Somogyi <viktorsomogyi@gmail.com>, Vahid Hashemian <vahidhashemian@us.ibm.com>, Manikumar Reddy <manikumar.reddy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Move the error handling configuration properties into the ConnectorConfig and SinkConnectorConfig classes, and refactor the tests and classes to use these new properties.
Testing: Unit tests and running the connect-standalone script with a file sink connector.
Author: Arjun Satish <arjun@confluent.io>
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Konstantine Karantasis <konstantine@confluent.io>, Magesh Nandakumar <magesh.n.kumar@gmail.com>, Robert Yokota <rayokota@gmail.com>, Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5125 from wicknicks/KAFKA-6981
While using an iterator obtained via Interactive Queries (IQ), it's possible to get an InvalidStateStoreException if the StreamThread closes the store during a range query.
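For context, a sketch of the client-side pattern that can hit this (the store name, key range, and types are hypothetical; `streams` is a running `KafkaStreams` instance):
```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.errors.InvalidStateStoreException;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

class RangeQuerySketch {
    static long sumRange(KafkaStreams streams) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("counts", QueryableStoreTypes.<String, Long>keyValueStore());
        long sum = 0;
        try (KeyValueIterator<String, Long> iter = store.range("a", "m")) {
            while (iter.hasNext()) {
                KeyValue<String, Long> kv = iter.next();
                sum += kv.value;
            }
        } catch (InvalidStateStoreException e) {
            // The underlying store (or one of its segments) was closed mid-iteration,
            // e.g. by the StreamThread during rebalance or shutdown; callers usually retry.
        }
        return sum;
    }
}
```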
Added a unit test to SegmentIteratorTest for this condition.
Reviewers: John Roesler <john@confluent.io>, Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
PrincipalBuilder implementations can now take the listener into account
when creating the Principal. This is especially interesting in deployments
where inter-broker traffic is on a different listener than client traffic or
when the same protocol is used by multiple listeners.
The change in itself is mostly "plumbing" as the listener name needs to be
passed from ChannelBuilders all the way down to all classes implementing
AuthenticationContext.
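A sketch of a listener-aware builder (the `listenerName()` accessor on `AuthenticationContext` is the new piece; the listener name "INTERNAL" and the mapping logic are made up):
```java
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;

public class ListenerAwarePrincipalBuilder implements KafkaPrincipalBuilder {
    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        // Map everything arriving on the inter-broker listener to a single broker principal,
        // even if it shares its security protocol with a client-facing listener.
        if ("INTERNAL".equals(context.listenerName())) {
            return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "broker");
        }
        return new KafkaPrincipal(KafkaPrincipal.USER_TYPE,
                context.clientAddress().getHostAddress());
    }
}
```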
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>, Ismael Juma <ismael@juma.me.uk>
Co-authored-by: Edoardo Comar <ecomar@uk.ibm.com>
Co-authored-by: Mickael Maison <mickael.maison@gmail.com>
#4919 unintentionally changed the topology naming scheme. This change returns to the prior scheme.
Reviewers: Bill Bejeck <bill@confluent.io>, Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Adding checks on the "version" field for the tools that use it.
This is a new version of the closed PR #3887 (see that PR for more comments and related discussion).
Author: Paolo Patierno <ppatierno@live.com>
Reviewers: Dong Lin <lindong28@gmail.com>
Closes #5126 from ppatierno/kafka-5919-update
We added logic to reassign nodes in `callsToSend` after a connection failure, but we did not handle the case where there is no node currently available to reassign the request to. This can happen when using MetadataUpdateNodeIdProvider if all of the known nodes are blacked out awaiting the retry backoff. To fix this, we need to ensure that the call is added to `pendingCalls` if a new node cannot be found.
This patch implements KIP-281, which adds a configurable timeout to the consumer performance tool with a default value of 10 seconds. The old timeout was hard-coded as 1 second. Additionally, this patch adds a warning message when the tool exits after a timeout rather than returning silently.
Reviewers: Dhruvil Shah <dhruvil@confluent.io>, Jason Gustafson <jason@confluent.io>
This reverts commit c9ec292135 (#5011). That commit introduced an anonymous inner class which retains a reference to the non-serializable outer class `KafkaMbean`, breaking serialization. This means that reading JMX metrics via JConsole or JmxTool no longer works, since RMI relies on Java serialization.
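For reference, the general Java pitfall (simplified; not the actual metrics code): an anonymous inner class created in an instance context keeps an implicit reference to its enclosing instance, so serializing it drags the outer object along.
```java
import java.io.Serializable;

class Outer {                       // not Serializable, like KafkaMbean
    Serializable value() {
        // The anonymous class implicitly captures Outer.this; attempting to serialize
        // the returned object throws NotSerializableException for Outer, which is what
        // broke reading attributes over RMI (JConsole/JmxTool).
        return new Serializable() { };
    }
    // A static nested class (or a top-level class) avoids the hidden capture.
}
```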
Reviewers: Jason Gustafson <jason@confluent.io>, Dong Lin <lindong28@gmail.com>, Ismael Juma <ismael@juma.me.uk>
- Removed the internal kafka.admin.AdminClient.deleteRecordsBefore since it's no longer used.
- Removed redundant tests and rewrote the non-redundant ones to use the Java AdminClient.
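For reference, the Java AdminClient equivalent looks roughly like this (topic, partition, and offset are made up):
```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

class DeleteRecordsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Delete everything before offset 42 in partition 0 of "my-topic".
            admin.deleteRecords(Collections.singletonMap(
                    new TopicPartition("my-topic", 0), RecordsToDelete.beforeOffset(42L)))
                 .all().get();
        }
    }
}
```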
Reviewers: Viktor Somogyi <viktor.somogyi@cloudera.com>, Manikumar Reddy <manikumar.reddy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Changes to keep the operation name as is and make the sensor name unique.
Reviewers: John Roesler <john@confluent.io>, Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
Specifying an invalid config (i.e. something other than `CreateTime` or
`LogAppendTime`) via `TopicCommand` would previously cause the
broker to fail on start-up.
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
In #4919 we propagate the SerDes for each of the aggregation operators (reduce, count, aggregate).
As @guozhangwang mentioned in that PR:
```
reduce: inherit the key and value serdes from the parent XXImpl class.
count: inherit the key serdes, enforce setting the Serdes.Long() for value serdes.
aggregate: inherit the key serdes, do not set for value serdes internally.
```
Although it's all good for reduce and count, it is quite unsafe to have aggregate without a Materialized given. In fact, I don't see why we would not give a Materialized for aggregate, since the result type will always be different (otherwise use reduce) and the value Serde is simply not propagated.
This has been discussed previously in a broader PR, but I believe for aggregate we could implicitly pass a Materialized the same way we pass a Joined, just to avoid this pitfall. Then, if the user wants to specialize, they can provide their own Materialized.
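To make the concern concrete, here is a Java DSL sketch of why aggregate needs its value serde supplied explicitly (the Scala wrapper discussed above would supply the Materialized implicitly); the topic name and aggregation logic are made up:
```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

class AggregateSerdeSketch {
    static KTable<String, Long> build(StreamsBuilder builder) {
        // The result value type (Long) differs from the input type (String), so the
        // value serde cannot be inherited from the parent and must be provided.
        return builder.<String, String>stream("words")
                .groupByKey()
                .aggregate(
                        () -> 0L,
                        (key, value, agg) -> agg + value.length(),
                        Materialized.with(Serdes.String(), Serdes.Long()));
    }
}
```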
Reviewers: Debasish Ghosh <dghosh@acm.org>, Guozhang Wang <guozhang@confluent.io>
Implementation for lazy down-conversion in a chunked manner for efficient memory usage during down-conversion. This pull request is mainly to get initial feedback on the direction of the patch. The patch includes all the main components from KIP-283.
Reviewers: Jason Gustafson <jason@confluent.io>
This patch contains a few follow-up improvements/cleanup for KIP-266:
- Add upgrade notes
- Add missing `commitSync(Duration)` API (see the sketch below)
- Improve timeout messages and fix some naming inconsistencies
- Various small cleanups
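A hedged sketch of the new overload (the consumer setup and topic are omitted/made up; `poll(Duration)` is the KIP-266 API as released):
```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

class BoundedCommitSketch {
    static void pollAndCommit(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Collections.singleton("my-topic"));
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        // ... process records ...
        // New in this follow-up: commitSync with an explicit, bounded timeout.
        consumer.commitSync(Duration.ofSeconds(10));
    }
}
```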
Reviewers: John Roesler <john@confluent.io>, Guozhang Wang <wangguoz@gmail.com>
This commit allows secrets in Connect configs to be externalized and replaced with variable references of the form `${provider:[path:]key}`, where the "path" is optional.
There are 2 main additions to `org.apache.kafka.common.config`: a `ConfigProvider` and a `ConfigTransformer`. The `ConfigProvider` is an interface that allows key-value pairs to be provided by an external source for a given "path". A TTL can be associated with the key-value pairs returned from the path. The `ConfigTransformer` will use instances of `ConfigProvider` to replace variable references in a set of configuration values.
In the Connect framework, `ConfigProvider` classes can be specified in the worker config, and then variable references can be used in the connector config. In addition, the herder can be configured to restart connectors (or not) based on the TTL returned from a `ConfigProvider`. The main class that performs restarts and transformations is `WorkerConfigTransformer`.
Finally, a `configs()` method has been added to both `SourceTaskContext` and `SinkTaskContext`. This allows connectors to get configs with variables replaced by the latest values from instances of `ConfigProvider`.
Most of the other changes in the Connect framework are threading various objects through classes to enable the above functionality.
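As a rough sketch (the `provider` sub-package and the exact `ConfigProvider`/`ConfigData` signatures follow the released API, so treat them as indicative), a toy provider resolving keys from environment variables could look like the following; it would be registered in the worker config and referenced from connector configs as `${env:SOME_KEY}`:
```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class EnvVarConfigProvider implements ConfigProvider {
    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public ConfigData get(String path) {
        // No path hierarchy for environment variables; return everything.
        return new ConfigData(System.getenv());
    }

    @Override
    public ConfigData get(String path, Set<String> keys) {
        Map<String, String> data = keys.stream()
                .filter(k -> System.getenv(k) != null)
                .collect(Collectors.toMap(k -> k, System::getenv));
        return new ConfigData(data);
    }

    @Override
    public void close() { }
}
```
In a worker config this would pair with something like `config.providers=env` and `config.providers.env.class=...EnvVarConfigProvider` (names hypothetical).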
Author: Robert Yokota <rayokota@gmail.com>
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Randall Hauch <rhauch@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5068 from rayokota/KAFKA-6886-connect-secrets
This PR implements the features described in this KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect
This PR changes the Connect framework to allow it to automatically deal with errors encountered while processing records in a Connector. The following behavior changes are introduced here:
**Retry on Failure**: Retry the failed operation a configurable number of times, with backoff between each retry.
**Task Tolerance Limits**: Tolerate a configurable number of failures in a task.
We also add the following ways to report errors, along with sufficient context to simplify the debugging process:
**Log Error Context**: The error information along with processing context is logged along with standard application logs.
**Dead Letter Queue**: Produce the original message into a Kafka topic (applicable only to sink connectors).
New **metrics** are added to monitor the number of failures and the behavior of the response handler.
The changes proposed here **are backward compatible**. The current behavior in Connect is to kill the task on the first error in any stage. This will remain the default behavior if the connector does not override any of the new configurations which are provided as part of this feature.
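As a hedged sketch of opting in (property names per KIP-298; the values and the DLQ topic name are made up), a sink connector config might add:
```java
import java.util.HashMap;
import java.util.Map;

class ErrorHandlingConfigSketch {
    static Map<String, String> overrides() {
        Map<String, String> config = new HashMap<>();
        config.put("errors.retry.timeout", "300000");      // retry a failed operation for up to 5 minutes
        config.put("errors.retry.delay.max.ms", "60000");  // back off between retries, capped at 1 minute
        config.put("errors.tolerance", "all");             // tolerate failures instead of killing the task
        config.put("errors.log.enable", "true");           // log the error with its processing context
        config.put("errors.deadletterqueue.topic.name", "my-sink-errors"); // sink connectors only
        return config;
    }
}
```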
Testing: added multiple unit tests to test the retry and tolerance logic.
Author: Arjun Satish <arjun@confluent.io>
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Magesh Nandakumar <magesh.n.kumar@gmail.com>, Randall Hauch <rhauch@gmail.com>, Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5065 from wicknicks/KAFKA-6378
Currently, the AbstractStream class defines a copy-constructor that allows extending the KStream and KTable APIs with new methods without impacting the public interface.
However, adding a new processor and/or store to the topology is done through the internalTopologyBuilder, which is not accessible from AbstractStream subclasses defined outside of the package (package visibility).
Reviewers: Matthias J. Sax <matthias@confluent.io>, Guozhang Wang <guozhang@confluent.io>