Added Connect metrics specific to source tasks, building upon #3864 and #3911, which have already been merged into `trunk`, and #3959, which has yet to be merged.
I'll rebase this PR when the latter is merged.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3975 from rhauch/kafka-5902
Adds new metrics to support health checks:
1. Error rates for each request type, per error code (see the sketch after this list)
2. Request size and temporary memory size
3. Message conversion rate and time
4. Successful and failed authentication rates
5. ZooKeeper latency and status
6. Client version
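These are broker-side metrics; as a rough sketch of how a per-error-code rate can be expressed with Kafka's `Metrics` API (the sensor, group, and tag names below are invented for the example, and the broker may register its metrics differently):

```java
import java.util.Collections;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class ErrorRateSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        // One sensor per (request type, error code) pair; all names and tags
        // here are invented for the example.
        Sensor fetchErrors = metrics.sensor("fetch-errors");
        fetchErrors.add(metrics.metricName("error-rate", "request-metrics",
                "Fetch errors per second",
                Collections.singletonMap("error", "NOT_LEADER_FOR_PARTITION")), new Rate());
        fetchErrors.record(); // called once per failed fetch request
        metrics.close();
    }
}
```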
Author: Rajini Sivaram <rajinisivaram@googlemail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes #3705 from rajinisivaram/KAFKA-5746-new-metrics
Added Connect metrics specific to source tasks, building upon #3864 and #3911, which have already been merged into `trunk`.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: tedyu <yuzhihong@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3959 from rhauch/kafka-5901
Added metrics that are common to both sink and source tasks.
Marked as "**WIP**" since this PR builds upon #3864 and will need to be rebased once that has been merged into `trunk`. However, I would still appreciate initial reviews, since this PR is largely additive.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3911 from rhauch/kafka-5900
This PR is the first of several subtasks for [KAFKA-2376](https://issues.apache.org/jira/browse/KAFKA-2376) to add metrics to Connect worker processes. See that issue and [KIP-196 for details](https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework).
This PR adds metrics for each connector using Kafka’s existing `Metrics` framework. It is the first of several changes that will add groups of metrics; it starts by adding a very simple `ConnectMetrics` object that is owned by each worker and makes it easy to define multiple groups of metrics, called `ConnectMetricGroup` objects. Each metric group maps to a JMX MBean, and each metric within the group maps to an MBean attribute.
Future PRs will build upon this simple pattern to add metrics for source and sink tasks, workers, and worker rebalances.
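For context, the group-to-MBean mapping relies on Kafka's existing metrics machinery, roughly as in this sketch; the `kafka.connect` prefix and all names here are illustrative, not the exact ones this PR defines:

```java
import java.util.Collections;
import org.apache.kafka.common.metrics.JmxReporter;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Rate;

public class MetricGroupSketch {
    public static void main(String[] args) {
        Metrics metrics = new Metrics();
        // With a JmxReporter, group + tags become the MBean name, e.g.
        // kafka.connect:type=connector-metrics,connector=my-connector
        metrics.addReporter(new JmxReporter("kafka.connect"));

        Sensor startups = metrics.sensor("connector-startup");
        // Each metric in the group becomes an attribute of that MBean.
        startups.add(metrics.metricName("startup-rate", "connector-metrics",
                "Connector starts per second",
                Collections.singletonMap("connector", "my-connector")), new Rate());
        startups.record();
        metrics.close();
    }
}
```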
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewencp@confluent.io>
Closes #3864 from rhauch/kafka-5899
Embed the type of connector in ConnectorInfo
Author: tedyu <yuzhihong@gmail.com>
Reviewers: Randall Hauch <rhauch@gmail.com>, Jason Gustafson <jason@confluent.io>, Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3812 from tedyu/trunk
Also:
1. Fix WorkerTest to use the correct `Mock` annotations, since `org.easymock.Mock`
is not supported by PowerMock 2.x (see the sketch after this list).
2. Rename `powermock` to `powermockJunit4` in `dependencies.gradle` for
clarity.
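For illustration, the annotation swap in item 1 looks like this (the test class and mocked field are hypothetical):

```java
// Before: org.easymock.Mock, which PowerMock 2.x no longer supports.
// import org.easymock.Mock;

// After: PowerMock's own EasyMock annotation.
import org.powermock.api.easymock.annotation.Mock;

public class WorkerTestSketch {
    @Mock
    private Runnable task; // hypothetical mocked collaborator
}
```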
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>
Closes #3881 from ijuma/kafka-5884-powermock-java
- EasyMock 3.5 supports Java 9.
- Fixed issues in `testFailedSendRetryLogic` and
`testCreateConnectorAlreadyExists` exposed by the new EasyMock
version. The former was passing `anyObject` to `andReturn`, which
doesn't make sense and left behind a global `any` matcher that
caused a few issues in the new version (see the sketch after this
list). Fixing this meant that the correlation ids had to be updated
to actually match. The latter was missing a couple of expectations
that the previous version of EasyMock didn't catch.
- Removed unnecessary PowerMock dependency from 3 tests.
- Disabled remaining PowerMock tests when running with Java 9
until https://github.com/powermock/powermock/issues/783 is
in a release.
- Once we merge this PR, we can enable tests in the Java 9 builds
in Jenkins.
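A stripped-down illustration of the `anyObject`-to-`andReturn` mistake mentioned in the second bullet (the `Service` interface is hypothetical):

```java
import static org.easymock.EasyMock.anyObject;
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;

public class AndReturnExample {
    interface Service {
        String lookup(String key);
    }

    public static void main(String[] args) {
        Service service = createMock(Service.class);

        // Wrong: anyObject() is an argument matcher, not a value; passing it to
        // andReturn() leaves a stray global matcher behind, which newer EasyMock
        // versions correctly reject.
        // expect(service.lookup(anyObject())).andReturn(anyObject());

        // Right: match the argument, return a concrete value.
        expect(service.lookup(anyObject())).andReturn("value");
        replay(service);

        System.out.println(service.lookup("key")); // prints "value"
    }
}
```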
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Rajini Sivaram <rajinisivaram@googlemail.com>
Closes #3845 from ijuma/kafka-4501-easymock-powermock-java-9
This patch ensures that the consumer groupId and clientId are available in all log messages, which makes debugging much easier when a single application has multiple consumer instances. To make this easier, I've added a new `LogContext` object which builds a log prefix similar to the broker-side `kafka.utils.Logging` mixin. Additionally, this patch changes the log level for a couple of minor cases:
- Consumer wakeup events are now logged at DEBUG instead of TRACE
- Heartbeat enabling/disabling is now logged at DEBUG instead of TRACE
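`LogContext` usage looks roughly like the following sketch (the exact prefix format in the patch may differ):

```java
import org.apache.kafka.common.utils.LogContext;
import org.slf4j.Logger;

public class LogContextExample {
    public static void main(String[] args) {
        LogContext logContext =
                new LogContext("[Consumer clientId=consumer-1, groupId=my-group] ");
        Logger log = logContext.logger(LogContextExample.class);
        // Every message is automatically prefixed with the client and group IDs.
        log.debug("Received successful heartbeat response");
    }
}
```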
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #3676 from hachikuji/log-consumer-wakeups
Prior to this change, it was possible for the synchronous consumer commit request to be handled before previously-submitted asynchronous commit requests. If that happened, the out-of-order handlers improperly set the last committed offsets, which then became inconsistent with the offsets the connector task was working with.
This change ensures that the last committed offsets are updated only for the most recent commit request, even if the consumer reorders the calls to the callbacks.
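A minimal sketch of that ordering guard, assuming a sequence-number approach; the class, fields, and method here are hypothetical rather than the actual worker task code:

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommitGuardSketch {
    private final KafkaConsumer<byte[], byte[]> consumer;
    private int commitSeqno;  // incremented once per commit request
    private Map<TopicPartition, OffsetAndMetadata> lastCommittedOffsets;

    CommitGuardSketch(KafkaConsumer<byte[], byte[]> consumer) {
        this.consumer = consumer;
    }

    void commit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        final int seqno = ++commitSeqno;
        consumer.commitAsync(offsets, (committed, error) -> {
            // Only the callback for the most recent request may update the state;
            // callbacks for older requests are ignored even if they arrive late.
            if (seqno == commitSeqno && error == null)
                lastCommittedOffsets = committed;
        });
    }
}
```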
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes #3662 from rhauch/kafka-5731
This ensures all logs have the connector/task ID, whether tasks are source or sink, and formats them consistently.
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Randall Hauch <rhauch@gmail.com>, Konstantine Karantasis <konstantine@confluent.io>, Jason Gustafson <jason@confluent.io>
Closes #3639 from ewencp/standardize-connector-task-logging
When a Connect distributed worker starts up against brokers running 0.10.1.0 or later, it uses the AdminClient to look for the internal topics and attempts to create them if they are missing. Although the AdminClient was added in 0.11.0.0, the APIs it uses to create topics have existed since 0.10.1.0, so this feature works as expected with any broker from 0.10.1.0 onward.
However, when a Connect distributed worker starts up against a broker older than 0.10.1.0, the AdminClient cannot find the APIs it requires and throws an UnsupportedVersionException. Unfortunately, this exception was not caught and instead caused the Connect worker to fail even when the topics already existed.
This change handles the UnsupportedVersionException by logging a debug message and doing nothing else. The existing producer logic will still request metadata for the topics, which causes the broker to create them if they don’t exist and broker auto-creation of topics is enabled. This is the same behavior that existed prior to 0.11.0.0, so this change restores that behavior for brokers older than 0.10.1.0.
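Conceptually, the handling looks like this sketch (simplified; the class and method are illustrative):

```java
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.UnsupportedVersionException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class TopicCreationSketch {
    private static final Logger log = LoggerFactory.getLogger(TopicCreationSketch.class);

    static void createTopicIfPossible(AdminClient admin, NewTopic topic) throws Exception {
        try {
            admin.createTopics(Collections.singleton(topic)).all().get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof UnsupportedVersionException) {
                // Broker predates the CreateTopics API (0.10.1.0): do nothing and
                // fall back to broker auto-creation via the producer's metadata requests.
                log.debug("Broker does not support CreateTopics; relying on topic auto-creation");
            } else {
                throw e;
            }
        }
    }
}
```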
This change also adds a system test that verifies Connect works with a variety of brokers and is able to run source and sink connectors. The test verifies that Connect can read from the internal topics when the connectors are restarted.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3641 from rhauch/kafka-5704
Clean up includes:
- Switching try-catch-finally blocks to try-with-resources when possible (see the sketch after this list)
- Removing some seemingly unnecessary `SuppressWarnings` annotations
- Resolving some Java warnings
- Closing unclosed `Closeable` objects
- Removing unused code
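As an example of the try-with-resources conversion from the first item (the `before`/`after` methods are contrived for illustration):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TryWithResourcesExample {
    // Before: explicit finally block to close the stream.
    static void before(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
            in.read();
        } finally {
            in.close();
        }
    }

    // After: try-with-resources closes the stream automatically, even on error.
    static void after(String path) throws IOException {
        try (InputStream in = new FileInputStream(path)) {
            in.read();
        }
    }
}
```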
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Balint Molnar <balintmolnar91@gmail.com>, Guozhang Wang <wangguoz@gmail.com>, Matthias J. Sax <matthias@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
Closes #3222 from vahidhashemian/minor/code_cleanup_1706
More input validation for SchemaBuilder methods.
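For example, the stricter validation now rejects obviously invalid arguments up front (a hedged illustration; the exact exception thrown may vary by method):

```java
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class SchemaValidationExample {
    public static void main(String[] args) {
        SchemaBuilder builder = SchemaBuilder.struct();
        builder.field("id", Schema.INT64_SCHEMA);   // fine
        builder.field(null, Schema.STRING_SCHEMA);  // now rejected with an exception
    }
}
```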
Author: Jeremy Custenborder <jcustenborder@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3474 from jcustenborder/KAFKA-5548
Kafka Connect was adding duplicate group names in the response from the REST API's validation of connector configurations. This fixes the duplicates and maintains the order of the `ConfigDef` objects so that the `ConfigValue` results are in the same order.
This is a blocker and should be merged to 0.11.0.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3379 from rhauch/KAFKA-5472
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Colin P. Mccabe <cmccabe@confluent.io>, Jason Gustafson <jason@confluent.io>
Closes #3339 from ijuma/kafka-5275-admin-client-api-consistency
Related to https://github.com/apache/kafka/pull/3321
Author: Konstantine Karantasis <konstantine@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3326 from kkonstantine/MINOR-Add-tests-for-PluginDesc
When the `SetSchemaMetadata` SMT is used to change the name and/or version of the key or value’s schema, any references to the old schema in the key or value must be changed to reference the new schema. Only keys or values that are `Struct` have such references, and so currently only these are adjusted.
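A rough sketch of that rewrite for `Struct` values; the method below is illustrative of the idea, not the SMT's exact source:

```java
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;

public class SchemaRewriteSketch {
    static Struct updateSchemaIn(Struct original, Schema updatedSchema) {
        // Copy each field so the new Struct references the renamed schema
        // instead of the old one.
        Struct updated = new Struct(updatedSchema);
        for (Field field : updatedSchema.fields())
            updated.put(field, original.get(field.name()));
        return updated;
    }
}
```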
This is based on `trunk` since the fix is expected to be targeted to the 0.11.1 release.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #3198 from rhauch/kafka-5164
- Added a boolean `allow_auto_topic_creation` to MetadataRequest and
bumped the protocol version to V4 (see the sketch after this list).
- When connecting to brokers older than 0.11.0.0, the `allow_auto_topic_creation`
field won't be considered, so we send a metadata request for all topics
to keep the behavior consistent.
- Set `allow_auto_topic_creation` to false in the new AdminClient and
StreamsKafkaClient (which exists for the purpose of creating topics
manually); set it to true everywhere else for now. Other clients will eventually
rely on client-side auto topic creation, but that’s not there yet.
- Add `allowAutoTopicCreation` field to `Metadata`, which is used by
`DefaultMetadataUpdater`. This is not strictly needed for the new
`AdminClient`, but it avoids surprises if it ever adds a topic to `Metadata`
via `setTopics` or `addTopic`.
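Illustratively, the client-internal request builder gains the flag along these lines (a sketch of internal API, not a public interface):

```java
import java.util.Collections;
import org.apache.kafka.common.requests.MetadataRequest;

public class MetadataRequestSketch {
    public static void main(String[] args) {
        // V4 metadata request that asks the broker not to auto-create the topic.
        MetadataRequest.Builder builder =
                new MetadataRequest.Builder(Collections.singletonList("my-topic"), false);
        System.out.println(builder);
    }
}
```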
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jun Rao <junrao@gmail.com>
Closes #3098 from ijuma/kafka-5291-admin-client-no-auto-topic-creation
Summary:
- add `reconnect.backoff.max.ms` common client configuration parameter
- if `reconnect.backoff.max.ms` > `reconnect.backoff.ms`, apply an exponential backoff policy (see the sketch after this list)
- apply +/- 20% random jitter to smooth cluster reconnects
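A minimal sketch of the resulting policy (illustrative, not the client's internal connection-state code):

```java
import java.util.concurrent.ThreadLocalRandom;

public class ReconnectBackoffSketch {
    // baseMs is reconnect.backoff.ms, maxMs is reconnect.backoff.max.ms.
    static long backoffMs(int failures, long baseMs, long maxMs) {
        double exp = Math.min((double) maxMs, baseMs * Math.pow(2, failures));
        // +/- 20% jitter so that many clients do not reconnect in lockstep.
        double jitter = 0.8 + ThreadLocalRandom.current().nextDouble() * 0.4;
        return (long) (exp * jitter);
    }

    public static void main(String[] args) {
        for (int failures = 0; failures < 5; failures++)
            System.out.println(backoffMs(failures, 50, 1000));
    }
}
```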
Author: Dana Powers <dana.powers@gmail.com>
Reviewers: Ewen Cheslack-Postava <me@ewencp.org>, Roger Hoover <roger.hoover@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #1523 from dpkp/exp_backoff
The backing store for offsets, status, and configs now attempts to use the new AdminClient to look up the internal topics and create them if they don’t yet exist. If the necessary APIs are not available in the connected broker, the stores fall back to the old behavior of relying upon auto-created topics. Kafka Connect requires a minimum of Apache Kafka 0.10.0.1-cp1, and the AdminClient can work with all versions since 0.10.0.0.
All three of Connect’s internal topics are created as compacted topics. New distributed worker configuration properties control the replication factor for all three topics and the number of partitions for the offsets and status topics; the config topic always has a single partition, and this cannot be changed via configuration. All of these new configuration properties have sensible defaults, so users can upgrade without changing any existing configurations. In most situations, existing Connect deployments will have already created the storage topics prior to upgrading.
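For example, creating a compacted internal topic with the AdminClient looks roughly like this (the topic name, partition count, and replication factor below are illustrative):

```java
import java.util.Collections;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class InternalTopicSketch {
    static void createConfigTopic(AdminClient admin) throws Exception {
        // One partition, replication factor 3, cleanup.policy=compact.
        NewTopic configTopic = new NewTopic("connect-configs", 1, (short) 3)
                .configs(Collections.singletonMap(
                        TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT));
        admin.createTopics(Collections.singleton(configTopic)).all().get();
    }
}
```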
The replication factor defaults to 3, so anyone running a Kafka cluster with fewer than 3 nodes will receive an error unless they explicitly set the replication factor for the three internal topics. This is desired behavior, since it signals to users that they are not using sufficient replication for production use.
The integration tests use a cluster with a single broker, so they were changed to explicitly specify a replication factor of 1 and a single partition.
The `KafkaAdminClientTest` was refactored to extract a utility for setting up a `KafkaAdminClient` with a `MockClient` for unit tests.
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #2984 from rhauch/kafka-4667
Also:
1. FindCoordinator is more general and takes a coordinator_type
so that it can be used for the group and transaction coordinators
(see the sketch after this list).
2. Include an error message in FindCoordinatorResponse to make the
errors at the client side more informative. We have just added the
field to the protocol in this PR; a subsequent PR will update the
code to use it.
3. Rename `Errors` names for FindCoordinator to be more generic. This
is a compatible change as the ids remain the same.
4. Since the exception classes for the error codes are in a public
package, we introduce new ones and deprecate the old ones.
The classes were not thrown back to the user (KAFKA-5052 aside),
so this is a compatible change.
5. Update InitPidRequest for transactions. Since this protocol API
was introduced recently and is not used by default, we did not bump
its version.
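Illustratively, a client now asks for a specific coordinator type, roughly as follows (a sketch of the internal request-builder API inferred from the description above, so treat the exact signature as an assumption):

```java
import org.apache.kafka.common.requests.FindCoordinatorRequest;
import org.apache.kafka.common.requests.FindCoordinatorRequest.CoordinatorType;

public class FindCoordinatorSketch {
    public static void main(String[] args) {
        // coordinator_type distinguishes group coordinators from transaction coordinators.
        FindCoordinatorRequest.Builder groupLookup =
                new FindCoordinatorRequest.Builder(CoordinatorType.GROUP, "my-group");
        FindCoordinatorRequest.Builder txnLookup =
                new FindCoordinatorRequest.Builder(CoordinatorType.TRANSACTION, "my-txn-id");
        System.out.println(groupLookup + " " + txnLookup);
    }
}
```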
Author: Apurva Mehta <apurva@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes #2825 from apurvam/exactly-once-rpc-stubs
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Damian Guy <damian.guy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #2660 from ewencp/minor-make-configdef-safer
https://issues.apache.org/jira/browse/KAFKA-4810
> Currently SchemaBuilder is strict when checking that certain fields have not been set yet (e.g. version, name, doc). It just checks that the field is null. This is intended to protect the user from buggy code that overwrites a field with different values, but it's a bit too strict currently. In generic code for converting schemas (e.g. Converters) you will sometimes initialize a builder with these values (e.g. because you get a SchemaBuilder for a logical type, which sets name & version), but then have generic code for setting name & version from the source schema.
Changed the validation method so that setting a field succeeds not only when the field is currently null, but also when the new value being set is equal to the field's current value.
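The relaxed check is roughly the following (a sketch of the described logic, not necessarily the exact source):

```java
import org.apache.kafka.connect.errors.SchemaBuilderException;

public class CheckCanSetSketch {
    // Allow setting a field that is unset, or re-setting it to an equal value.
    static void checkCanSet(String fieldName, Object currentValue, Object newValue) {
        if (currentValue != null && !currentValue.equals(newValue))
            throw new SchemaBuilderException(fieldName + " has already been set.");
    }
}
```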
Author: Vitaly Pushkar <vitaly.pushkar@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #2806 from vitaly-pushkar/KAFKA-4810-schema-builder-default-fields-validation
Author: Colin P. Mccabe <cmccabe@confluent.io>
Reviewers: Konstantine Karantasis <konstantine@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #2763 from cmccabe/KAFKA-4977