During actions such as a reconfiguration, the task configs are obtained
via `Worker.connectorTaskConfigs` and then subsequently saved into an
instance of `ClusterConfigState`. The property values that are saved
are post-transformation (variable references already resolved) when they
should be pre-transformation, so that secrets do not appear in plaintext
in the `connect-configs` topic, for example.
The fix is to change the two clients of `Worker.connectorTaskConfigs` to
perform a reverse transformation (values converted back into variable
references) before saving them into an instance of `ClusterConfigState`.
The two places where the save is performed are
`DistributedHerder.reconfigureConnector` and
`StandaloneHerder.updateConnectorTasks`.
The reverse transformation works by using the "raw" connector config
(with variable references still intact) from `ClusterConfigState` to
convert config values back into variable references for those keys that
are common between the task config and the connector config.
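A minimal sketch of that idea (class, method, and map names here are hypothetical, not the actual Kafka Connect code):

```java
import java.util.HashMap;
import java.util.Map;

final class ReverseTransformSketch {
    // For every key the task config shares with the raw connector config,
    // restore the raw (pre-transformation) value, e.g. "${vault:path:secret}".
    static Map<String, String> reverseTransform(Map<String, String> rawConnectorConfig,
                                                Map<String, String> taskConfig) {
        Map<String, String> restored = new HashMap<>(taskConfig);
        for (Map.Entry<String, String> raw : rawConnectorConfig.entrySet()) {
            if (restored.containsKey(raw.getKey()))
                restored.put(raw.getKey(), raw.getValue());
        }
        return restored;
    }
}
```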
There are two additional small changes that affect only `StandaloneHerder`:
1) `ClusterConfigState.allTasksConfigs` has been changed to perform a
transformation (resolution) on all variable references. This is
necessary because the result of this method is compared directly to
`Worker.connectorTaskConfigs`, which also has variable references
resolved.
2) `StandaloneHerder.startConnector` has been changed to match
`DistributedHerder.startConnector`. This is to fix an issue where
during `StandaloneHerder.restartConnector`, the post-transformed
connector config would be saved back into `ClusterConfigState`.
I also performed an analysis of all other code paths where configs are
saved back into `ClusterConfigState` and did not find any other
issues.
Author: Robert Yokota <rayokota@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5475 from rayokota/KAFKA-7242-reverse-xform-props
Sender/RecordAccumulator never resets the next batch expiry time. It's always computed as the min of the current value and the expiry time for all batches being processed, which means it remains pinned to the expiry time of the first batch; once that time has passed, Sender starts spinning on epoll with a timeout of 0, which consumes a lot of CPU. This patch updates Sender to reset the next batch expiry time on each poll loop so that a new value reflecting the expiry time for the current set of batches is computed.
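A simplified sketch of the fix (names are illustrative, not the exact Sender internals):

```java
import java.util.List;

final class ExpiryResetSketch {
    interface Batch { long deliveryDeadlineMs(); }

    private long nextBatchExpiryTimeMs = Long.MAX_VALUE;

    long pollTimeoutMs(List<Batch> inFlightBatches, long nowMs) {
        // The fix: reset the minimum on every loop instead of carrying it over,
        // which otherwise pins it to the first batch's (past) deadline and makes
        // poll() spin with a timeout of 0.
        nextBatchExpiryTimeMs = Long.MAX_VALUE;
        for (Batch batch : inFlightBatches)
            nextBatchExpiryTimeMs = Math.min(nextBatchExpiryTimeMs, batch.deliveryDeadlineMs());
        return Math.max(0, nextBatchExpiryTimeMs - nowMs);
    }
}
```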
Reviewers: Guozhang Wang <wangguoz@gmail.com>
1) As titled, add a rewriteTopology that (a) sets the application id, (b) maybe disables caching, and (c) adjusts for source KTables. This optimization can hence be applied to both DSL- and PAPI-generated Topologies.
2) Defer the building of globalStateStores to rewriteTopology so that we can also disable caching for them. We still need to build the state stores before InternalTopologyBuilder.build(), since global stores should be built only once for all threads.
3) Added withCachingDisabled to StoreBuilder; it is a public API change (see the usage sketch after this list).
4) [Optional] Fixed the unit test config-setting functionality and set the necessary configs to shorten unit test latency (it now drops from 5 min to 3.5 min on my laptop).
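A minimal usage sketch of the new method (the store name and serdes are arbitrary examples):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

final class CachingDisabledExample {
    static StoreBuilder<KeyValueStore<String, Long>> countsStoreBuilder() {
        return Stores.keyValueStoreBuilder(
                Stores.persistentKeyValueStore("counts"), // store name is an example
                Serdes.String(),
                Serdes.Long())
            .withCachingDisabled();                       // opt this store out of caching
    }
}
```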
Reviewers: Matthias J. Sax <matthias@confluent.io>, John Roesler <john@confluent.io>, Bill Bejeck <bill@confluent.io>, Ted Yu <yuzhihong@gmail.com>
We should check TxnOffsetCommit responses for the COORDINATOR_LOAD_IN_PROGRESS error code and retry if we see it. Additionally, if we encounter an abortable error, we need to ensure that pending transaction offset commits are cleared.
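A hedged sketch of that handling (structure and names are simplified, not the literal TransactionManager code):

```java
import java.util.HashMap;
import java.util.Map;

final class TxnOffsetCommitSketch {
    enum Error { NONE, COORDINATOR_LOAD_IN_PROGRESS, INVALID_PRODUCER_EPOCH }

    private final Map<String, Long> pendingTxnOffsetCommits = new HashMap<>();

    void handleResponse(Error error) {
        if (error == Error.COORDINATOR_LOAD_IN_PROGRESS) {
            retryAfterBackoff();             // retriable: resend the TxnOffsetCommit request
        } else if (error != Error.NONE) {
            pendingTxnOffsetCommits.clear(); // abortable: drop pending offset commits
            transitionToAbortableError(error);
        }
    }

    private void retryAfterBackoff() { /* re-enqueue the request */ }
    private void transitionToAbortableError(Error e) { /* mark the transaction abortable */ }
}
```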
Reviewers: Viktor Somogyi <viktorsomogyi@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
Ensure that sends are completed before waiting for the channel to be closed based on idle expiry, since the channel will not be expired if it is added to the ready keys in the next poll as a result of pending sends.
Reviewers: Jun Rao <junrao@gmail.com>
Use a volatile field to track the size of the set of assigned partitions, avoiding concurrent access to the underlying linked hash map.
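A sketch of the approach (names hypothetical): mutations hold the object lock and republish the size, while readers touch only the volatile field.

```java
import java.util.LinkedHashSet;
import java.util.Set;

final class AssignedPartitionsSketch {
    private final Set<String> assignedPartitions = new LinkedHashSet<>(); // guarded by "this"
    private volatile int numAssignedPartitions = 0;

    synchronized void assignFromUser(Set<String> partitions) {
        assignedPartitions.clear();
        assignedPartitions.addAll(partitions);
        numAssignedPartitions = assignedPartitions.size(); // publish the new size
    }

    int assignedPartitionsCount() {
        return numAssignedPartitions; // lock-free read; never touches the LinkedHashSet
    }
}
```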
Reviewers: Vahid Hashemian <vahidhashemian@us.ibm.com>, Rajini Sivaram <rajinisivaram@googlemail.com>, Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
In AK's documentation, the config props for connectors are not listed (https://kafka.apache.org/documentation/#connectconfigs). This PR adds the sink and source connector configs to the html site-docs.
Signed-off-by: Arjun Satish <arjun@confluent.io>
Author: Arjun Satish <arjun@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes #5469 from wicknicks/add-connector-configs-to-docs
We currently do a lot of bookkeeping for timeouts which is both error-prone and distracting. This patch adds a new `Timer` class to simplify this logic and control unnecessary calls to system time. In particular, this helps with nested timeout operations. The consumer has been updated to use the new class.
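A sketch of the pattern this enables, using the API surface added in this patch (`isExpired`, `remainingMs`, `update`); the loop body is illustrative:

```java
import org.apache.kafka.common.utils.Time;
import org.apache.kafka.common.utils.Timer;

public final class TimerUsageSketch {
    public static void main(String[] args) {
        Time time = Time.SYSTEM;
        Timer timer = time.timer(5000); // one overall budget threaded through nested calls

        while (!timer.isExpired()) {
            doWorkBoundedBy(timer.remainingMs()); // each nested call gets what is left
            timer.update();                       // one system-time read per iteration
        }
    }

    private static void doWorkBoundedBy(long timeoutMs) { /* blocking call bounded by timeoutMs */ }
}
```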
Reviewers: Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
Currently Fetcher.getTopicMetadata() will not include offline partitions. Thus
KafkaConsumer.partitionsFor(topic) will not return all partitions of a topic if
any partition of the topic is offline. This causes problems if a user
tries to query the total number of partitions of the given topic.
Author: radai-rosenblatt <radai.rosenblatt@gmail.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Rajini Sivaram <rajinisivaram@googlemail.com>
Closes #4679 from radai-rosenblatt/partition_shenanigans
Fixed incorrect use of default timeout instead of the argument explicitly passed to `newClientRequest`.
Reviewers: Ron Dagostino <rndgstn@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Use delivery timeout instead of retries when possible and remove various TODOs associated with completion of KIP-91.
Reviewers: Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
This patch forces metadata update for consumers with pattern subscription at the beginning of rebalance (retry.backoff.ms is respected). This is to prevent such consumers from detecting subscription changes (e.g., new topic creation) independently and triggering multiple unnecessary rebalances. KAFKA-7126 contains detailed scenarios and rationale.
Author: Jon Lee <jonlee@linkedin.com>
Reviewers: Jason Gustafson <jason@confluent.io>, Ted Yu <yuzhihong@gmail.com>, Dong Lin <lindong28@gmail.com>
Closes #5408 from jonlee2/KAFKA-7126
An untimely wakeup can cause ConsumerCoordinator.onJoinComplete to throw a WakeupException before completion. On the next poll(), it will be retried, but this leads to an underflow error because the buffer containing the assignment data will already have been advanced. The solution is to duplicate the buffer passed to onJoinComplete.
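A minimal sketch of the fix (method names hypothetical): the duplicate shares the assignment bytes but has an independent position, so a retried call starts reading from the beginning again.

```java
import java.nio.ByteBuffer;

final class JoinCompleteSketch {
    static void onJoinCompleteSafely(ByteBuffer memberAssignment) {
        ByteBuffer copy = memberAssignment.duplicate(); // same content, independent position
        deserializeAssignment(copy); // may throw WakeupException and be retried safely
    }

    private static void deserializeAssignment(ByteBuffer buf) {
        /* read fields, advancing only the copy's position */
    }
}
```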
Reviewers: Konstantine Karantasis <konstantine@confluent.io>, Rajini Sivaram <rajinisivaram@googlemail.com>
After successful completion of KafkaProducer#close, it is possible that an application calls KafkaProducer#send. If the send is invoked for a topic for which we do not have any metadata, the producer will block until `max.block.ms` elapses - we do not expect to receive any metadata update in this case because Sender (and NetworkClient) has already exited. It is only when RecordAccumulator#append is invoked that we notice that the producer has already been closed and throw an exception. If `max.block.ms` is set to Long.MaxValue (or a sufficiently high value in general), the producer could block awaiting metadata indefinitely.
This patch makes sure `Metadata#awaitUpdate` periodically checks if the network client has been closed, and if so bails out as soon as possible.
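A simplified sketch of the guarded wait (names approximate, not the literal `Metadata` code):

```java
final class MetadataAwaitSketch {
    private int version = 0;
    private volatile boolean closed = false; // set by close()

    synchronized void awaitUpdate(int lastVersion, long timeoutMs) throws InterruptedException {
        long deadlineMs = System.currentTimeMillis() + timeoutMs;
        while (version <= lastVersion) {
            if (closed)
                throw new IllegalStateException("Producer closed while awaiting metadata update");
            long remainingMs = deadlineMs - System.currentTimeMillis();
            if (remainingMs <= 0)
                throw new IllegalStateException("Timed out waiting for metadata update");
            wait(Math.min(remainingMs, 100)); // wake up to re-check the closed flag
        }
    }

    synchronized void update() { version++; notifyAll(); }
    synchronized void close() { closed = true; notifyAll(); }
}
```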
The log line says `ms`, but the actual value could represent a
different time unit depending on what the user provided.
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
SslTransportLayer currently closes the SSL engine and logs a warning if the close_notify message cannot be sent because the remote end closed its connection. This tends to fill up broker logs, especially when using clients which close connections immediately. Since this log entry is not very useful anyway, it would be better to log at debug level.
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
The SASL/OAUTHBEARER client response as currently implemented in OAuthBearerSaslClient sends the valid gs2-header "n,," but then sends the "auth" key and value immediately after it.
This does not conform to the specification because there is no %x01 after the gs2-header, no %x01 after the auth value, and no terminating %x01. Fixed this and the parsing of the client response in
OAuthBearerSaslServer, which currently allows the malformed text. Also updated to accept and ignore unknown properties as required by the spec.
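For reference, a sketch of a correctly framed client response per the spec (the token value is a placeholder):

```java
import java.nio.charset.StandardCharsets;

final class OAuthBearerResponseSketch {
    private static final char SEP = '\u0001'; // the %x01 separator

    static byte[] clientInitialResponse(String accessToken) {
        // gs2-header, %x01, auth key/value, %x01, terminating %x01
        String message = "n,," + SEP + "auth=Bearer " + accessToken + SEP + SEP;
        return message.getBytes(StandardCharsets.UTF_8);
    }
}
```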
Reviewers: Stanislav Kozlovski <familyguyuser192@windowslive.com>, Rajini Sivaram <rajinisivaram@googlemail.com>
SSL `close_notify` from broker connection close was processed as a handshake failure in clients while unwrapping the message if a handshake is in progress. Updated to handle this as a retriable IOException rather than a non-retriable SslAuthenticationException to avoid authentication exceptions in clients during rolling restart of brokers.
Reviewers: Ismael Juma <ismael@juma.me.uk>
KafkaProducer.send() blocks on a topic name with an illegal char and generates InvalidTopicException.
If the max.block.ms config parameter is set to a non-zero value,
KafkaProducer.send() blocks for the full max.block.ms if the topic name has an
illegal char or is invalid.
Wrote a unit test that verifies the appropriate exception is returned when
calling get() on the future returned by KafkaProducer.send().
Author: Ahmed Al Mehdi <aalmehdi@aalmehdi-ld1.linkedin.biz>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Joel Koshy <jjkoshy@gmail.com>, Manikumar Reddy O <manikumar.reddy@gmail.com>
Closes #5247 from ahmedha/KAFKA-5098
Add some additional size validation to prevent overflows when using `FileRecords`.
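A sketch of the kind of bounds check involved (illustrative, not the exact `FileRecords` code): widen to long before adding so the check itself cannot overflow.

```java
final class SliceValidationSketch {
    static void validateSlice(long fileSize, int position, int length) {
        if (position < 0 || length < 0)
            throw new IllegalArgumentException("position and length must be non-negative");
        long end = (long) position + length; // plain int addition here could overflow
        if (end > fileSize)
            throw new IllegalArgumentException(
                "Slice [" + position + ", " + end + ") extends past end of file of size " + fileSize);
    }
}
```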
Reviewers: Dhruvil Shah <dhruvil@confluent.io>, Ismael Juma <ismael@juma.me.uk>
We need to use the same lock for metric update and read to avoid NPE and concurrent modification exceptions. Sensor add/remove/update are synchronized on Sensor since they access lists and maps that are not thread-safe. Reporters are notified of metrics add/remove while holding (Sensor, Metrics) locks and reporters may synchronize on the reporter lock. Metric read may be invoked by metrics reporters while holding a reporter lock. So read/update cannot be synchronized using Sensor since that could lead to deadlock. This PR introduces a new lock in Sensor for update/read.
Locking order:
- Sensor#add: Sensor -> Metrics -> MetricsReporter
- Metrics#removeSensor: Sensor -> Metrics -> MetricsReporter
- KafkaMetric#metricValue: MetricsReporter -> Sensor#metricLock
- Sensor#record: Sensor -> Sensor#metricLock
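A simplified sketch of the split-lock pattern (not the literal Sensor code): structural changes take the Sensor lock, while value reads and updates take only the inner metricLock, so a reporter can read a value without ever acquiring the Sensor lock.

```java
final class SensorSketch {
    private final Object metricLock = new Object(); // guards metric values only
    private double value;

    synchronized void add(/* metric definition */) {
        // Mutate the (non-thread-safe) metric lists/maps, then notify reporters:
        // Sensor -> Metrics -> MetricsReporter, per the locking order above.
    }

    synchronized void record(double v) {
        synchronized (metricLock) {
            value += v; // update path: Sensor -> Sensor#metricLock
        }
    }

    double metricValue() {
        synchronized (metricLock) {
            return value; // read path: MetricsReporter -> Sensor#metricLock, no Sensor lock
        }
    }
}
```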
Reviewers: Jun Rao <junrao@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
KAFKA-6986: Export Admin Client metrics through Stream Threads
We already exported producer and consumer metrics through the KafkaStreams class:
#4998
It makes sense to also export the Admin client metrics.
I didn't add a separate unit test case for this. Let me know if it's needed.
This is my first contribution, so feel free to point out any mistakes I made.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
There are cases where the broker would return an unresolvable address (e.g., a broker inside a Docker network while the client is outside) and the client would not log any information as to why it is timing out, since the default log level does not print DEBUG messages.
Changing this log level will enable easier troubleshooting in such circumstances. This change does not affect the logs shown on transient failures like a broker failure.
We need to ensure that the last poll time is always updated when the user calls poll(Duration). This patch fixes a bug in the new KIP-266 timeout behavior which would cause this update to be skipped if the coordinator could not be found while the consumer was in an active group.
Note that I've also fixed some type inconsistencies for various timeouts.
Reviewers: Guozhang Wang <wangguoz@gmail.com>
Previously, the connection-creation metric accounted only for connections opened by the broker. This change extends it to also account for received connections.
FileInputStream and FileOutputStream rely on finalizers (before Java 11),
which create unnecessary GC load. The alternatives are as easy to
use and don't have this issue.
Also use FileChannel directly instead of retrieving
it from RandomAccessFile whenever possible,
since the indirection is unnecessary.
Finally, add a few try/finally blocks.
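An example of the substitution (paths illustrative): the NIO factory methods return streams that do not register a finalizer, unlike FileInputStream/FileOutputStream on pre-11 JVMs.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

final class StreamReplacementSketch {
    static void copy(Path source, Path target) throws IOException {
        // Instead of new FileInputStream(...) / new FileOutputStream(...):
        try (InputStream in = Files.newInputStream(source);
             OutputStream out = Files.newOutputStream(target)) {
            in.transferTo(out); // transferTo is available since Java 9
        }
    }
}
```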
Reviewers: Colin Patrick McCabe <colin@cmccabe.xyz>, Rajini Sivaram <rajinisivaram@googlemail.com>
Leftover threads doing network I/O can interfere with subsequent tests. Add missing shutdown in tests and include admin client in the check for leftover threads.
Reviewers: Anna Povzner <anna@confluent.io>, Dhruvil Shah <dhruvil@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Manikumar Reddy O <manikumar.reddy@gmail.com>