Leverage the fix from KAFKA-2690 to remove secrets from task logging
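For context, KAFKA-2690 introduced a password configuration type whose string form hides the underlying value; a minimal sketch of relying on it so that a task's logged configuration does not expose secrets (the config key name here is illustrative, not an actual Connect setting):
```java
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;
import org.apache.kafka.common.config.types.Password;

public class SecretAwareConfig {
    // Illustrative key name, not an actual Connect config.
    public static final String DB_PASSWORD_CONFIG = "db.password";

    public static final ConfigDef CONFIG_DEF = new ConfigDef()
            // Declaring the value as Type.PASSWORD wraps it in Password,
            // whose toString() masks the secret instead of printing it.
            .define(DB_PASSWORD_CONFIG, Type.PASSWORD, Importance.HIGH,
                    "Password used to connect to the source system.");

    public static void main(String[] args) {
        Password secret = new Password("s3cr3t");
        // Logging the wrapper does not reveal the underlying value.
        System.out.println("configured password = " + secret);
        System.out.println("actual value length = " + secret.value().length());
    }
}
```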
Author: rnpridgeon <ryan.n.pridgeon@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2115 from rnpridgeon/KAFKA-4364
The Kafka Connect REST API does not handle connectors with slashes in their names in many places because it expects path parameters. This PR intends to:
* Reject as bad requests any API calls that try to create connectors with slashes in their names
* Add support for connectors with slashes in their names in the DELETE part of the API, so users can clean up their connectors without dropping everything.
This PR also adds the unit test needed for the creation part; the DELETE part was tested manually.
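A hypothetical illustration of the kind of name check the REST layer can apply (the helper class and the use of the standard JAX-RS `BadRequestException` are assumptions for this sketch, not the actual PR code):
```java
import javax.ws.rs.BadRequestException;

/** Hypothetical helper showing the kind of validation described above. */
public final class ConnectorNames {
    private ConnectorNames() {}

    /** Returns the name unchanged, or rejects it as a bad request. */
    public static String requireNoSlashes(String name) {
        if (name == null || name.isEmpty())
            throw new BadRequestException("Connector name must not be empty");
        if (name.contains("/"))
            throw new BadRequestException("Connector name must not contain '/': " + name);
        return name;
    }
}
```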
Author: Olivier Girardot <o.girardot@lateral-thoughts.com>
Reviewers: Shikhar Bhushan <shikhar@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#2096 from ogirardot/fix/connectors-with-slashes-cannot-be-deleted
When storing a non-primitive type in a Connect offset, the following NullPointerException will occur:
```
07:18:23.702 [pool-3-thread-1] ERROR o.a.k.c.storage.OffsetStorageWriter - CRITICAL: Failed to serialize offset data, making it impossible to commit offsets under namespace tenant-db-bootstrap-source. This likely won't recover unless the unserializable partition or offset information is overwritten.
07:18:23.702 [pool-3-thread-1] ERROR o.a.k.c.storage.OffsetStorageWriter - Cause of serialization failure:
java.lang.NullPointerException: null
at org.apache.kafka.connect.storage.OffsetUtils.validateFormat(OffsetUtils.java:51)
at org.apache.kafka.connect.storage.OffsetStorageWriter.doFlush(OffsetStorageWriter.java:143)
at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:319)
... snip ...
```
The attached patch fixes the specific case where OffsetUtils.validateFormat attempts to provide a useful error message but fails to do so because the schemaType method can return null.
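A simplified sketch of the guard (not the actual `OffsetUtils` code; the private helper below stands in for the real schema-type lookup, which returns null for unsupported classes):
```java
import java.util.Map;
import org.apache.kafka.connect.errors.DataException;

final class OffsetFormatCheck {
    static void validate(Map<String, Object> offsetData) {
        for (Map.Entry<String, Object> entry : offsetData.entrySet()) {
            Object value = entry.getValue();
            if (value == null)
                continue;
            // May be null for non-primitive types; check before using it in a message.
            String schemaType = primitiveSchemaType(value);
            if (schemaType == null)
                throw new DataException("Offsets may only contain primitive types as values, but field "
                        + entry.getKey() + " contains " + value.getClass());
        }
    }

    // Stand-in for the real schema-type lookup.
    private static String primitiveSchemaType(Object value) {
        if (value instanceof String) return "STRING";
        if (value instanceof Long || value instanceof Integer) return "INT64";
        if (value instanceof Boolean) return "BOOLEAN";
        return null;
    }
}
```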
This contribution is my original work and I license the work to the project under the project's open source license.
Author: Mathieu Fenniak <mathieu.fenniak@replicon.com>
Reviewers: Gwen Shapira
Closes#2087 from mfenniak/fix-npr-with-clearer-error-message
There should be only one case where these clean-ups have a functional impact: replaced repeated identical logs with a single log for the stale controller epoch case.
The rest should just make the code easier to read and make it a bit less wasteful. I did this exercise because unused variables sometimes mask bugs.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#1985 from ijuma/remove-unused
And improve readability by adding proper punctuation.
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes#2002 from vahidhashemian/doc/fix_typos
It is cleaner to check just once for an optional schema and a default value in the `convertToConnect()` function.
It also helps address an issue with conversions for logical type schemas that have default values and null as the included value. That test case is _probably_ not an issue in practice, since when using the `JsonConverter` to serialize a missing field with a default value, it will serialize the default value for the field. But in the face of JSON data streaming in from a topic, being [generous on input, strict on output](http://tedwise.com/2009/05/27/generous-on-input-strict-on-output) seems best.
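A simplified sketch of that single up-front check (not the full converter; it only shows how a missing or null JSON value resolves to the schema default, then to null for an optional schema, and otherwise to an error):
```java
import com.fasterxml.jackson.databind.JsonNode;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;

final class JsonToConnect {
    static Object convertToConnect(Schema schema, JsonNode jsonValue) {
        if (jsonValue == null || jsonValue.isNull()) {
            if (schema != null && schema.defaultValue() != null)
                return schema.defaultValue();       // fall back to the schema default
            if (schema == null || schema.isOptional())
                return null;                        // null is acceptable for optional schemas
            throw new DataException("Invalid null value for required " + schema.type() + " field");
        }
        // ... dispatch on schema.type() for the non-null case ...
        return jsonValue.asText();
    }
}
```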
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Randall Hauch <rhauch@gmail.com>, Jason Gustafson <jason@confluent.io>
Closes#1872 from shikhar/kafka-4183
The `JsonConverter` class has `LogicalTypeConverter` implementations for Date, Time, Timestamp, and Decimal, but these implementations fail when the input literal value (deserialized from the message) is null.
Tests were added to cover these cases; they failed until the `LogicalTypeConverter` implementations were fixed to consider whether the schema has a default value or is optional, similar to how the `JsonToConnectTypeConverter` implementations do. With those fixes in place, the new tests pass.
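A simplified, hypothetical sketch of the null-aware handling for the Decimal logical type (the real converter applies the same pattern to Date, Time, and Timestamp, and operates on JSON nodes rather than raw bytes):
```java
import org.apache.kafka.connect.data.Decimal;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.errors.DataException;

final class DecimalFromJson {
    static Object toConnect(Schema schema, byte[] serializedValue) {
        if (serializedValue == null) {
            if (schema.defaultValue() != null)
                return schema.defaultValue();       // BigDecimal default attached to the schema
            if (schema.isOptional())
                return null;
            throw new DataException("Invalid null value for required Decimal field");
        }
        // Non-null case: decode the unscaled bytes using the scale carried by the schema.
        return Decimal.toLogical(schema, serializedValue);
    }
}
```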
Author: Randall Hauch <rhauch@gmail.com>
Reviewers: Shikhar Bhushan <shikhar@confluent.io>, Jason Gustafson <jason@confluent.io>
Closes#1867 from rhauch/kafka-4183
Invoke the statusListener.onFailure() callback on start failures so that the statusBackingStore is updated. This involved a fix to the putSafe() functionality, which prevented any update not preceded by a (non-safe) put() from completing, as is the case here when a connector or task transitions directly to FAILED.
Worker start methods can still throw if the same connector name or task ID is already registered with the worker, as this condition should not happen.
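A rough, self-contained sketch of the reporting pattern, using hypothetical `StatusListener` and worker types rather than the actual Connect runtime classes:
```java
// Hypothetical stand-in for the worker's status listener wiring; the point is that
// a failure during startup is reported so the status store reflects FAILED.
interface StatusListener {
    void onStartup(String connectorOrTask);
    void onFailure(String connectorOrTask, Throwable cause);
}

final class WorkerSketch {
    private final StatusListener statusListener;

    WorkerSketch(StatusListener statusListener) {
        this.statusListener = statusListener;
    }

    void start(String name, Runnable startAction) {
        try {
            startAction.run();
            statusListener.onStartup(name);
        } catch (Throwable t) {
            // Previously a start failure could leave the status store untouched;
            // reporting it here lets the connector or task transition directly to FAILED.
            statusListener.onFailure(name, t);
        }
    }
}
```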
Author: Shikhar Bhushan <shikhar@confluent.io>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1778 from shikhar/distherder-stayup-take4
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
Closes#1627 from hachikuji/KAFKA-3888
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Jason Gustafson, Gwen Shapira
Closes#1727 from ewencp/kafka-3847-per-task-producers and squashes the following commits:
7d39724 [Ewen Cheslack-Postava] Add timeout for closing producers.
98ec7f6 [Ewen Cheslack-Postava] KAFKA-3847: Use a separate producer per source task
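The squashed commits amount to giving each source task its own producer and bounding how long closing it can take; a minimal sketch of that shape (the class name and the 30-second timeout are illustrative):
```java
import java.time.Duration;
import java.util.Map;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

// Each source task owns its own producer instead of sharing one across the worker,
// and closing it is bounded so a stuck broker cannot hang task shutdown indefinitely.
final class PerTaskProducer implements AutoCloseable {
    private final KafkaProducer<byte[], byte[]> producer;

    PerTaskProducer(Map<String, Object> producerProps) {
        this.producer = new KafkaProducer<>(producerProps,
                new ByteArraySerializer(), new ByteArraySerializer());
    }

    KafkaProducer<byte[], byte[]> producer() {
        return producer;
    }

    @Override
    public void close() {
        producer.close(Duration.ofSeconds(30));
    }
}
```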
ewencp I went down the list of Connect configs and it looks like only the internal converter configs are mismarked. The `cluster` config that is present in the current docs appears to be gone already. The only other values I could argue for changing the importance of are the SSL configs (marked high), but they match the producer/consumer config docs, so at least they're consistent. Everything else marked high looks to me either mandatory or something that needs consideration in a production deployment.
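For reference, importance is assigned per key when a `ConfigDef` is built; a small illustrative sketch (the keys and levels here are examples, not the worker's actual definitions):
```java
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigDef.Importance;
import org.apache.kafka.common.config.ConfigDef.Type;

public class ImportanceExample {
    public static final ConfigDef CONFIG_DEF = new ConfigDef()
            // Mandatory, production-relevant setting stays HIGH.
            .define("bootstrap.servers", Type.LIST, Importance.HIGH,
                    "Kafka brokers the worker connects to.")
            // Internal converter settings rarely need tuning, so LOW fits better.
            .define("internal.key.converter", Type.CLASS, Importance.LOW,
                    "Converter used for internal offset and config data.");
}
```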
Author: Dustin Cote <dustin@confluent.io>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1653 from cotedm/KAFKA-2932
Fix the test by using a more liberal timeout and forcing more frequent SinkTask.put() calls. Also add some logging to aid future debugging.
Author: Ewen Cheslack-Postava <me@ewencp.org>
Reviewers: Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>
Closes#1663 from ewencp/kafka-3935-fix-restart-system-test
I was just reading the Kafka source code, my favourite Friday afternoon activity, when I found these small grammatical errors in some `DataException` messages.
Could someone please review? ewencp dguy
Author: Laurier Mantel <laurier.mantel@shopify.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1551 from LaurierMantel/maps-typos
And not the containing struct's default value.
The contribution is my original work and I license the work to the project under the project's open source license.
ewencp
Author: Rollulus <roelboel@xs4all.nl>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1528 from rollulus/kafka-3864
The ExecutorService needs to be shut down on close, lest a zombie thread prevent clean shutdown.
ewencp
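A minimal, self-contained sketch of the shutdown pattern (the class and the 10-second timeout are illustrative):
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Shut the executor down on close so its threads cannot keep the process
// alive or leak after the connector stops.
final class BackgroundWork implements AutoCloseable {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    void submit(Runnable task) {
        executor.submit(task);
    }

    @Override
    public void close() throws InterruptedException {
        executor.shutdown();                                  // stop accepting new tasks
        if (!executor.awaitTermination(10, TimeUnit.SECONDS))
            executor.shutdownNow();                           // interrupt anything still running
    }
}
```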
Author: Peter Davis <peter.davis@expeditors.com>
Reviewers: Liquan Pei <liquanpei@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1383 from davispw/KAFKA-3710
Author: Christian Posta <christian.posta@gmail.com>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1401 from christian-posta/ceposta-connect-class-cast-error
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Grant Henke <granthenke@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes#1322 from hachikuji/KAFKA-3659