Improve consumer metric collection by collecting and recording metrics per topic.
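As a rough, hypothetical sketch of the per-topic approach using the client metrics API (the sensor and group names below are invented, not the ones the consumer actually registers):
```scala
import java.util.Collections

import org.apache.kafka.common.metrics.Metrics
import org.apache.kafka.common.metrics.stats.{Avg, Rate}

// Hypothetical helper: one sensor per topic, with the topic recorded as a
// metric tag so each topic reports its own rate/average instead of everything
// being aggregated together.
class PerTopicFetchMetrics(metrics: Metrics) {

  def recordBytesFetched(topic: String, bytes: Int): Unit = {
    val sensorName = s"topic.$topic.bytes-fetched"
    // Reuse the per-topic sensor if it already exists, otherwise register it lazily.
    val sensor = Option(metrics.getSensor(sensorName)).getOrElse {
      val s = metrics.sensor(sensorName)
      val tags = Collections.singletonMap("topic", topic)
      s.add(metrics.metricName("bytes-fetched-rate", "consumer-fetch-sketch", tags), new Rate())
      s.add(metrics.metricName("bytes-fetched-avg", "consumer-fetch-sketch", tags), new Avg())
      s
    }
    sensor.record(bytes.toDouble)
  }
}
```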
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Jason Gustafson <jason@confluent.io>
Closes #1684 from vahidhashemian/KAFKA-4000
There were a couple of important issues fixed in Gradle 3.2.1:
* [GRADLE-3582] - Gradle wrapper fails to escape arguments with nested quotes
* [GRADLE-3583] - Newlines in JAVA_OPTS breaks application plugin shell script in Gradle 3.2
And a number of important issues were fixed in Scala 2.12.1:
* http://www.scala-lang.org/news/2.12.1
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <me@ewencp.org>
Closes #2216 from ijuma/gradle-3.2.1-and-scala-2.12.1
Collecting socket server metrics during shutdown may throw NullPointerException
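A minimal sketch of the defensive pattern, assuming the metric reads mutable server state that shutdown clears (names are illustrative, not the actual SocketServer fields):
```scala
// Illustrative only: a metrics read that dereferences mutable server state
// can race with shutdown, which nulls that state while the metric is still
// registered.
class AcceptorStats {
  @volatile private var processorThreads: Array[Thread] = Array.empty

  // Defensive read: snapshot the reference once and tolerate the shutdown
  // case instead of throwing a NullPointerException.
  def processorCount: Int = {
    val current = processorThreads
    if (current == null) 0 else current.length
  }

  def shutdown(): Unit = processorThreads = null
}
```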
Author: Xavier Léauté <xavier@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes #2221 from xvrl/fix-metrics-npe-on-shutdown
Author: Guozhang Wang <wangguoz@gmail.com>
Reviewers: Damian Guy <damian.guy@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #2121 from guozhangwang/K4392-race-dir-cleanup
The NamedCache wasn't correctly handling its re-entrant nature. This would result in the LRU becoming corrupted and an exception being thrown during eviction. For example:
* Cache A has dirty key 1.
* Eviction runs on Cache A.
* The node for key 1 gets marked as clean.
* The entry for key 1 gets flushed downstream.
* Downstream there is a processor that also refers to the table fronted by Cache A.
* The downstream processor puts key 2 into Cache A.
* This triggers an eviction of key 1 again (it is still the oldest node, as it hasn't been removed from the LRU).
* As the node for key 1 is clean, flush doesn't run and the node is immediately removed from the cache.
* Now the dirty-key set still contains key 1, but the value no longer exists in the cache.
* The downstream processor tries to put key 1 into the cache; this fails because key 1 is in the dirty-key set.
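The fix itself is in NamedCache; as a loose illustration of the invariant involved (all names below are hypothetical), the evicted entry has to be removed from every piece of bookkeeping before the flush callback runs, so a re-entrant put from downstream never sees a half-evicted key:
```scala
import scala.collection.mutable

// Hypothetical, simplified cache; names do not match the real NamedCache.
class ReentrantSafeCache[K, V](maxEntries: Int, flush: (K, V) => Unit) {
  // Insertion-ordered map as a stand-in for the LRU list.
  private val entries = mutable.LinkedHashMap.empty[K, V]
  private val dirtyKeys = mutable.LinkedHashSet.empty[K]

  def put(key: K, value: V): Unit = {
    entries.put(key, value)
    dirtyKeys += key
    maybeEvict()
  }

  private def maybeEvict(): Unit = {
    while (entries.size > maxEntries) {
      val (oldestKey, oldestValue) = entries.head
      // Remove the entry from *all* bookkeeping before flushing, so a
      // re-entrant put() from the flush callback sees a consistent cache and
      // cannot evict or reject this key a second time.
      entries.remove(oldestKey)
      val wasDirty = dirtyKeys.remove(oldestKey)
      if (wasDirty) flush(oldestKey, oldestValue) // may call put() on this cache again
    }
  }
}
```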
Author: Damian Guy <damian.guy@gmail.com>
Reviewers: Eno Thereska, Guozhang Wang
Closes #2226 from dguy/cache-bug
Author: Dong Lin <lindong28@gmail.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jiangjie Qin <becket.qin@gmail.com>
Closes #2170 from lindong28/KAFAK-4445
Resolves
KAFKA-4306: Connect workers won't shut down if brokers are not available
KAFKA-4154: Kafka Connect fails to shutdown if it has not completed startup
Author: Konstantine Karantasis <konstantine@confluent.io>
Reviewers: Shikhar Bhushan <shikhar@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #2201 from kkonstantine/KAFKA-4306-Connect-workers-will-not-shut-down-if-brokers-are-not-available
Instead of throwing `UnsupportedOperationException` from `StandbyTask.recordCollector()`, return a no-op implementation of `RecordCollector`.
Refactored `RecordCollector` into an interface and an implementation.
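A simplified sketch of the resulting shape (hypothetical signatures, not the real Streams API):
```scala
// Simplified, hypothetical interface; the real RecordCollector in Kafka
// Streams carries serializers, a partitioner, etc.
trait RecordCollector {
  def send(topic: String, key: Array[Byte], value: Array[Byte]): Unit
  def flush(): Unit
  def close(): Unit
}

// Standby tasks never forward records downstream, so instead of throwing
// UnsupportedOperationException their collector simply does nothing.
object NoOpRecordCollector extends RecordCollector {
  override def send(topic: String, key: Array[Byte], value: Array[Byte]): Unit = ()
  override def flush(): Unit = ()
  override def close(): Unit = ()
}
```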
Author: Damian Guy <damian.guy@gmail.com>
Reviewers: Eno Thereska, Guozhang Wang
Closes #2212 from dguy/standby-task
Fix possible integer overflow in the offset retention calculation.
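An illustration of this class of bug, assuming a minutes-based retention setting that gets converted to milliseconds:
```scala
// Multiplying Ints overflows silently, so the retention period must be
// promoted to Long before the conversion to milliseconds.
val retentionMinutes = 60000                        // ~41 days, a plausible config value
val broken  = retentionMinutes * 60 * 1000          // Int arithmetic: wraps to -694967296
val correct = retentionMinutes.toLong * 60 * 1000   // Long arithmetic: 3600000000

println(s"broken=$broken correct=$correct")
```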
Author: Kim Christensen <kich@mvno.dk>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes #2200 from kichristensen/MiscalculatedOffsetRetention
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Ismael Juma, Jun Rao, Jiangjie Qin, Guozhang Wang
Closes #2195 from hachikuji/KAFKA-3994-linked-queue
The NPE was caused by `log.logSegments.toArray` resulting in an array containing `null` values. The exact reason remains somewhat of a mystery to me, but it seems that the culprit is `JavaConverters` in combination with concurrent data structure access.
Here's a simple code example to prove that:
```scala
import java.util.concurrent.ConcurrentSkipListMap
// Same as `JavaConversions`, but allows explicit conversions via `asScala`/`asJava` methods.
import scala.collection.JavaConverters._
case object Value
val m = new ConcurrentSkipListMap[Int, Value.type]
new Thread { override def run() = { while (true) m.put(9000, Value) } }.start()
new Thread { override def run() = { while (true) m.remove(9000) } }.start()
new Thread { override def run() = { while (true) { println(m.values.asScala.toArray.headOption) } } }.start()
```
Running the example will occasionally print `Some(null)` indicating that there's something shady going on during `toArray` conversion.
`null`s magically disappear by making the following change:
```diff
- println(m.values.asScala.toArray.headOption)
+ println(m.values.asScala.toSeq.headOption)
```
Author: Anton Karamanov <ataraxer@yandex-team.ru>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
Closes #2204 from ataraxer/KAFKA-4205
Also:
* Make all implementations of `Time` thread-safe as they are accessed from multiple threads in some cases.
* Change the default implementation of `MockTime` to use two separate variables for `nanoTime` and `currentTimeMillis`, as they have different origins.
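A minimal sketch of the idea (not the real MockTime, whose API is richer):
```scala
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicLong

// Simplified sketch: wall-clock millis and the monotonic nano counter have
// different origins, so they are tracked in two independent, atomically
// updated variables, which also keeps the mock clock thread-safe.
class SimpleMockTime(startMs: Long, startNs: Long) {
  private val timeMs = new AtomicLong(startMs)
  private val nanos  = new AtomicLong(startNs)

  def milliseconds(): Long = timeMs.get()
  def nanoseconds(): Long  = nanos.get()

  // Advancing the clock bumps both counters consistently and is safe to call
  // from multiple threads.
  def sleep(ms: Long): Unit = {
    timeMs.addAndGet(ms)
    nanos.addAndGet(TimeUnit.MILLISECONDS.toNanos(ms))
  }
}
```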
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>, Shikhar Bhushan <shikhar@confluent.io>, Jason Gustafson <jason@confluent.io>, Eno Thereska <eno.thereska@gmail.com>, Damian Guy <damian.guy@gmail.com>
Closes #2095 from ijuma/kafka-2247-consolidate-time-interfaces
Author: Alexey Ozeritsky <aozeritsky@yandex-team.ru>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Jason Gustafson <jason@confluent.io>
Closes #2125 from resetius/KAFKA-4399
Removed a stale comment left behind, made minor fixes (UpdateMetadataRequest instead of MetadataUpdateRequest) and removed redundant comments.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jiangjie (Becket) Qin <becket.qin@gmail.com>, Dong Lin <lindong28@gmail.com>
Closes #2194 from ijuma/kafka-4443-minor-follow-up
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Ismael Juma, Dan Norwood, Xavier Léauté, Damian Guy, Michael G. Noll, Matthias J. Sax, Guozhang Wang
Closes #2135 from enothereska/KAFKA-3637-streams-state
Author: Jason Gustafson <jason@confluent.io>
Reviewers: Ismael Juma <ismael@juma.me.uk>, Guozhang Wang <wangguoz@gmail.com>
Closes #2190 from hachikuji/KAFKA-4469
Without this fix, the new consumer fails to run on a 32-bit Windows OS.
Author: Vahid Hashemian <vahidhashemian@us.ibm.com>
Reviewers: Jason Gustafson, Guozhang Wang
Closes #2189 from vahidhashemian/KAFKA-4271
The last patch submitted by MayureshGharat (back in Dec 15) has been rebased to the latest trunk. I took care of a couple of test failures (MetricsTest) along the way. jjkoshy, granders, avianey, you may be interested in this PR.
Author: Sumant Tambe <sutambe@yahoo.com>
Author: Mayuresh Gharat <mgharat@mgharat-ld1.linkedin.biz>
Author: MayureshGharat <gharatmayuresh15@gmail.com>
Reviewers: Joel Koshy <jjkoshy.w@gmail.com>
Closes #1664 from sutambe/async-delete-topic
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Damian Guy <damian.guy@gmail.com>, Guozhang Wang <wangguoz@gmail.com>
Closes #2171 from enothereska/KAFKA-4427-topicgroups-with-no-tasks
Author: Dong Lin <lindong28@gmail.com>
Reviewers: Jiangjie Qin <becket.qin@gmail.com>, Jun Rao <junrao@gmail.com>
Closes #2168 from lindong28/KAFKA-4443
This reverts commit e035fc0395 for the following reasons:
1. License files are missing, causing local builds to fail during the rat task (rat is not being run in Jenkins for some reason; filed KAFKA-4459 for that).
2. It renames a number of system test files when there's a better way to achieve the goal of running a subset of system tests to stay under the Travis limit.
3. It adds the gradle wrapper binary even though this was removed intentionally a while back.
A new PR will be submitted for KAFKA-4345 without the undesired changes.
Author: Ismael Juma <ismael@juma.me.uk>
Reviewers: Jason Gustafson <jason@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>
Closes #2187 from ijuma/kafka-4345-revert
Author: Jun He <jun.he@airbnb.com>
Reviewers: Jiangjie (Becket) Qin <becket.qin@gmail.com>, Jun Rao <junrao@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #2127 from jun-he/KAFKA-4384
As of now, the ducktape tests that we have for Kafka are not run for pull requests. We can run these tests using Travis CI. Here is a sample run:
https://travis-ci.org/raghavgautam/kafka/builds/170574293
Author: Raghav Kumar Gautam <raghav@apache.org>
Reviewers: Sriharsha Chintalapani <harsha@hortonworks.com>
Closes #2064 from raghavgautam/trunk
- Bug-fix follow-up.
- The resetter fails if no intermediate topic is used, because seekToEnd() commits ALL partitions to the end of the log (see the sketch below).
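A sketch of the guard, relying on the consumer's documented behaviour that seekToEnd() with an empty collection applies to all currently assigned partitions (the method and parameter names below are illustrative, not the actual StreamsResetter code):
```scala
import scala.collection.JavaConverters._

import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

// KafkaConsumer.seekToEnd(...) treats an empty collection as "all currently
// assigned partitions", so calling it with an empty intermediate-topic list
// would seek (and later commit) every partition to the end of the log.
def seekIntermediateTopicsToEnd(consumer: KafkaConsumer[Array[Byte], Array[Byte]],
                                intermediatePartitions: Seq[TopicPartition]): Unit = {
  if (intermediatePartitions.nonEmpty)
    consumer.seekToEnd(intermediatePartitions.asJava)
  // else: no intermediate topics to reset, so do not call seekToEnd at all.
}
```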
Author: Matthias J. Sax <matthias@confluent.io>
Reviewers: Michael G. Noll, Roger Hoover, Guozhang Wang
Closes #2138 from mjsax/kafka-4331-streams-resetter-bugfix
Fixes a static initialization order dependency between KafkaConfig and LogConfig. jjkoshy, please take a look.
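As an illustration of the hazard with hypothetical objects (not the real KafkaConfig/LogConfig, and the actual patch may resolve it differently):
```scala
// Scala object bodies run on first access, so when two objects read each
// other's fields during initialization, the values observed depend on which
// object happens to be touched first.
object LogDefaults {
  val SegmentMs: java.lang.Long = ServerDefaults.LogSegmentMs // cross-object read during init
  val RetentionMs: java.lang.Long = 604800000L
}

object ServerDefaults {
  val LogSegmentMs: java.lang.Long = LogDefaults.RetentionMs  // cross-object read during init
}

object InitOrderDemo extends App {
  // Touching LogDefaults first forces ServerDefaults while LogDefaults is
  // still mid-initialization, so it reads RetentionMs as null and that null
  // flows back into SegmentMs. Touching ServerDefaults first would instead
  // leave LogSegmentMs correctly populated, i.e. the result depends on
  // class-load order.
  println(LogDefaults.SegmentMs)  // prints null
  // One common remedy is to make the eagerly-read field (SegmentMs here) a
  // lazy val or a def, deferring the cross-object read until both objects
  // have finished initializing; removing the cycle also works.
}
```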
Author: Sumant Tambe <sutambe@yahoo.com>
Reviewers: Ismael Juma <ismael@juma.me.uk>
Closes #2120 from sutambe/logconfig-static-init
Author: MayureshGharat <gharatmayuresh15@gmail.com>
Reviewers: Jiangjie Qin <becket.qin@gmail.com>, Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>, Jun Rao <junrao@gmail.com>
Closes #2116 from MayureshGharat/KAFKA-4362
Author: Eno Thereska <eno.thereska@gmail.com>
Reviewers: Damian Guy, Matthias J. Sax, Guozhang Wang
Closes #2133 from enothereska/KAFKA-4355-topic-not-found
Author: Antony Stubbs <antony.stubbs@gmail.com>
Reviewers: Eno Thereska <eno.thereska@gmail.com>, Ismael Juma <ismael@juma.me.uk>
Closes #2157 from astubbs/trunk